Sample records for limits multiresolution analyses

  1. Adaptive multi-resolution Modularity for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He

    2018-02-01

    Community structure is a common topological property of complex networks, which has attracted much attention from various fields. Optimizing quality functions for community structures is a popular strategy for community detection, such as Modularity optimization. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularity functions can be derived, and then propose an adaptive (multi-resolution) Modularity that combines the advantages of the different Modularity functions. By applying these Modularity functions to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of multi-resolution Modularity in community detection. The adaptive Modularity, as a multi-resolution method, can naturally overcome the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnection of communities and delay the breakup of communities in heterogeneous networks; it is therefore expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type limit of Modularity.

  2. Short-Term Solar Flare Prediction Using Multiresolution Predictors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Huang Xin; Hu Qinghua

    2010-01-20

    Multiresolution predictors of solar flares are constructed by a wavelet transform and a sequential feature extraction method. Three predictors - the maximum horizontal gradient, the length of the neutral line, and the number of singular points - are extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. A maximal overlap discrete wavelet transform is used to decompose the sequence of predictors into four frequency bands. In each band, four sequential features - the maximum, the mean, the standard deviation, and the root mean square - are extracted. The multiresolution predictors in the low-frequency band reflect trends in the evolution of newly emerging fluxes. The multiresolution predictors in the high-frequency band reflect the changing rates in emerging flux regions. The variation of emerging fluxes is decoupled by the wavelet transform into different frequency bands. The information content of these multiresolution predictors is evaluated by the information gain ratio. It is found that the multiresolution predictors in the lowest and highest frequency bands contain the most information. Based on these predictors, a C4.5 decision tree algorithm is used to build the short-term solar flare prediction model. It is found that the performance of the short-term solar flare prediction model based on the multiresolution predictors is greatly improved.
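
    A minimal sketch of the band-wise sequential feature extraction described above, assuming a 1-D predictor sequence (for example, daily values of the maximum horizontal gradient). PyWavelets' stationary wavelet transform (pywt.swt) is used here as a stand-in for the maximal overlap DWT, and the wavelet name, decomposition level, and function names are illustrative assumptions rather than the authors' choices.

    ```python
    import numpy as np
    import pywt

    def band_features(x, wavelet="db4", level=4):
        """Return {band: (max, mean, std, rms)} for each wavelet band of the sequence x."""
        x = np.asarray(x, dtype=float)
        pad = (-len(x)) % (2 ** level)                 # swt needs a length divisible by 2**level
        xp = np.pad(x, (0, pad), mode="edge")
        coeffs = pywt.swt(xp, wavelet, level=level)    # [(cA_L, cD_L), ..., (cA_1, cD_1)]
        feats = {}
        for i, (_, cD) in enumerate(coeffs):
            band = cD[: len(x)]
            feats[f"D{level - i}"] = (band.max(), band.mean(), band.std(),
                                      np.sqrt(np.mean(band ** 2)))
        smooth = coeffs[0][0][: len(x)]                # coarsest (low-frequency) band
        feats["A"] = (smooth.max(), smooth.mean(), smooth.std(),
                      np.sqrt(np.mean(smooth ** 2)))
        return feats
    ```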

  3. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.

  4. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  5. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions at different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries for the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified by conducting simulations of channel flows and free-surface flows.

  6. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  7. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing times as well as technical difficulties in analyzing the data. The excessive storage demands a correspondingly huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has a rather simple setting, computational field simulation data needs more careful treatment when applying the multiresolution technique. While image data sits on a regularly spaced grid, simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities only in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among the variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  8. Morphological filtering and multiresolution fusion for mammographic microcalcification detection

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-04-01

    Mammographic images are often of relatively low contrast and poor sharpness, with a non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered to be non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transform to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region-growing techniques are used at each resolution level. The parent-child relations between resolution levels are adopted to make the final detection decision. An FROC curve is computed from tests on the Nijmegen database.
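
    The background-suppression (prewhitening) step can be illustrated with a gray-scale morphological top-hat, which removes structures larger than the structuring element and keeps small bright details such as microcalcifications. This is a hedged sketch, not the paper's exact filter; the elliptical kernel and its size are assumptions.

    ```python
    import cv2

    def prewhiten(mammogram_u8, kernel_size=15):
        """Suppress slowly varying background, keeping small bright structures."""
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        # top-hat = image - morphological opening, i.e. everything the opening removed
        return cv2.morphologyEx(mammogram_u8, cv2.MORPH_TOPHAT, se)
    ```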

  9. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Detection and segmentation of salient objects have gained increasing research interest in recent years. A saliency map can be obtained from the different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize the parameters of object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models introduce multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iterations or additional interactions to obtain precise results when pixel types are not predefined. We introduce the concept of a multiresolution saliency map. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.
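
    A small sketch of how a saliency map can seed GrabCut with an automatic pixel-label mask instead of a user-drawn rectangle, as the abstract suggests. The saliency thresholds and the function name are illustrative assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    def grabcut_from_saliency(image_bgr, saliency, iters=5):
        """image_bgr: uint8 HxWx3 image; saliency: float map in [0, 1] of the same height/width."""
        mask = np.full(saliency.shape, cv2.GC_PR_BGD, dtype=np.uint8)
        mask[saliency > 0.4] = cv2.GC_PR_FGD   # probably salient -> probable foreground
        mask[saliency > 0.7] = cv2.GC_FGD      # strongly salient -> definite foreground
        mask[saliency < 0.1] = cv2.GC_BGD      # clearly non-salient -> definite background
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                    iters, cv2.GC_INIT_WITH_MASK)
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    ```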

  10. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High-quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy, which are generated by down-sampling eight neighboring nodes on the finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree
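
    A toy sketch of the bricking and down-sampling steps described above: the volume is cut into fixed-size bricks that act as leaf nodes, and a coarser level is built by averaging 2x2x2 groups of samples. The brick size and function names are assumptions; the real system also builds the octree index and the GPU texture atlas, which are omitted here.

    ```python
    import numpy as np

    def make_bricks(volume, brick=32):
        """Split a 3-D array into fixed-size bricks keyed by their grid position."""
        bricks = {}
        nz, ny, nx = volume.shape
        for z in range(0, nz, brick):
            for y in range(0, ny, brick):
                for x in range(0, nx, brick):
                    bricks[(z // brick, y // brick, x // brick)] = \
                        volume[z:z + brick, y:y + brick, x:x + brick]
        return bricks

    def coarser_level(volume):
        """Average 2x2x2 groups of samples to build the next coarser level."""
        nz, ny, nx = (d - d % 2 for d in volume.shape)
        v = volume[:nz, :ny, :nx]
        return v.reshape(nz // 2, 2, ny // 2, 2, nx // 2, 2).mean(axis=(1, 3, 5))
    ```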

  11. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  12. An ROI multi-resolution compression method for 3D-HEVC

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of the video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and compression of multi-resolution preprocessed videos as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined using the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
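
    An illustrative sketch of the non-ROI replacement step: outside the ROI mask, the high-resolution frame is overwritten with an up-sampled low-resolution reconstruction, so the encoder spends most of its bits on the ROI. Bicubic interpolation is used here as a stand-in for the paper's Laplace-based up-sampling, and the function and argument names are assumptions.

    ```python
    import cv2
    import numpy as np

    def roi_preprocess(frame_hr, frame_lr, roi_mask):
        """frame_hr: HxWx3 frame; frame_lr: lower-resolution frame; roi_mask: HxW boolean mask."""
        up = cv2.resize(frame_lr, (frame_hr.shape[1], frame_hr.shape[0]),
                        interpolation=cv2.INTER_CUBIC)
        out = up.copy()
        out[roi_mask] = frame_hr[roi_mask]      # keep full detail only inside the ROI
        return out
    ```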

  13. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  14. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, P-T; Edelsbrunner, H; Hamann, B

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  15. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  16. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
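
    The key idea of the two records above can be sketched in a few lines: at a coarse level, the color of a voxel is obtained by integrating the transfer function against the voxel's intensity pdf, instead of looking up a single averaged intensity. The names, shapes and binning are assumptions, not the paper's implementation (which stores the pdfs sparsely as mixtures of 4D Gaussians).

    ```python
    import numpy as np

    def shade_from_pdf(pdf_bins, tf_rgba):
        """pdf_bins: (n_bins,) pdf over intensity; tf_rgba: (n_bins, 4) transfer function table."""
        pdf = pdf_bins / pdf_bins.sum()
        return pdf @ tf_rgba                    # expected RGBA under the voxel's intensity pdf

    def shade_from_mean(mean_intensity, tf_rgba):
        """Conventional down-sampling: look the averaged intensity (in [0, 1]) up in the TF."""
        idx = int(round(mean_intensity * (len(tf_rgba) - 1)))
        return tf_rgba[min(max(idx, 0), len(tf_rgba) - 1)]
    ```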

  17. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.

    2016-01-01

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  18. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. Multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. It provides new horizons and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request.
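
    A hedged sketch of the two ingredients the abstract describes: wavelet-based multi-resolution features of each expression time course, clustered with a mixture model. scikit-learn's GaussianMixture stands in for the authors' shape mixture model, and the wavelet, level and number of groups are assumptions.

    ```python
    import numpy as np
    import pywt
    from sklearn.mixture import GaussianMixture

    def wavelet_features(profiles, wavelet="db1", level=3):
        """profiles: (n_genes, n_timepoints) -> concatenated multi-level wavelet coefficients."""
        feats = [np.concatenate(pywt.wavedec(p, wavelet, level=level)) for p in profiles]
        return np.vstack(feats)

    def cluster_profiles(profiles, n_groups=8, seed=0):
        X = wavelet_features(np.asarray(profiles, dtype=float))
        gm = GaussianMixture(n_components=n_groups, covariance_type="diag",
                             random_state=seed).fit(X)
        return gm.predict(X)
    ```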

  19. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE PAGES

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...

    2016-01-01

    We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  20. Survey and analysis of multiresolution methods for turbulence data

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; ...

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and dual-tree wavelets, curvelets and surfacelets, in capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a direct numerical simulation of buoyancy-driven turbulence on a 512^3 mesh, with an Atwood number A = 0.05 and a turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for handling large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
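
    A simple sketch of the truncation experiment for one 3-D field: decompose with a chosen wavelet, keep only the largest coefficients, reconstruct, and report the relative L2 error. Daubechies wavelets via PyWavelets stand in for the full set of bases compared in the paper (curvelets and surfacelets require other libraries); the wavelet and the kept fraction are assumptions.

    ```python
    import numpy as np
    import pywt

    def truncated_error(field, wavelet="db4", keep=0.05):
        """Relative L2 reconstruction error when only the top `keep` fraction of coefficients is retained."""
        coeffs = pywt.wavedecn(field, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr_t = np.where(np.abs(arr) >= thresh, arr, 0.0)
        rec = pywt.waverecn(pywt.array_to_coeffs(arr_t, slices, output_format="wavedecn"),
                            wavelet)
        rec = rec[tuple(slice(s) for s in field.shape)]   # trim possible boundary padding
        return np.linalg.norm(rec - field) / np.linalg.norm(field)
    ```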

  21. The Incremental Multiresolution Matrix Factorization Algorithm

    PubMed Central

    Ithapu, Vamsi K.; Kondor, Risi; Johnson, Sterling C.; Singh, Vikas

    2017-01-01

    Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct “global” factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision. PMID:29416293

  22. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point-cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which to our knowledge is the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
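
    The rigidity-density construction used for the filtration can be sketched as a Gaussian flexibility-rigidity kernel of width eta evaluated on a grid; eta is the resolution parameter that is tuned to focus on a given scale. This is a schematic sketch only; the persistent-homology step on the resulting density (e.g. via a cubical-complex library) is omitted, and the kernel form and parameter value are assumptions.

    ```python
    import numpy as np

    def rigidity_density(atoms, grid_points, eta=3.0):
        """atoms: (N, 3) coordinates; grid_points: (M, 3); returns a density value per grid point."""
        d2 = ((grid_points[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / eta ** 2).sum(axis=1)   # larger eta -> coarser resolution
    ```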

  23. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.

  24. Spider-web inspired multi-resolution graphene tactile sensor.

    PubMed

    Liu, Lu; Huang, Yu; Li, Fengyu; Ma, Ying; Li, Wenbo; Su, Meng; Qian, Xin; Ren, Wanjie; Tang, Kanglai; Song, Yanlin

    2018-05-08

    Multi-dimensional accurate response and smooth signal transmission are critical challenges in the advancement of multi-resolution recognition and complex environment analysis. Inspired by the structure-activity relationship between discrepant microstructures of the spiral and radial threads in a spider web, we designed and printed graphene with porous and densely-packed microstructures to integrate into a multi-resolution graphene tactile sensor. The three-dimensional (3D) porous graphene structure performs multi-dimensional deformation responses. The laminar densely-packed graphene structure contributes excellent conductivity with flexible stability. The spider-web inspired printed pattern inherits orientational and locational kinesis tracking. The multi-structure construction with homo-graphene material can integrate discrepant electronic properties with remarkable flexibility, which will attract enormous attention for electronic skin, wearable devices and human-machine interactions.

  25. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  26. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion, or of hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  27. LOD map--A visual interface for navigating multiresolution volume visualization.

    PubMed

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
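
    A minimal sketch of an entropy-style LOD quality score built from per-block distortion and contribution values, following the abstract's description; the exact weighting and normalization used in the paper may differ.

    ```python
    import numpy as np

    def lod_entropy(distortion, contribution, eps=1e-12):
        """distortion, contribution: 1-D arrays over the data blocks of the current LOD."""
        w = np.asarray(distortion) * np.asarray(contribution)
        p = w / (w.sum() + eps)
        h = -(p * np.log2(p + eps)).sum()
        return float(h / max(np.log2(len(p)), eps))   # normalized to [0, 1]
    ```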

  28. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of the Gabor wavelet frequencies. High precision in detecting the needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of the needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), with better robustness and confidence confirmed in the practical experiments.

  29. Multiresolution analysis of Bursa Malaysia KLCI time series

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using time domain as well as frequency domain analysis. After that, prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT) is used to pinpoint special characteristics of Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
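
    A short sketch of a multiresolution additive decomposition of a return series: each wavelet level is reconstructed on its own, so the series is expressed as a sum of components at different time scales that can then be inspected or fed to an ARIMA model. The wavelet and level are assumptions; a MODWT-based variant would follow the same pattern.

    ```python
    import numpy as np
    import pywt

    def mra_components(returns, wavelet="sym4", level=4):
        """Return the smooth trend plus one detail series per wavelet level; they sum to the input."""
        returns = np.asarray(returns, dtype=float)
        coeffs = pywt.wavedec(returns, wavelet, level=level)
        parts = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(keep, wavelet)[: len(returns)])
        return parts   # parts[0] is the smooth component, the rest are detail series
    ```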

  30. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.

  31. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  32. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
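
    A hedged sketch of the emulation setup: a random forest is trained on parameter settings plus a resolution indicator, using many cheap low-resolution runs and a few high-resolution runs, and is then queried with the high-resolution flag set. The resolution-flag feature is one simple way to let the forest leverage the cheap runs; variable names, shapes and hyperparameters are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_multires_emulator(params_lo, y_lo, params_hi, y_hi, seed=0):
        """params_*: (n_runs, n_params) perturbed-parameter settings; y_*: scalar outputs per run."""
        X = np.vstack([np.column_stack([params_lo, np.zeros(len(y_lo))]),   # 0 = low resolution
                       np.column_stack([params_hi, np.ones(len(y_hi))])])   # 1 = high resolution
        y = np.concatenate([y_lo, y_hi])
        return RandomForestRegressor(n_estimators=500, random_state=seed).fit(X, y)

    def predict_high_res(model, params_new):
        X_new = np.column_stack([params_new, np.ones(len(params_new))])
        return model.predict(X_new)
    ```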

  33. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, the multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  34. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (third year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.

  35. Multi-resolution extension for transmission of geodata in a mobile context

    NASA Astrophysics Data System (ADS)

    Follin, Jean-Michel; Bouju, Alain; Bertrand, Frédéric; Boursier, Patrice

    2005-03-01

    A solution is proposed for the management of multi-resolution vector data in a mobile spatial information visualization system. The client-server architecture and the models of data and transfer of the system are presented first. The aim of this system is to reduce data exchanged between client and server by reusing data already present on the client side. Then, an extension of this system to multi-resolution data is proposed. Our solution is based on the use of increments in a multi-scale database. A database architecture where data sets for different predefined scales are precomputed and stored on the server side is adopted. In this model, each object representing the same real world entities at different levels of detail has to be linked beforehand. Increments correspond to the difference between two datasets with different levels of detail. They are transmitted in order to increase (or decrease) the detail to the client upon request. They include generalization and refinement operators allowing transitions between the different levels. Finally, a framework suited to the transfer of multi-resolution data in a mobile context is presented. This allows reuse of data locally available at different levels of detail and, in this way, reduces the amount of data transferred between client and server.
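
    An illustrative sketch of the increment idea: the client holds a dataset at one level of detail and applies a transmitted increment (objects added by refinement, objects dropped by generalization, geometry replacements) to move to an adjacent level, instead of downloading the full target level. The field names are assumptions, not the paper's data model.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Increment:
        add: dict = field(default_factory=dict)      # id -> new object (refinement)
        remove: set = field(default_factory=set)     # ids dropped (generalization)
        replace: dict = field(default_factory=dict)  # id -> simplified or refined geometry

    def apply_increment(dataset: dict, inc: Increment) -> dict:
        """Move a client-side dataset {id: object} to the adjacent level of detail."""
        out = {k: v for k, v in dataset.items() if k not in inc.remove}
        out.update(inc.replace)
        out.update(inc.add)
        return out
    ```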

  36. Quantitative analysis of vascular parameters for micro-CT imaging of vascular networks with multi-resolution.

    PubMed

    Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie

    2016-03-01

    Previous studies showed that all the vascular parameters, both morphological and topological, were affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters at multiple resolutions nor the distinguishability of vascular parameters between different data groups has been analyzed. In this paper, we propose a quantitative analysis method of vascular parameters for vascular networks at multiple resolutions, by analyzing the sensitivity of vascular parameters at multiple resolutions and estimating the distinguishability of vascular parameters between different data groups. Combining the sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish the vascular networks of different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution changes but could also better distinguish vascular networks of different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.

  37. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.

  38. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  39. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WorldView-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and corresponding image segments, was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
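
    Small helpers for two of the discrepancy indices named above, under the commonly used definitions NSR = |m - v| / m (m reference polygons, v corresponding segments) and ED2 = sqrt(PSE^2 + NSR^2); PSE itself (the area of segment parts falling outside their reference polygons relative to the total reference area) is taken as a precomputed input. These definitions are assumed from the index names, not quoted from the paper.

    ```python
    import math

    def nsr(n_reference: int, n_matched_segments: int) -> float:
        """Number-of-Segments Ratio: relative mismatch in segment count."""
        return abs(n_reference - n_matched_segments) / n_reference

    def ed2(pse: float, nsr_value: float) -> float:
        """Euclidean Distance 2: combines geometric (PSE) and arithmetic (NSR) discrepancy."""
        return math.hypot(pse, nsr_value)
    ```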

  20. Design of near-field irregular diffractive optical elements by use of a multiresolution direct binary search method.

    PubMed

    Li, Jia-Han; Webb, Kevin J; Burke, Gerald J; White, Daniel A; Thompson, Charles A

    2006-05-01

    A multiresolution direct binary search iterative procedure is used to design small dielectric irregular diffractive optical elements that have subwavelength features and achieve near-field focusing below the diffraction limit. Designs with a single focus or with two foci, depending on wavelength or polarization, illustrate the possible functionalities available from the large number of degrees of freedom. These examples suggest that the concept of such elements may find applications in near-field lithography, wavelength-division multiplexing, spectral analysis, and polarization beam splitters.
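    The record names direct binary search (DBS) without algorithmic detail; the sketch below shows only the generic greedy DBS loop (toggle one binary pixel at a time and keep the toggle if a user-supplied cost decreases), not the authors' multiresolution variant. The electromagnetic merit function is replaced by a placeholder cost.

      import numpy as np

      def direct_binary_search(design, cost_fn, max_sweeps=10, rng=None):
          """Generic DBS: toggle one binary element at a time, keep toggles that
          lower cost_fn(design); stop when a full sweep brings no improvement."""
          rng = np.random.default_rng(rng)
          design = design.copy()
          best = cost_fn(design)
          for _ in range(max_sweeps):
              improved = False
              for idx in rng.permutation(design.size):
                  flat = design.ravel()
                  flat[idx] ^= 1                    # toggle pixel
                  trial = cost_fn(design)
                  if trial < best:
                      best, improved = trial, True  # keep the flip
                  else:
                      flat[idx] ^= 1                # revert
              if not improved:
                  break
          return design, best

      # toy cost: match a random binary target (stands in for a field-solver merit function)
      target = (np.random.rand(16, 16) > 0.5).astype(np.uint8)
      doe, cost = direct_binary_search(np.zeros_like(target), lambda d: np.sum(d != target))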

  1. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as all other input and output variables, 3) an accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes in a way closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in the lines where the solution changes rapidly. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  2. MR-CDF: Managing multi-resolution scientific data

    NASA Technical Reports Server (NTRS)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.

  3. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
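    The Hausdorff similarity measure is the most concretely specified ingredient of the matching step; a minimal sketch of a symmetric Hausdorff distance between edge-point sets (not the full MR-GA pipeline) could look like this:

      import numpy as np
      from scipy.spatial.distance import cdist

      def hausdorff(a, b):
          """Symmetric Hausdorff distance between point sets of shape (N, 2) and (M, 2)."""
          d = cdist(a, b)                       # pairwise Euclidean distances
          return max(d.min(axis=1).max(),       # farthest a-point from its nearest b-point
                     d.min(axis=0).max())       # and vice versa

      # toy usage: reference tooth edge points vs. affinely transformed query points
      ref = np.random.rand(200, 2)
      theta = np.deg2rad(5.0)
      A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
      query = ref @ A.T + np.array([0.02, -0.01])
      print(hausdorff(ref, query))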

  4. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  5. Multisensor multiresolution data fusion for improvement in classification

    NASA Astrophysics Data System (ADS)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancements in technology have facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent significant information which may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse resolution hyperspectral (1 m) LWIR and fine resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling of the data by an interpolation technique to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. In the case of the fine resolution RGB data, the vegetation index is computed for classifying the vegetation class and the morphological building index is calculated for buildings. In order to extract the textural features, occurrence and co-occurrence statistics are considered, and the features are extracted from all three bands of the RGB data. After extracting the features, a Support Vector Machine (SVM) is used for training and classification. To increase the classification accuracy, post-processing steps such as removal of spurious (salt-and-pepper) noise are applied, followed by a filtering process based on majority voting within the objects for better object classification.

  6. Multiresolution MR elastography using nonlinear inversion

    PubMed Central

    McGarry, M. D. J.; Van Houten, E. E. W.; Johnson, C. L.; Georgiadis, J. G.; Sutton, B. P.; Weaver, J. B.; Paulsen, K. D.

    2012-01-01

    Purpose: Nonlinear inversion (NLI) in MR elastography requires discretization of the displacement field for a finite element (FE) solution of the “forward problem”, and discretization of the unknown mechanical property field for the iterative solution of the “inverse problem”. The resolution requirements for these two discretizations are different: the forward problem requires sufficient resolution of the displacement FE mesh to ensure convergence, whereas lowering the mechanical property resolution in the inverse problem stabilizes the mechanical property estimates in the presence of measurement noise. Previous NLI implementations use the same FE mesh to support the displacement and property fields, requiring a trade-off between the competing resolution requirements. Methods: This work implements and evaluates multiresolution FE meshes for NLI elastography, allowing independent discretizations of the displacements and each mechanical property parameter to be estimated. The displacement resolution can then be selected to ensure mesh convergence, and the resolution of the property meshes can be independently manipulated to control the stability of the inversion. Results: Phantom experiments indicate that eight nodes per wavelength (NPW) are sufficient for accurate mechanical property recovery, whereas mechanical property estimation from 50 Hz in vivo brain data stabilizes once the displacement resolution reaches 1.7 mm (approximately 19 NPW). Viscoelastic mechanical property estimates of in vivo brain tissue show that subsampling the loss modulus while holding the storage modulus resolution constant does not substantially alter the storage modulus images. Controlling the ratio of the number of measurements to unknown mechanical properties by subsampling the mechanical property distributions (relative to the data resolution) improves the repeatability of the property estimates, at a cost of modestly decreased spatial resolution. Conclusions: Multiresolution

  7. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    NASA Astrophysics Data System (ADS)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because the signature of islanding is easily confounded by grid disturbances, an islanding detection device may make misjudgments, with the consequence that photovoltaic systems are taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbances. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing method applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which we can extract the intrinsically different features of islanding and grid disturbance. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that photovoltaic systems mistakenly withdrawing from power grids can be avoided.
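    A plausible reading of the feature is the Shannon entropy of the normalized singular spectrum of each wavelet sub-band; the sketch below follows that reading, with the wavelet, decomposition level, and trajectory-matrix embedding length being assumptions rather than the paper's settings.

      import numpy as np
      import pywt

      def singular_spectrum_entropy(x, embed=20):
          """Shannon entropy of the normalized singular values of the trajectory matrix."""
          n = len(x) - embed + 1
          traj = np.stack([x[i:i + embed] for i in range(n)])   # Hankel/trajectory matrix
          s = np.linalg.svd(traj, compute_uv=False)
          p = s / s.sum()
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      def multiresolution_sse(signal, wavelet="db4", level=4):
          """One entropy value per wavelet sub-band (approximation + details)."""
          return [singular_spectrum_entropy(band) for band in pywt.wavedec(signal, wavelet, level=level)]

      # toy usage on a synthetic voltage-like waveform
      t = np.linspace(0, 0.2, 2000)
      v = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
      features = multiresolution_sse(v)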

  8. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO), has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), and provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  9. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structuring flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs). Dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks. The methodology deals with the difficulties due to non-uniform heating, which affects low spatial frequencies and hinders the detection of relevant points in the image. Specifically, a methodology for ROI extraction in INDT using multi-resolution analysis is proposed, which is robust to low ROI contrast and non-uniform heating. The methodology includes local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs. We use a Gaussian window because thermal behavior is well modeled by Gaussian smooth contours. Also, the Gaussian scale is used to analyze details in the image using multi-resolution analysis, avoiding problems with low contrast, non-uniform heating and selection of the Gaussian window size. Finally, local edge detection is used to provide a good estimation of the boundaries of the ROI. Thus, we provide a methodology for ROI extraction based on multi-resolution analysis that performs as well as or better than other dedicated algorithms proposed in the state of the art.

  10. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

    A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing a minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel size, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.

  11. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  12. Automated transformation-invariant shape recognition through wavelet multiresolution

    NASA Astrophysics Data System (ADS)

    Brault, Patrice; Mounier, Hugues

    2001-12-01

    We present here new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, namely rotation, translation and scaling. It also has to be invariant to more local, elastic deformations such as perspective (in particular with wide-angle camera lenses), deformations due to environmental conditions (weather: rain, mist, light reverberation), and optical and electrical signal noise. To demonstrate our method, an initial shape, with a known contour, is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed for each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of Wavelet-MRA in this application of shape recognition.
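    The abstract specifies only that the contour's curvature signature is decomposed with an orthonormal wavelet MRA; a rough sketch of that idea (curvature from finite differences, a descriptor from band-wise energies) is given below, with the wavelet family, level count, and descriptor choice being assumptions.

      import numpy as np
      import pywt

      def curvature(contour):
          """Signed curvature of a closed contour given as (N, 2) points."""
          x, y = contour[:, 0], contour[:, 1]
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          return (dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)

      def wavelet_descriptor(contour, wavelet="db2", level=4):
          """Scale-wise RMS energies of the curvature signal; coarse bands tolerate
          noise and small elastic deformations, fine bands capture detail."""
          coeffs = pywt.wavedec(curvature(contour), wavelet, level=level)
          return np.array([np.sqrt(np.mean(c**2)) for c in coeffs])

      # toy usage: a resampled circular contour
      theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
      print(wavelet_descriptor(circle))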

  13. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    NASA Astrophysics Data System (ADS)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.

  14. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer.

    PubMed

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-10

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.

  15. A qualitative multiresolution model for counterterrorism

    NASA Astrophysics Data System (ADS)

    Davis, Paul K.

    2006-05-01

    This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.

  16. Multi-resolution model-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Camplani, Massimo

    2012-06-01

    In this paper we propose an innovative approach to tackle the problem of traffic sign detection using a computer vision algorithm, taking into account real-time operation constraints and establishing intelligent strategies to simplify the algorithm complexity as much as possible and to speed up the process. Firstly, a set of candidates is generated according to a color segmentation stage, followed by a region analysis strategy, where spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Taking time constraints into consideration, efficiency is achieved in two ways: on the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs. Namely, the tracking of objects of interest allows inhibition areas to be generated, i.e., areas where no new traffic signs are expected to appear due to the existence of a traffic sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.

  17. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance, achieved at low computational cost and with no additional side information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  18. A multiresolution method for climate system modeling: application of spherical centroidal Voronoi tessellations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringler, Todd; Ju, Lili; Gunzburger, Max

    2008-11-14

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear, shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
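    SCVTs are typically generated with a density-weighted Lloyd iteration; the planar, Monte Carlo stand-in below sketches that idea on the unit square and is not the authors' spherical implementation (the density function and parameters are placeholders).

      import numpy as np
      from scipy.spatial import cKDTree

      def lloyd_cvt(points, density, n_iters=30, n_samples=100_000, rng=0):
          """Density-weighted Lloyd iteration on the unit square: move each generator
          to the density-weighted centroid of its Voronoi region, estimated by
          Monte Carlo sampling of the domain."""
          rng = np.random.default_rng(rng)
          gen = points.copy()
          for _ in range(n_iters):
              samples = rng.random((n_samples, 2))
              w = density(samples)                      # user-defined density function
              owner = cKDTree(gen).query(samples)[1]    # nearest generator = Voronoi cell
              for k in range(len(gen)):
                  sel = owner == k
                  if sel.any():
                      gen[k] = np.average(samples[sel], axis=0, weights=w[sel])
          return gen

      # toy usage: refine resolution toward the left edge of the domain
      gen0 = np.random.default_rng(1).random((64, 2))
      gen = lloyd_cvt(gen0, density=lambda p: 1.0 + 9.0 * (1.0 - p[:, 0]))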

  19. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  20. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view (FOV) of the system. As a result, these systems provide a wide FOV while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed in such a way that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on the different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from the captured images of a slanted-edge target test. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.

  1. Hair analyses: worthless for vitamins, limited for minerals.

    PubMed

    Hambidge, K M

    1982-11-01

    Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.

  2. Crack Identification in CFRP Laminated Beams Using Multi-Resolution Modal Teager–Kaiser Energy under Noisy Environments

    PubMed Central

    Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław

    2017-01-01

    Carbon fiber reinforced polymer laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in carbon fiber reinforced polymer laminated beam components is of considerable significance for ensuring the integrity and safety of the whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, which is the Teager–Kaiser energy of a mode shape represented in multi-resolution, for identifying cracks in carbon fiber reinforced polymer laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions; and its applicability is validated by diagnosing cracks in a carbon fiber reinforced polymer laminated beam, whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. This proposed method holds promise for developing crack identification systems for carbon fiber reinforced polymer laminates. PMID:28773016
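    The Teager–Kaiser energy operator itself is standard, Psi[x](n) = x(n)^2 − x(n−1)·x(n+1); the sketch below applies it to progressively smoothed (wavelet-approximated) versions of a mode shape, with the wavelet family and level count being assumptions rather than the paper's settings.

      import numpy as np
      import pywt

      def teager_kaiser(x):
          """Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      def multiresolution_modal_tke(mode_shape, wavelet="sym4", max_level=4):
          """TKE of successively smoothed (lower-resolution) versions of a mode shape;
          coarse levels suppress noise, fine levels localize a crack-induced singularity."""
          out = []
          for level in range(1, max_level + 1):
              coeffs = pywt.wavedec(mode_shape, wavelet, level=level)
              zeros = [np.zeros_like(c) for c in coeffs[1:]]     # drop all detail bands
              smooth = pywt.waverec([coeffs[0]] + zeros, wavelet)
              out.append(teager_kaiser(smooth))
          return out

      # toy usage: first bending mode of a beam with a small local perturbation plus noise
      x = np.linspace(0, np.pi, 512)
      mode = np.sin(x) + 0.002 * (np.abs(x - 1.2) < 0.02) + 1e-4 * np.random.randn(x.size)
      profiles = multiresolution_modal_tke(mode)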

  3. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
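    The key idea, solving a coarse, sub-sampled factorization first and reusing it as the initial value at the next finer resolution, can be sketched generically; below, scikit-learn's NMF stands in for the paper's constrained MCR factorization, so the routine illustrates only the coarse-to-fine strategy, not the authors' soft/hard models.

      import numpy as np
      from sklearn.decomposition import NMF

      def multiresolution_factorization(D, k, levels=3):
          """Coarse-to-fine pure-component factorization: factorize spectra sub-sampled
          along the time axis, then reuse (upsampled) factors as the next initial guess."""
          C = H = None
          for lvl in reversed(range(levels + 1)):            # coarsest level first
              Dl = D[::2 ** lvl]
              if C is None:
                  nmf = NMF(n_components=k, init="nndsvd", max_iter=500)
                  C = nmf.fit_transform(Dl)
              else:
                  # upsample the previous concentration profiles to the finer time grid
                  t_prev = np.linspace(0, 1, C.shape[0])
                  t_new = np.linspace(0, 1, Dl.shape[0])
                  C0 = np.stack([np.interp(t_new, t_prev, C[:, j]) for j in range(k)], axis=1)
                  nmf = NMF(n_components=k, init="custom", max_iter=500)
                  C = nmf.fit_transform(Dl, W=np.clip(C0, 1e-9, None), H=H)
              H = nmf.components_
          return C, H

      # toy usage: two overlapping Gaussian "species" profiles times two random spectra
      t = np.linspace(0, 1, 1024)[:, None]
      spectra = np.random.rand(2, 200)
      conc = np.hstack([np.exp(-((t - 0.3) / 0.1) ** 2), np.exp(-((t - 0.6) / 0.15) ** 2)])
      D = conc @ spectra + 0.01 * np.random.rand(1024, 200)
      C, S = multiresolution_factorization(D, k=2)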

  4. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using limited computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  5. Time-frequency feature representation using multi-resolution texture analysis and acoustic activity detector for real-life speech emotion recognition.

    PubMed

    Wang, Kun-Ching

    2015-01-14

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the multi-resolution texture properties of the emotional speech spectrogram should be a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification for real-life emotional recognition in speech.

  6. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the multi-resolution texture properties of the emotional speech spectrogram should be a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification for real-life emotional recognition in speech. PMID:25594590

  7. A multiresolution prostate representation for automatic segmentation in magnetic resonance images.

    PubMed

    Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2017-04-01

    Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose onto the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed over magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability rates, (semi-)automatic organ delineation tools have emerged to cope with these challenges, reducing the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. The proposed method starts by selecting the set of most similar prostates with respect to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing the main relevant features of the organ boundary to be captured. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed based on morphological operations. The proposed approach was evaluated with respect to expert manual segmentation under a leave-one-out scheme using two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06, and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient point criterion and a global rule of the spatial configuration among these points to find the most similar prostates. This strategy suggests an easy adaptation in the clinical

  8. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2017-07-01

    Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.

  9. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  10. Multiresolution Local Binary Pattern texture analysis for false positive reduction in computerized detection of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Choi, Jae Young; Kim, Dae Hoe; Choi, Seon Hyeong; Ro, Yong Man

    2012-03-01

    We investigated the feasibility of using multiresolution Local Binary Pattern (LBP) texture analysis to reduce false-positive (FP) detections in a computerized mass detection framework. A novel approach for extracting LBP features is devised to differentiate masses and normal breast tissue on mammograms. In particular, to characterize the LBP texture patterns of the boundaries of masses, as well as to preserve the spatial structure pattern of the masses, two individual LBP texture patterns are extracted from the core region and the ribbon region of each ROI, respectively. These two texture patterns are combined to produce the so-called multiresolution LBP feature of a given ROI. The proposed LBP texture analysis of the information in the mass core region and its margin has clearly proven to be significant and is not sensitive to the precise location of the boundaries of masses. In this study, 89 mammograms were collected from the public MAIS database (DB). To perform a more realistic assessment of the FP reduction process, the LBP texture analysis was applied directly to a total of 1,693 regions of interest (ROIs) automatically segmented by a computer algorithm. A Support Vector Machine (SVM) was applied for the classification of mass ROIs versus ROIs containing normal tissue. Receiver Operating Characteristic (ROC) analysis was conducted to evaluate the classification accuracy and its improvement using multiresolution LBP features. With multiresolution LBP features, the classifier achieved an average area under the ROC curve, Az, of 0.956 during testing. In addition, the proposed LBP features outperform other state-of-the-art features designed for false positive reduction.
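    The single-scale, 8-neighbour LBP code on which such features are built can be sketched as follows; the core/ribbon split and the multiresolution combination described above are not reproduced here, and the bin count is a conventional choice rather than the paper's.

      import numpy as np

      def lbp_8(image):
          """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours at the
          centre value and read them as an 8-bit code per pixel."""
          img = image.astype(float)
          c = img[1:-1, 1:-1]
          shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros_like(c, dtype=int)
          for bit, (dr, dc) in enumerate(shifts):
              neigh = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
              code |= (neigh >= c).astype(int) << bit
          return code

      def lbp_histogram(roi):
          """Normalized 256-bin LBP histogram used as a texture feature vector."""
          h, _ = np.histogram(lbp_8(roi), bins=256, range=(0, 256))
          return h / max(h.sum(), 1)

      # toy usage on a random "ROI"
      feat = lbp_histogram(np.random.rand(64, 64))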

  11. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    PubMed Central

    Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund

    2012-01-01

    Background The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel intensity based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or, a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdf's via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the
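    The pixel-intensity Shannon entropy used as the richness proxy can be estimated directly from an intensity histogram; a minimal sketch (the bin count is an assumption, and the wavelet/KL turnover measure is not reproduced):

      import numpy as np

      def intensity_entropy(image, bins=256):
          """Shannon entropy (bits) of the pixel-intensity distribution, used here
          as a proxy for species richness."""
          hist, _ = np.histogram(image, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      # toy usage: a homogeneous patch vs. a heterogeneous patch
      flat = np.full((128, 128), 0.5)
      mixed = np.random.rand(128, 128)
      print(intensity_entropy(flat), intensity_entropy(mixed))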

  12. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original reading and displaying data format.

  13. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    NASA Astrophysics Data System (ADS)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. A climatic dataset from NCEP is used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to the corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using a k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining the Discrete Wavelet Transform (DWT) and a Second Order Volterra (SoV) model is used to model the representative PCs and obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better than the traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables, while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.

  14. SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.

    PubMed

    Li, Dengwang; Yin, Yong

    2012-06-01

    In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on an improved multi-resolution demons strategy. 4DCT images of lung cancer patients are collected from a General Electric Discovery ST CT scanner at our cancer hospital. All of the images are sorted into groups and reconstructed according to their phases, and each respiratory cycle is divided into 10 phases with a time interval of 10%. Firstly, in our improved demons algorithm we use gradients of both the reference and floating images as deformation forces and also redistribute the forces according to the proportion of the two forces. Furthermore, we introduce an intermediate variable into the cost function to decrease the noise in the registration process. At the same time, a Gaussian multi-resolution strategy and the BFGS method for optimization are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase-images: we compared the difference between the floating and reference images before and after registration, where two landmarks were identified by an experienced clinician. We registered the 10 phase-images of 4D-CT from a lung cancer patient at our cancer hospital, chose the images at exhalation as the reference images, and registered all other images to the reference images. This method has good accuracy, demonstrated by a higher similarity measure for registration of 4D-CT, and it can register a large deformation precisely. Finally, we obtain the tumor target from the deformation fields using the proposed method, which is more accurate than the internal margin (IM) expanded from the Gross Tumor Volume (GTV). Furthermore, we achieve tumor and normal tissue tracking and dose accumulation using the 4DCT data. An efficient deformable registration algorithm was proposed by using a multi-resolution demons algorithm for 4DCT. © 2012 American Association of Physicists in Medicine.

  15. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  16. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, which are well suited to handle the problem of a limited number of views but are computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a considerable reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same accuracy.

  17. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    PubMed

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain non-invasively an overall documentation of a crime scene, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are particularly interesting are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of a scene. At this stage, a simplified approach considering the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some of the limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Probabilistic multi-resolution human classification

    NASA Astrophysics Data System (ADS)

    Tu, Jun; Ran, H.

    2006-02-01

    Recently there has been some interest in using infrared cameras for human detection because of the sharply decreasing prices of infrared cameras. The training data used in our work for developing the probabilistic template consists of images known to contain humans in different poses and orientations but having the same height. Multiresolution templates, based on contours and edges, are constructed. This is done so that the model does not learn the intensity variations among the background pixels and intensity variations among the foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it belonging to a pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian for the whole blob in a Bayesian, probabilistic manner. The videos had considerable variation in the scenes, sizes of people, amount of occlusion, and clutter in the backgrounds. Preliminary experiments show the robustness of the approach.

  19. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    DTIC Science & Technology

    2012-10-24

    representative pdf’s via the Kullback-Leibler divergence (KL). Species turnover, or β diversity, is estimated using both this KL divergence and the ... multiresolution analysis provides a means for estimating divergence between two textures, specifically the Kullback-Leibler divergence between the pair of ... and open challenges. Ecological Informatics 5: 318–329. 19. Ludovisi A, Taticchi M (2006) Investigating beta diversity by Kullback-Leibler information

  20. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  1. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
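
    The abstract does not give the quality function, so the sketch below uses a simple placeholder (internal versus boundary edges) to show the general shape of greedy local expansion from a seed vertex; it is not the authors' algorithm, and networkx is assumed available.

      import networkx as nx

      def local_quality(G, community):
          # Ratio of internal edges to edges crossing the community boundary
          # (a simple placeholder local quality function).
          internal = G.subgraph(community).number_of_edges()
          boundary = sum(1 for u in community for v in G.neighbors(u) if v not in community)
          return internal / (internal + boundary + 1e-12)

      def expand_from_seed(G, seed, max_size=50):
          community = {seed}
          while len(community) < max_size:
              frontier = {v for u in community for v in G.neighbors(u)} - community
              if not frontier:
                  break
              best = max(frontier, key=lambda v: local_quality(G, community | {v}))
              if local_quality(G, community | {best}) <= local_quality(G, community):
                  break                      # no neighbour improves the quality: stop
              community.add(best)
          return community

      # Example on a small built-in graph:
      G = nx.karate_club_graph()
      print(expand_from_seed(G, seed=0))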

  2. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    DTIC Science & Technology

    2002-01-01

    their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for ... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous ... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested

  3. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, using a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.

  4. Adaptive Diffeomorphic Multiresolution Demons and Their Application to Same Modality Medical Image Registration with Large Deformation

    PubMed Central

    Wang, Chang; Ren, Qiongqiong; Qin, Xin

    2018-01-01

    Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. This method was tested on synthetic images and same-modality medical images. Large deformation was simulated by rotational distortion and extrusion transform, medical image registration with large deformation was performed, quantitative analyses were conducted using the registration evaluation indexes, and the influence of different driving forces and parameters on the registration result was analyzed. The registration results of same modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and mean square error was the lowest. Medical image registration with large deformation could be performed successfully; evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same modality medical images with large deformation.

  5. Adaptive Diffeomorphic Multiresolution Demons and Their Application to Same Modality Medical Image Registration with Large Deformation.

    PubMed

    Wang, Chang; Ren, Qiongqiong; Qin, Xin; Yu, Yi

    2018-01-01

    Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. This method was tested on synthetic images and same-modality medical images. Large deformation was simulated by rotational distortion and extrusion transform, medical image registration with large deformation was performed, quantitative analyses were conducted using the registration evaluation indexes, and the influence of different driving forces and parameters on the registration result was analyzed. The registration results of same modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and mean square error was the lowest. Medical image registration with large deformation could be performed successfully; evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same modality medical images with large deformation.

  6. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling

  7. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2007-12-01

    We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channel where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  8. Multiresolution forecasting for futures trading using wavelet decompositions.

    PubMed

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
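
    A simplified sketch of the overall strategy under stated assumptions: the series is split into additive smooth-and-difference components (a stand-in for the autocorrelation shell representation), one MLP per scale is trained on lagged inputs with shorter windows at finer scales, and the one-step forecasts are recombined by summation. Window lengths, network size and the decomposition itself are illustrative choices, not the authors' configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def additive_decompose(x, n_scales=3):
          # Smooth with growing moving averages; details are differences of
          # successive smooths, so the components sum exactly back to x.
          components, current = [], x.astype(float)
          for j in range(n_scales):
              w = 2 ** (j + 1) + 1
              smooth = np.convolve(current, np.ones(w) / w, mode="same")
              components.append(current - smooth)   # detail at scale j
              current = smooth
          components.append(current)                # final coarse trend
          return components

      def lagged(series, window):
          X = np.array([series[i - window:i] for i in range(window, len(series))])
          return X, series[window:]

      def forecast_next(x, windows=(4, 8, 16, 32)):
          comps = additive_decompose(x, n_scales=len(windows) - 1)
          total = 0.0
          for comp, w in zip(comps, windows):       # short windows at fine scales
              X, y = lagged(comp, w)
              model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
              total += model.predict(comp[-w:].reshape(1, -1))[0]
          return total                              # recombined one-step forecast

      # Illustrative usage on a synthetic series:
      x = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
      print("one-step forecast:", forecast_next(x))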

  9. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  10. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  11. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique that allows reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements. It is usually formulated as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias; for this reason the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space. This can affect CIT by limiting the accuracy in resolving structures and the processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (e.g. Tikhonov regularization), and a standard approach is implemented here using spherical harmonics as a reference against which to compare the new method. A new approach is proposed for CIT that aims to permit sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result with an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization in estimating the coefficients. This is particularly true for an uneven observation geometry and especially for multi-resolution CIT.
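
    A toy one-dimensional analogue of the ℓ1-versus-ℓ2 comparison, with an illustrative averaging operator standing in for slant TEC integrals and a Haar synthesis matrix as the wavelet dictionary; the sizes, regularization weights and noise level are arbitrary assumptions, not the paper's setup.

      import numpy as np
      import pywt
      from sklearn.linear_model import Lasso, Ridge

      n = 64                                   # field size (power of two for Haar)
      rng = np.random.default_rng(0)

      # Synthesis matrix W: column j is the field generated by unit coefficient j.
      template = pywt.wavedec(np.zeros(n), "haar", level=4)
      flat, slices = pywt.coeffs_to_array(template)
      W = np.column_stack([
          pywt.waverec(pywt.array_to_coeffs(e, slices, output_format="wavedec"), "haar")
          for e in np.eye(flat.size)
      ])

      # Sparse "true" coefficients and an incomplete, averaging observation operator
      # (a stand-in for slant integrals through the ionosphere).
      c_true = np.zeros(flat.size)
      c_true[rng.choice(flat.size, 5, replace=False)] = rng.normal(3, 1, 5)
      x_true = W @ c_true
      A = rng.random((20, n))
      A /= A.sum(axis=1, keepdims=True)
      y = A @ x_true + 0.01 * rng.normal(size=20)

      l1 = Lasso(alpha=1e-3, max_iter=50000).fit(A @ W, y)     # sparse solution
      l2 = Ridge(alpha=1e-3).fit(A @ W, y)                     # smooth solution
      print("l1 coefficient error:", np.linalg.norm(l1.coef_ - c_true))
      print("l2 coefficient error:", np.linalg.norm(l2.coef_ - c_true))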

  12. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.

  13. Multiresolution texture analysis applied to road surface inspection

    NASA Astrophysics Data System (ADS)

    Paquis, Stephane; Legeay, Vincent; Konik, Hubert; Charrier, Jean

    1999-03-01

    Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach for achieving an automatic vision system for road surface classification. Road surfaces are composed of aggregates, which have a particular grain size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are generated. We present here a tool using a pyramidal process, with the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute another statistical parameter but to include the usual criteria in our method. In fact, the road surface classification uses a multiresolution co-occurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray level value of its directly linked children pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
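
    The two ingredients named above can be sketched directly in numpy: a pyramid in which each father pixel takes the minimum grey level of its 2x2 children, and the diagonal of a horizontal grey-level co-occurrence matrix computed at each pyramid level. The quantisation depth and pixel offset are illustrative choices.

      import numpy as np

      def min_pyramid(img, levels=3):
          # Father pixel = minimum grey level of its 2x2 children.
          pyramid = [img]
          for _ in range(levels):
              h, w = pyramid[-1].shape
              blocks = pyramid[-1][: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
              pyramid.append(blocks.min(axis=(1, 3)))
          return pyramid

      def cooccurrence_diagonal(img, n_levels=32):
          # Horizontal co-occurrence matrix (offset (0, 1)); keep only its diagonal,
          # i.e. how often neighbouring pixels share the same quantised grey level.
          q = (img.astype(float) / img.max() * (n_levels - 1)).astype(int)
          C = np.zeros((n_levels, n_levels))
          np.add.at(C, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
          return np.diag(C / C.sum())

      img = (np.random.rand(128, 128) * 255).astype(np.uint8)
      for level, im in enumerate(min_pyramid(img)):
          print(level, cooccurrence_diagonal(im)[:5])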

  14. Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Abdessetar, M.; Zhong, Y.

    2017-09-01

    Building change detection has the ability to quantify the temporal effect, on an urban area, for urban evolution studies or damage assessment in disaster cases. In this context, change analysis might involve the use of the available satellite images at different resolutions for quick responses. In this paper, to avoid the image-resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel sizes when identifying new and demolished buildings in an urban area using geometric properties of the objects of interest. After rectifying the desired multi-date and multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, new and demolished buildings are identified from the obtained distances that are greater than the RMS value (no match at the same location).
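
    Centroid-coincident matching of this kind reduces to a nearest-centroid test against the registration RMS threshold; a minimal sketch with made-up centroids is given below, where unmatched T0 centroids indicate demolished buildings and unmatched T1 centroids indicate new ones. The coordinates and threshold are purely illustrative.

      import numpy as np
      from scipy.spatial.distance import cdist

      def unmatched(centroids_a, centroids_b, threshold):
          # Indices in `centroids_a` with no centroid in `centroids_b`
          # closer than `threshold` (e.g. the registration RMS).
          d = cdist(centroids_a, centroids_b)
          return np.where(d.min(axis=1) > threshold)[0]

      t0 = np.array([[10.0, 12.0], [40.0, 8.0], [75.0, 60.0]])   # building centroids at T0
      t1 = np.array([[10.5, 11.8], [75.2, 59.9], [90.0, 20.0]])  # building centroids at T1
      rms = 2.0
      print("demolished (in T0, absent in T1):", unmatched(t0, t1, rms))
      print("new (in T1, absent in T0):", unmatched(t1, t0, rms))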

  15. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  16. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.

  17. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where

  18. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  19. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

    We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  20. A multiresolution halftoning algorithm for progressive display

    NASA Astrophysics Data System (ADS)

    Mukherjee, Mithun; Sharma, Gaurav

    2005-01-01

    We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
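
    For context, the sketch below shows a standard single-resolution Floyd-Steinberg error diffusion, i.e. the basic operation that the paper's multiresolution, in-place frame-buffer scheme modifies; it is not the proposed algorithm itself, and the input image is a synthetic ramp.

      import numpy as np

      def floyd_steinberg(img):
          # img: greyscale in [0, 1]; returns a binary halftone of the same size.
          out = img.astype(float).copy()
          h, w = out.shape
          for y in range(h):
              for x in range(w):
                  old = out[y, x]
                  new = 1.0 if old >= 0.5 else 0.0
                  out[y, x] = new
                  err = old - new
                  if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
                  if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
                  if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
                  if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
          return out.astype(np.uint8)

      halftone = floyd_steinberg(np.linspace(0, 1, 64 * 64).reshape(64, 64))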

  1. The Multi-Resolution Land Characteristics (MRLC) Consortium - 20 Years of Development and Integration of U.S. National Land Cover Data

    EPA Science Inventory

    The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...

  2. Applying multi-resolution numerical methods to geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, David Rhodri

    structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle. The simulations presented are therefore at superior vigor to the highest attained, to date, in 3-D spherical geometry, achieving Rayleigh numbers of order 10⁹. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher

  3. Wavelet bases on the L-shaped domain

    NASA Astrophysics Data System (ADS)

    Jouini, Abdellatif; Lemarié-Rieusset, Pierre Gilles

    2013-07-01

    We present in this paper two elementary constructions of multiresolution analyses on the L-shaped domain D. In the first one, we shall describe a direct method to define an orthonormal multiresolution analysis. In the second one, we use the decomposition method for constructing a biorthogonal multiresolution analysis. These analyses are adapted for the study of the Sobolev spaces H^s(D) (s ∈ ℕ).

  4. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.

    2017-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both

  5. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of the consistency, this framework allows us to explore a large dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as the 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  6. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
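
    A hedged one-dimensional illustration of the idea: a design matrix is built from Gaussian basis functions at several resolutions, the weights are estimated by ordinary least squares (the linear parameter-estimation step noted above), and basis functions with small weights are pruned to obtain a compact model. The basis type, resolutions and pruning threshold are assumptions for the example, not the report's formulation.

      import numpy as np

      def multires_design(x, resolutions=(4, 8, 16), width_scale=1.5):
          # One block of Gaussian bumps per resolution; finer resolutions add detail.
          columns = []
          for r in resolutions:
              centers = np.linspace(x.min(), x.max(), r)
              width = width_scale * (centers[1] - centers[0])
              columns += [np.exp(-((x - c) / width) ** 2) for c in centers]
          return np.column_stack(columns)

      rng = np.random.default_rng(1)
      x = np.linspace(0, 1, 200)
      y = np.sin(2 * np.pi * x) + 0.3 * np.sin(12 * np.pi * x) + 0.05 * rng.normal(size=x.size)

      Phi = multires_design(x)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear parameter estimation
      keep = np.abs(w) > 0.05                          # prune insignificant basis functions
      w_pruned, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
      print(f"kept {keep.sum()} of {w.size} basis functions")
      print("pruned-model residual:", np.linalg.norm(Phi[:, keep] @ w_pruned - y))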

  7. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  8. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  9. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it

    2014-10-06

    This study presents some results of a comprehensive numerical analysis on three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is largely lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for the considerable reduction of the seismic vulnerability of such kind of historical structures.

  10. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  11. The multi-resolution land characteristics (MRLC) consortium–20 years of development and integration of USA national land cover data

    Treesearch

    James Wickham; Collin Homer; James Vogelmann; Alexa McKerrow; Rick Mueler; Nate Herold; John Coulston

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the...

  12. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime

    2017-04-01

    Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variabilities in hydrological variations and investigating their determinism is an important issue in a context of climate change, as these variabilities can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale-dependent linkages between large-scale climatic variations and hydrological responses at the regional or local scale. Here we investigate such links by applying a coupled wavelet multiresolution/statistical downscaling approach to precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (within a so-called multiresolution ESD model) using SLP as a large-scale predictor greatly improved simulation of low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model in explaining variability at all time-scales. A sensitivity analysis of the model to the choice of the scale and wavelet function used was also conducted; it appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, suggesting possibly different determinisms. More particularly, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic

  13. a Web-Based Interactive Tool for Multi-Resolution 3d Models of a Maya Archaeological Site

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can support interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship between architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD), and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  14. On analysis of electroencephalogram by multiresolution-based energetic approach

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in the case of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has gained a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at better understanding the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
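
    As a rough illustration of the energy-per-scale bookkeeping this record describes, the sketch below tracks how the relative energy in each wavelet sub-band evolves across sliding windows. It assumes PyWavelets, an arbitrary 'db4' wavelet, a 5-level decomposition, and a placeholder EEG channel; the actual wavelet, depth and windowing used in the paper are not reproduced here.

```python
"""Sketch of per-scale energy redistribution in a single EEG channel."""
import numpy as np
import pywt

def relative_band_energies(eeg, wavelet="db4", level=5):
    # Multilevel DWT: coeffs = [cA5, cD5, cD4, cD3, cD2, cD1]
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()      # fraction of signal energy per scale

# Track how energy redistributes across scales in consecutive windows,
# e.g. before vs. during a suspected seizure segment.
fs = 256                                   # assumed sampling rate (Hz)
eeg = np.random.randn(60 * fs)             # placeholder signal
window = 10 * fs
profiles = [relative_band_energies(eeg[i:i + window])
            for i in range(0, len(eeg) - window, window)]
print(np.round(profiles, 3))
```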

  15. Multiresolution multiscale active mask segmentation of fluorescence microscope images

    NASA Astrophysics Data System (ADS)

    Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2009-08-01

    We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.

  16. The multi-resolution capability of Tchebichef moments and its applications to the analysis of fluorescence excitation-emission spectra

    NASA Astrophysics Data System (ADS)

    Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin

    2018-01-01

    Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine, and data set 2 included vitamin B1, vitamin B2 and vitamin B6) for the first time. By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. It is owing to this multi-resolution capability that the overlapping problem was solved and the information of chemicals and scattering was separated. The obtained results demonstrated that the Tchebichef moment method is very effective, providing a promising tool for the analysis of EEM spectra. It is expected that the applications of the Tchebichef moment method could be developed and extended to complex systems such as biological fluids, food, environment and others to deal with practical problems (overlapped peaks, unknown interferences, baseline drifts, and so on) with other spectra.

  17. 4D-CT Lung registration using anatomy-based multi-level multi-resolution optical flow analysis and thin-plate splines.

    PubMed

    Min, Yugang; Neylon, John; Shah, Amish; Meeks, Sanford; Lee, Percy; Kupelian, Patrick; Santhanam, Anand P

    2014-09-01

    The accuracy of 4D-CT registration is limited by inconsistent Hounsfield unit (HU) values in the 4D-CT data from one respiratory phase to another and by lower image contrast for lung substructures. This paper presents an optical flow and thin-plate spline (TPS)-based 4D-CT registration method to account for these limitations. The use of unified HU values on multiple anatomy levels (e.g., the lung contour, blood vessels, and parenchyma) accounts for registration errors caused by inconsistent landmark HU values. While 3D multi-resolution optical flow analysis registers each anatomical level, TPS is employed to propagate the results from one anatomical level to another, ultimately leading to the 4D-CT registration. The 4D-CT registration was validated using target registration error (TRE), inverse consistency error (ICE) metrics, and a statistical image comparison using a Gamma criterion of 1% intensity difference within a 2 mm³ window. Validation results showed that the proposed method was able to register CT lung datasets with TRE and ICE values <3 mm. In addition, the average number of voxels that failed the Gamma criterion was <3%, which supports the clinical applicability of the proposed registration mechanism. The proposed 4D-CT registration computes the volumetric lung deformations within clinically viable accuracy.

  18. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
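
    A minimal sketch of the kind of statistic this record describes, assuming PyWavelets, a Haar wavelet, a placeholder R-R series, and that the "width" at each scale is the standard deviation of the detail coefficients; the discriminating window reported in the paper corresponds to scales of 16-32 heartbeats.

```python
"""Wavelet-coefficient widths of an R-R interval series, per scale."""
import numpy as np
import pywt

def wavelet_widths(rr, wavelet="haar", max_level=7):
    level = min(max_level, pywt.dwt_max_level(len(rr), pywt.Wavelet(wavelet).dec_len))
    coeffs = pywt.wavedec(rr, wavelet, level=level)
    # coeffs[1:] are detail coefficients from coarsest to finest scale;
    # the scale in heartbeats for coeffs[k] (k >= 1) is 2**(level - k + 1).
    return {2 ** (level - k + 1): np.std(coeffs[k])
            for k in range(1, len(coeffs))}

rr = np.random.normal(0.8, 0.05, size=4096)   # placeholder R-R series (s)
for scale, width in sorted(wavelet_widths(rr).items()):
    print(f"scale {scale:4d} beats : width {width:.4f}")
```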

  19. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. 
A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales.
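
    For orientation only, the sketch below shows the large-scale half of such a search: binning arrival directions onto a HEALPix map and measuring its angular power spectrum. It assumes the healpy library, placeholder isotropic directions, and omits the exposure correction and needlet analysis that the real study relies on.

```python
"""Toy angular power spectrum of binned arrival directions (healpy assumed)."""
import numpy as np
import healpy as hp

nside = 32
npix = hp.nside2npix(nside)

rng = np.random.default_rng(4)
lon = rng.uniform(0.0, 360.0, 20000)                 # placeholder directions (deg)
lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 20000)))

pix = hp.ang2pix(nside, lon, lat, lonlat=True)
counts = np.bincount(pix, minlength=npix).astype(float)
rel_intensity = counts / counts.mean() - 1.0          # fluctuations about isotropy

cl = hp.anafast(rel_intensity, lmax=20)               # angular power spectrum C_l
print("dipole power C_1 =", cl[1])
```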

  20. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    PubMed

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions using four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may contain supplementary information. In comparison with results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
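
    As a rough sketch of the fusion-by-feature-concatenation idea described above: DWT features computed per channel, concatenated across channels, then fed to an SVM. The array shapes, wavelet, level, feature set (energy and standard deviation per sub-band) and the placeholder data are illustrative assumptions, not the paper's exact pipeline.

```python
"""DWT feature extraction per channel + SVM classification (sketch)."""
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def trial_features(trial, wavelet="db4", level=4):
    feats = []
    for channel in trial:                       # each channel treated independently
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for c in coeffs:                        # energy + spread per sub-band
            feats.extend([np.sum(c ** 2), np.std(c)])
    return np.array(feats)

# Placeholder data standing in for DEAP-style trials: (n_trials, n_channels, n_samples).
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 1024))
labels = rng.integers(0, 4, size=40)

X = np.array([trial_features(t) for t in trials])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, labels, cv=4).mean())
```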

  1. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. The speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited in force computation. Finally, an adaptive MMM analysis of a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  2. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining a better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with much higher levels of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
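
    A minimal sketch of the pyramid plumbing only: build a Gaussian pyramid, denoise each level, and recombine through a Laplacian-style reconstruction. A plain per-level Gaussian smoother stands in for the GND-PCA step of the paper, downsampling by a factor of 2 uses scipy.ndimage.zoom, and the level count and sigmas are arbitrary.

```python
"""Gaussian/Laplacian pyramid denoising skeleton (stand-in denoiser)."""
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, levels=3, sigma=1.0):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(ndimage.zoom(ndimage.gaussian_filter(pyr[-1], sigma), 0.5))
    return pyr

def denoise_level(level_img):
    # Stand-in for GND-PCA denoising of the multiscale stack at this level.
    return ndimage.gaussian_filter(level_img, 1.0)

def pyramid_denoise(img, levels=3):
    pyr = [denoise_level(p) for p in gaussian_pyramid(img, levels)]
    # Laplacian-style recombination: start coarse, upsample, add back detail.
    out = pyr[-1]
    for finer in reversed(pyr[:-1]):
        up = ndimage.zoom(out, np.array(finer.shape) / np.array(out.shape))
        detail = finer - ndimage.gaussian_filter(finer, 1.0)
        out = up + detail
    return out

speckled = np.random.rand(128, 128) * np.abs(np.random.randn(128, 128))
print(pyramid_denoise(speckled).shape)      # (128, 128)
```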

  3. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR points based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no new ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
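
    The sketch below illustrates the residual-threshold loop in the description above, not the authors' implementation: the seed ground set is the locally lowest point per coarse cell, scipy's legacy Rbf interpolator with a thin-plate kernel stands in for the TPS surface, and the three-level cell/threshold schedule is made up for illustration.

```python
"""Iterative TPS-like ground filtering with a residual threshold (sketch)."""
import numpy as np
from scipy.interpolate import Rbf

def lowest_per_cell(pts, cell):
    # One seed point per grid cell: the lowest return in that cell.
    keys = np.floor(pts[:, :2] / cell).astype(int)
    seeds = {}
    for p, k in zip(pts, map(tuple, keys)):
        if k not in seeds or p[2] < seeds[k][2]:
            seeds[k] = p
    return np.array(list(seeds.values()))

def mhc_filter(pts, levels=((8.0, 2.0), (4.0, 1.0), (2.0, 0.5))):
    ground = lowest_per_cell(pts, levels[0][0])           # initial seed surface
    for cell, threshold in levels:                        # coarse -> fine levels
        while True:
            nodes = lowest_per_cell(ground, cell)         # surface nodes at this resolution
            tps = Rbf(nodes[:, 0], nodes[:, 1], nodes[:, 2],
                      function="thin_plate", smooth=0.1)
            residual = pts[:, 2] - tps(pts[:, 0], pts[:, 1])
            new_ground = pts[np.abs(residual) <= threshold]
            if len(new_ground) <= len(ground):            # no new ground points added
                break
            ground = new_ground
    return ground

pts = np.column_stack([np.random.rand(300) * 50, np.random.rand(300) * 50,
                       np.random.rand(300) * 3])
print(len(mhc_filter(pts)), "of", len(pts), "points classified as ground")
```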

  4. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  5. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Andringa, S.

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales.

  6. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Aglietta, M.

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales.

  7. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this relation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady-state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic
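
    The record above (truncated) mentions estimating modal values and inter-quartile ranges of slope as a function of resolution. A minimal sketch of that resolution sweep follows, assuming a toy 1 m DTM, block resampling with scipy.ndimage.zoom, and histogram-peak "modal" values; the flow-accumulation area needed for the full topoindex is omitted.

```python
"""Modal slope and IQR of slope as a function of DTM resolution (sketch)."""
import numpy as np
from scipy import ndimage

def slope_degrees(dem, cell_size):
    gy, gx = np.gradient(dem.astype(float), cell_size)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

dem = ndimage.gaussian_filter(np.random.rand(512, 512), 8) * 200   # toy terrain (m)

for factor in (1, 2, 4, 8, 16):
    coarse = ndimage.zoom(dem, 1.0 / factor, order=1) if factor > 1 else dem
    s = slope_degrees(coarse, cell_size=factor).ravel()
    hist, edges = np.histogram(s, bins=50)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    q1, q3 = np.percentile(s, [25, 75])
    print(f"cell {factor:2d} m: modal slope {mode:5.2f} deg, IQR {q3 - q1:5.2f} deg")
```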

  8. Multi-Resolution Analysis of LiDAR data for Characterizing a Stabilized Aeolian Landscape in South Texas

    NASA Astrophysics Data System (ADS)

    Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.

    2014-12-01

    Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (i.e. 500 m, 600 m, 700 m from the pixel). The calculated average elevation values, plotted as curves against distance from the pixel of interest, are used to differentiate multi-scalar variations in elevation across the landscape. It is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to classify paved surfaces versus natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities between the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only works to mitigate bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
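
    A minimal sketch of the ring-averaging and PCA step described above, assuming a toy elevation grid with 1 m cells, annuli of half-width 50 cells around the radii mentioned in the abstract, a small grid of sample pixels, and scikit-learn PCA to summarize the per-pixel curves; none of these choices are taken from the paper.

```python
"""Concentric-ring elevation curves per pixel, summarized with PCA (sketch)."""
import numpy as np
from sklearn.decomposition import PCA

def ring_curve(dem, rows, cols, row, col, radii, width=50):
    dist = np.hypot(rows - row, cols - col)
    # Mean elevation within a thin annulus around each radius.
    return np.array([dem[(dist >= r - width) & (dist < r + width)].mean()
                     for r in radii])

dem = np.cumsum(np.random.randn(2000, 2000), axis=0) * 0.01     # toy surface
rows, cols = np.indices(dem.shape)
radii = np.arange(500, 800, 100)                                 # 500, 600, 700 cells

samples = [(r, c) for r in range(800, 1200, 100) for c in range(800, 1200, 100)]
curves = np.array([ring_curve(dem, rows, cols, r, c, radii) for r, c in samples])

pca = PCA(n_components=2)
scores = pca.fit_transform(curves)            # per-pixel component scores
print(pca.explained_variance_ratio_, scores.shape)
```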

  9. Cointegration and Nonstationarity in the Context of Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Worden, K.; Cross, E. J.; Kyprianou, A.

    2011-07-01

    Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
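
    The two diagnostics discussed above can be combined in a few lines: an augmented Dickey-Fuller test for nonstationarity and a discrete-wavelet multiresolution decomposition to see at which time-scales the nonstationarity lives. The sketch assumes statsmodels and PyWavelets, a placeholder drifting signal in place of SHM data, and per-scale components obtained by reconstructing with all other coefficients zeroed.

```python
"""ADF test on a series and on its wavelet multiresolution components (sketch)."""
import numpy as np
import pywt
from statsmodels.tsa.stattools import adfuller

def scale_components(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = {}
    for k in range(len(coeffs)):
        keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        name = "approx" if k == 0 else f"detail_{level - k + 1}"
        comps[name] = pywt.waverec(keep, wavelet)[: len(x)]
    return comps

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(1024)) * 0.02 + rng.standard_normal(1024)

print("full series ADF p-value:", adfuller(x)[1])
for name, comp in scale_components(x).items():
    print(f"{name:10s} ADF p-value: {adfuller(comp)[1]:.3f}")
```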

  10. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is becoming constrained by hardware resource usage and the need to reduce overall computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  11. A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field

    NASA Astrophysics Data System (ADS)

    Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.

    2016-10-01

    We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra wide field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low resolution (4′) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.

  12. A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan

    2014-03-01

    Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.

  13. Adaptation of a multi-resolution adversarial model for asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Rosenberg, Brad; Gonsalves, Paul G.

    2006-05-01

    Recent military operations have demonstrated the use by adversaries of non-traditional or asymmetric military tactics to offset US military might. Rogue nations with links to trans-national terrorists have created a highly unpredictable and potentially dangerous environment for US military operations. These threats are characterized by extremist beliefs, a global and non-state-oriented nature, and a highly networked and adaptive structure, making these adversaries less vulnerable to conventional military approaches. Additionally, US forces must also contend with more traditional state-based threats that are further evolving their military fighting strategies and capabilities. What are needed are solutions to assist our forces in the prosecution of operations against these diverse threat types and their atypical strategies and tactics. To address this issue, we present a system that allows for the adaptation of a multi-resolution adversarial model. The developed model can then be used to support both training and simulation-based acquisition requirements to effectively respond to such an adversary. The described system produces a combined adversarial model by merging behavior modeling at the individual level with aspects at the group and organizational level via network analysis. Adaptation of this adversarial model is performed by means of an evolutionary algorithm to build a suitable model for the chosen adversary.

  14. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. The system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  15. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.

  16. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    PubMed

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized.

  17. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  18. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    In traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. We therefore propose a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform. Interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images undergo gray-level fusion via rules based on the physical characteristics of the sensors in the curvelet domain. The final fused image is obtained by mapping the extracted targets into the gray-level result with appropriate pseudo-colors. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  19. Adjusting Wavelet-based Multiresolution Analysis Boundary Conditions for Robust Long-term Streamflow Forecasting Model

    NASA Astrophysics Data System (ADS)

    Maslova, I.; Ticlavilca, A. M.; McKee, M.

    2012-12-01

    There has been an increased interest in wavelet-based streamflow forecasting models in recent years. Often overlooked in this approach are the circularity assumptions of the wavelet transform. We propose a novel technique for minimizing the wavelet decomposition boundary condition effect to produce long-term, up to 12 months ahead, forecasts of streamflow. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data. A hybrid wavelet-multivariate relevance vector machine model is developed for forecasting the streamflow in real time for the Yellowstone River, Uinta Basin, Utah, USA. The inputs of the model utilize only the past monthly streamflow records, which are decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. This long-term streamflow modeling and forecasting methodology would enable better decision-making and management of water availability risk.
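
    To illustrate why the boundary rule matters for forecasting, the sketch below extracts a low-frequency multiresolution component from a toy monthly series under several PyWavelets signal extension modes and prints its values near the right edge, where forecast inputs are taken. The seasonal placeholder series, the 'db4' wavelet, and the candidate modes are assumptions; the paper's own boundary rule is not reproduced here.

```python
"""Edge behavior of a wavelet MRA component under different boundary rules."""
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.arange(360)                                   # 30 years of monthly data
flow = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(360)

for mode in ["symmetric", "periodic", "zero", "smooth"]:
    coeffs = pywt.wavedec(flow, "db4", level=4, mode=mode)
    # Keep only the approximation to extract the low-frequency component.
    keep = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    smooth_comp = pywt.waverec(keep, "db4", mode=mode)[: len(flow)]
    print(f"{mode:10s} low-frequency component, last 3 months:",
          np.round(smooth_comp[-3:], 2))
```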

  20. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, achieving a high accuracy rate in the classification of emotional states without the need for other classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.

  1. A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.

  2. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes in local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.

  3. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    NASA Astrophysics Data System (ADS)

    Yang, Qingcheng; To, Albert C.

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects in nanosized structures by designing a surface summation rule SRS within the framework of MMM. Combined with the previously proposed bulk summation rule SRB, the MMM summation rule SRMMM is completed. SRS and SRB are consistently formed within SRMMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SRMMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SRS and SRB are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SRMMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SRMMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced by SRMMM, which is analogous to the numerical integration error of quadrature rules in FEM, is very small.

  4. Automatic detection of slight parameter changes associated to complex biomedical signals using multiresolution q-entropy.

    PubMed

    Torres, M E; Añino, M M; Schlotthauer, G

    2003-12-01

    It is well known that, from a dynamical point of view, sudden variations in the physiological parameters that govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool takes from the multiresolution entropies the ability to show these changes as statistical variations at each scale. These variations are captured in the corresponding principal component. By appropriately combining these techniques with a statistical change detector, a complexity change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
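
    A simplified reading of the multiresolution entropy idea, not the authors' algorithm: each sliding window of the signal is wavelet-decomposed and a Tsallis (q) entropy is computed per scale, giving one entropy trajectory per scale that can be fed to a change detector. The wavelet, q, and window length below are illustrative assumptions.

```python
# Sketch: per-scale q-entropy of a signal from a wavelet decomposition of
# sliding windows. Wavelet choice, q, and window length are assumptions.
import numpy as np
import pywt

def q_entropy(p, q=2.0):
    """Tsallis q-entropy of a discrete distribution p."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multires_q_entropy(signal, wavelet="db4", level=4, q=2.0):
    """Return one q-entropy value per decomposition scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    entropies = []
    for c in coeffs[1:]:                     # detail coefficients, coarse to fine
        energy = c ** 2
        total = energy.sum()
        p = energy / total if total > 0 else np.ones_like(energy) / energy.size
        entropies.append(q_entropy(p, q))
    return np.array(entropies)

# Toy signal whose dynamics change halfway through.
t = np.linspace(0, 10, 4096)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
x[2048:] += 0.3 * np.sin(2 * np.pi * 20 * t[2048:])

window = 512
profile = np.array([multires_q_entropy(x[i:i + window])
                    for i in range(0, x.size - window, window)])
print(profile.shape)  # (n_windows, n_scales): entropy trajectory per scale
```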

  5. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
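
    A rough sketch of the pipeline outlined above, under simplifying assumptions: per-level wavelet decomposition, plain (rather than "completed") LBP histograms of the coefficient images, and LDA for classification. The images, labels, and parameters are synthetic placeholders.

```python
# Sketch: wavelet decomposition + LBP histograms + LDA classification.
# Plain uniform LBP stands in for the completed LBP model; data are synthetic.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lbp_hist(band, P=8, R=1.0, bins=10):
    # Rescale the coefficient band to 8-bit before computing uniform LBP codes.
    lo, hi = band.min(), band.max()
    img8 = np.zeros_like(band, dtype=np.uint8) if hi == lo else \
        ((band - lo) / (hi - lo) * 255).astype(np.uint8)
    codes = local_binary_pattern(img8, P, R, method="uniform")
    h, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return h

def wavelet_lbp_features(gray, wavelet="haar", levels=3):
    feats = []
    coeffs = pywt.wavedec2(gray, wavelet, level=levels)
    for detail in coeffs[1:]:                 # (cH, cV, cD) bands per level
        for band in detail:
            feats.append(lbp_hist(band))
    return np.concatenate(feats)

# Toy data: random "images" standing in for fundus photographs.
rng = np.random.default_rng(0)
X = np.array([wavelet_lbp_features(rng.random((128, 128))) for _ in range(20)])
y = np.array([0] * 10 + [1] * 10)             # 0 = healthy, 1 = AMD (placeholder labels)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```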

  6. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Mather, Barry A

    A library of load variability classes is created to produce scalable synthetic data sets from historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of the historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
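
    A toy illustration of the decomposition-and-synthesis idea, under assumptions not taken from the report: a stationary wavelet transform splits a high-speed load record into a slow trend and fast variability components, and a synthetic trace is built by recombining the trend with block-shuffled variability. The signal, wavelet, and levels are placeholders.

```python
# Sketch: separate trend and variability with a stationary wavelet transform,
# then synthesize a new trace by reshuffling the variability. Data are synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 4096                                         # e.g. 1 s samples over ~68 minutes
t = np.arange(n)
load = 50 + 10 * np.sin(2 * np.pi * t / n) + 3 * rng.standard_normal(n)  # kW, synthetic

level = 4
coeffs = pywt.swt(load, "db4", level=level)      # [(cA_4, cD_4), ..., (cA_1, cD_1)]

# Keep the coarsest approximation as the trend; treat details as variability.
trend_only = [(coeffs[0][0], np.zeros(n))] + [(np.zeros(n), np.zeros(n))] * (level - 1)
trend = pywt.iswt(trend_only, "db4")

# Synthesize new variability by block-shuffling each detail band.
synth_coeffs = []
for cA, cD in coeffs:
    blocks = cD.reshape(-1, 256).copy()
    rng.shuffle(blocks)                          # shuffle 256-sample blocks in place
    synth_coeffs.append((np.zeros(n), blocks.ravel()))
synthetic = trend + pywt.iswt(synth_coeffs, "db4")
print(synthetic.shape, synthetic.mean())
```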

  7. Multiresolution analysis of characteristic length scales with high-resolution topographic data

    NASA Astrophysics Data System (ADS)

    Sangireddy, Harish; Stark, Colin P.; Passalacqua, Paola

    2017-07-01

    Characteristic length scales (CLS) define landscape structure and delimit geomorphic processes. Here we use multiresolution analysis (MRA) to estimate such scales from high-resolution topographic data. MRA employs progressive terrain defocusing, via convolution of the terrain data with Gaussian kernels of increasing standard deviation, and calculation at each smoothing resolution of (i) the probability distributions of curvature and topographic index (defined as the ratio of slope to area in log scale) and (ii) characteristic spatial patterns of divergent and convergent topography identified by analyzing the curvature of the terrain. The MRA is first explored using synthetic 1-D and 2-D signals whose CLS are known. It is then validated against a set of MARSSIM (a landscape evolution model) steady state landscapes whose CLS were tuned by varying hillslope diffusivity and simulated noise amplitude. The known CLS match the scales at which the distributions of topographic index and curvature show scaling breaks, indicating that the MRA can identify CLS in landscapes based on the scaling behavior of topographic attributes. Finally, the MRA is deployed to measure the CLS of five natural landscapes using meter resolution digital terrain model data. CLS are inferred from the scaling breaks of the topographic index and curvature distributions and equated with (i) small-scale roughness features and (ii) the hillslope length scale.
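
    A minimal sketch of the MRA idea above, assuming a synthetic DEM and an illustrative range of smoothing scales: the terrain is progressively defocused with Gaussian kernels of increasing standard deviation, and the spread of a curvature proxy is tracked with scale; a break in its scaling would indicate a characteristic length scale.

```python
# Sketch: progressive Gaussian smoothing of a DEM and per-scale curvature
# statistics. The DEM and sigma range are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

rng = np.random.default_rng(1)
dem = gaussian_filter(rng.random((512, 512)), 8) * 100.0   # synthetic terrain, meters
cell = 1.0                                                  # grid spacing, meters

sigmas = np.geomspace(1, 64, 13)                            # smoothing scales, in cells
iqr_by_scale = []
for s in sigmas:
    smoothed = gaussian_filter(dem, s)
    curvature = gaussian_laplace(smoothed, sigma=1.0) / cell**2   # Laplacian as curvature proxy
    q25, q75 = np.percentile(curvature, [25, 75])
    iqr_by_scale.append(q75 - q25)

# A break in the slope of IQR vs. scale (in log-log space) would mark a
# characteristic length scale in this simplified setting.
for s, v in zip(sigmas, iqr_by_scale):
    print(f"sigma={s:6.1f}  curvature IQR={v:.3e}")
```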

  8. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that it is difficult to represent small spatial details and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to extract the maximum information from all of the aforementioned measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulation. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles, strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.

  9. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USDA-ARS's Scientific Manuscript database

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay rather than the entire sample process. Our objective was to develop a method to determine the 95% LOD (lowest co...

  10. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game that can be controlled by means of imaginary limb movements. Some modifications to the existing MRA over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best-basis selection technique. In the online game experiment we obtained an average classification rate of 63.0% over three classes for fourteen naive subjects. The application of a best-basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
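
    A minimal sketch of the common spatial patterns (CSP) feature extraction mentioned above, for two classes of EEG trials; the graph lifting transform itself is not reproduced, and the trial data are random placeholders.

```python
# Sketch: CSP spatial filters via a generalized eigendecomposition, followed
# by log-variance features per trial. Data shapes and values are synthetic.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]      # most discriminative filters
    return vecs[:, keep].T

def csp_features(trial, W):
    Z = W @ trial
    var = Z.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 16, 256))   # class 1 trials (channels x samples)
B = rng.standard_normal((30, 16, 256))   # class 2 trials
W = csp_filters(A, B)
print(csp_features(A[0], W))             # 4 log-variance features per trial
```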

  11. Multi-resolution cell orientation congruence descriptors for epithelium segmentation in endometrial histology images.

    PubMed

    Li, Guannan; Raza, Shan E Ahmed; Rajpoot, Nasir M

    2017-04-01

    It has recently been shown that recurrent miscarriage can be caused by an abnormally high ratio of uterine natural killer (UNK) cells to stromal cells in the lining of the human uterus. Due to the high workload, the counting of UNK and stromal cells needs to be automated using computer algorithms. However, stromal cells are very similar in appearance to epithelial cells, which must be excluded from the counting process. To exclude the epithelial cells it is necessary to identify epithelial regions. There are two types of epithelial layer that can be encountered in the endometrium: luminal epithelium and glandular epithelium. To the best of our knowledge, there is no existing method that addresses the segmentation of both types of epithelium simultaneously in endometrial histology images. In this paper, we propose a multi-resolution Cell Orientation Congruence (COCo) descriptor which exploits the fact that neighbouring epithelial cells exhibit similarity in terms of their orientations. Our experimental results show that the proposed descriptors yield accurate results in simultaneously segmenting both luminal and glandular epithelium. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It
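
    A very simplified stagewise matching-pursuit sketch in the spirit of StOMP, not the authors' inversion code: at each stage, every dictionary column whose correlation with the residual exceeds a crude threshold joins the active set, and a non-negative least-squares fit is redone on that set. The dictionary, observations, and threshold rule are synthetic assumptions.

```python
# Sketch: stagewise selection with a non-negativity-constrained refit,
# loosely StOMP-like. Operator, data, and threshold are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_obs, n_coef = 80, 200
H = rng.standard_normal((n_obs, n_coef))
H /= np.linalg.norm(H, axis=0)                    # unit-norm dictionary columns
true = np.zeros(n_coef)
true[rng.choice(n_coef, 8, replace=False)] = rng.uniform(1, 3, 8)  # sparse, non-negative field
y = H @ true + 0.01 * rng.standard_normal(n_obs)

active = np.zeros(n_coef, dtype=bool)
x = np.zeros(n_coef)
for stage in range(10):
    residual = y - H @ x
    corr = np.abs(H.T @ residual)
    threshold = 2.0 * residual.std()              # crude stagewise threshold (assumed)
    new = corr > threshold
    if not new.any():
        break
    active |= new
    # Non-negative fit restricted to the active set (enforces x >= 0).
    x_active, _ = nnls(H[:, active], y)
    x = np.zeros(n_coef)
    x[active] = x_active

print("nonzeros recovered:", np.count_nonzero(x), "true:", np.count_nonzero(true))
```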

  13. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, J.; Lee, J.; Yadav, V.

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It

  14. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  15. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
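
    A small sketch of the local-orientation feature described above, under assumptions: Sobel responses at four orientations (multiples of π/4) are computed and the dominant one selected per pixel; the input texture is a synthetic placeholder rather than a wood sample.

```python
# Sketch: per-pixel dominant orientation from Sobel masks at four angles.
import numpy as np
from scipy.ndimage import convolve

sobel_0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # vertical edges
sobel_90 = sobel_0.T                                                     # horizontal edges
sobel_45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
sobel_135 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)

rng = np.random.default_rng(0)
texture = rng.random((256, 256))                  # synthetic stand-in for a wood image

responses = np.stack([np.abs(convolve(texture, k))
                      for k in (sobel_0, sobel_45, sobel_90, sobel_135)])
dominant = responses.argmax(axis=0)               # 0..3 -> orientation index (multiples of pi/4)
print(np.bincount(dominant.ravel(), minlength=4)) # orientation histogram as a texture feature
```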

  16. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    NASA Technical Reports Server (NTRS)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  17. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use a region-of-interest extraction method to separate the genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  18. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both spatial resolution and transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  19. Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging

    NASA Astrophysics Data System (ADS)

    Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni

    This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurement of the arterial pulse. This is the only investigation that links knowledge of human physiology and anatomy, advances in thermal infrared (IR) imaging, and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both the time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results using a traditional contact vital-signs monitor as ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and internal review board approval. The resultant arterial pulse waveforms exactly matched the ground-truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never been reported in the literature.
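
    A simplified sketch of the wavelet step described above, under assumptions: the mean skin temperature of a region of measurement is decomposed and only the detail scales whose pass-band overlaps typical heart rates are kept. The sampling rate, signal, and scale selection are illustrative, not values from the chapter.

```python
# Sketch: wavelet band selection to isolate a pulse waveform from a slowly
# drifting thermal signal. Signal, frame rate, and kept scales are assumed.
import numpy as np
import pywt

fs = 30.0                                   # thermal camera frame rate (assumed), Hz
t = np.arange(0, 30, 1 / fs)
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm thermal modulation (synthetic)
drift = 0.5 * np.sin(2 * np.pi * 0.05 * t)
roi_mean_temp = 34.0 + drift + pulse + 0.005 * np.random.randn(t.size)

level = 5
coeffs = pywt.wavedec(roi_mean_temp, "db6", level=level)
# Detail level j spans roughly fs/2**(j+1) .. fs/2**j Hz; keep levels 3-4
# (~0.9-3.75 Hz at fs = 30 Hz) and zero everything else.
kept = [np.zeros_like(coeffs[0])] + [
    c if level - i in (3, 4) else np.zeros_like(c) for i, c in enumerate(coeffs[1:])
]
pulse_waveform = pywt.waverec(kept, "db6")[: t.size]
print(pulse_waveform.std())
```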

  20. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the ongoing shift from traditional medical treatments toward personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  1. Multiscale/multiresolution landslides susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Grozavu, Adrian; Cătălin Stanga, Iulian; Valeriu Patriche, Cristian; Toader Juravle, Doru

    2014-05-01

    Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. With this in mind, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scale require a quantitative approach (e.g. multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory over large regions (e.g. at the continental level), it is very important to verify the compatibility and validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at the regional level through a multiscale-multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at the European and national level, the latter allow validation of the results, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq. km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at the pan-European level and progressively completed and/or enhanced together with the scale and resolution: the topography (from SRTM at 90 meters to digital

  2. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to those on Earth, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, and so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150-1.2 m DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2 and Manning's n (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with other research cases using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough

  3. On the use of adaptive multiresolution method with time-varying tolerance for compressible fluid flows

    NASA Astrophysics Data System (ADS)

    Soni, V.; Hadjadj, A.; Roussel, O.

    2017-12-01

    In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed, which is assessed through a new quality criterion, to ensure a time-converged solution of suitable resolution quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and performance of the proposed method.
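
    A toy illustration of the idea of a time-varying multiresolution tolerance, not the scheme from the paper: at each step, the finest-scale detail coefficients of the solution are thresholded with a tolerance that tightens as integration proceeds, so that the accumulated truncation error stays controlled. The decay law and the "solver" are placeholders.

```python
# Toy sketch: threshold wavelet details of the solution with a tolerance
# eps(t) that decreases with time. Decay law and time stepping are assumed.
import numpy as np
import pywt

def adapt_mask(u, eps):
    """Keep grid regions whose finest-scale detail exceeds eps."""
    cA, cD = pywt.dwt(u, "db2")
    return np.abs(cD) > eps

nx, nt, dt = 1024, 200, 1e-3
x = np.linspace(0, 1, nx)
u = np.exp(-((x - 0.3) ** 2) / 0.001)          # initial pulse

eps0, alpha = 1e-3, 0.5
for n in range(nt):
    eps_t = eps0 / (1.0 + alpha * n * dt)      # tolerance tightens with time (assumed law)
    mask = adapt_mask(u, eps_t)
    u = np.roll(u, 1)                          # stand-in for one advection step
print("active detail coefficients at final step:", mask.sum())
```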

  4. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed, with different classifier structures. They are evaluated by comparison with baseline methods using sparse representation of features or no feature selection. The statistical analysis, performed by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values obtained with the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.

  5. Framework for multi-resolution analyses of advanced traffic management strategies.

    DOT National Transportation Integrated Search

    2016-11-01

    Demand forecasting models and simulation models have been developed, calibrated, and used in isolation of each other. However, the advancement of transportation system technologies and strategies, the increase in the availability of data, and the unc...

  6. A decadal observation of vegetation dynamics using multi-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chiang, Yang-Sheng; Chen, Kun-Shan; Chu, Chang-Jen

    2012-10-01

    Vegetation cover not only affects the habitability of the Earth but also provides a potential terrestrial mechanism for the mitigation of greenhouse gases. This study aims at quantifying such green resources by incorporating multi-resolution satellite images from different platforms, including Formosat-2 (RSI), SPOT (HRV/HRG), and Terra (MODIS), to investigate vegetation fractional cover (VFC) and its inter-/intra-annual variation in Taiwan. Given the different sensor capabilities in terms of spatial coverage and resolution, fusion of NDVI at different scales was used to determine the fraction of vegetation cover. Field campaigns were conducted on a monthly basis for 6 years to calibrate the critical NDVI threshold for the presence of vegetation cover, with test sites covering the IPCC-defined land cover types of Taiwan. Based on the proposed method, we analyzed spatio-temporal changes of VFC for the entire island of Taiwan. A bimodal sequence of VFC was observed for the intra-annual variation based on MODIS data, with a level around 5% and two peaks in spring and autumn marking the principal dual-cropping agriculture pattern in southwestern Taiwan. Compared to anthropogenic variation, the inter-annual VFC (Aug.-Oct.) derived from HRV/HRG/RSI reveals that the moderate variations (3%) and oscillations were strongly linked with regional climate patterns and major disturbances resulting from extreme weather events. Two distinct cycles (2002-2005 and 2005-2009) were identified in the decadal observations, with VFC peaks at 87.60% and 88.12% in 2003 and 2006, respectively. This time-series mapping of VFC can be used to examine vegetation dynamics and the response associated with short-term and long-term anthropogenic and natural events.
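
    A minimal sketch of the VFC computation implied above: NDVI is formed from red and near-infrared reflectance and the fractional cover is the share of pixels whose NDVI exceeds a calibrated threshold. The bands and the 0.4 threshold below are synthetic placeholders; the study calibrates the threshold from monthly field campaigns.

```python
# Sketch: NDVI-threshold estimate of vegetation fractional cover (VFC).
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.3, size=(500, 500))     # synthetic red reflectance
nir = rng.uniform(0.1, 0.6, size=(500, 500))      # synthetic near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)
threshold = 0.4                                   # assumed vegetation-presence threshold
vfc = float((ndvi > threshold).mean()) * 100.0    # vegetation fractional cover, percent
print(f"VFC = {vfc:.2f}%")
```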

  7. The Marine Geoscience Data System and the Global Multi-Resolution Topography Synthesis: Online Resources for Exploring Ocean Mapping Data

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.

    2016-02-01

    The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans, including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolution elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically ~100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter-scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases

  8. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multifrequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model is used that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures. The high-resolution simulated brightness temperatures are convolved to the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud-model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities and airborne observations are convolved to the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in the observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instrument resolutions are significantly different. Future work needs to better quantify retrieval performance, especially in connection with satellite applications, and the uncertainty of the models used in the retrieval.

  9. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be implemented very easily in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. The compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.

  10. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) using ICESat geodetic control

    USGS Publications Warehouse

    Carabajal, C.C.; Harding, D.J.; Boy, J.-P.; Danielson, Jeffrey J.; Gesch, D.B.; Suchdeo, V.P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February of 2003 to October of 2009, ICESat obtained nearly global measurements of land topography (±86° latitudes) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).

  11. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) Using ICESat Geodetic Control

    NASA Technical Reports Server (NTRS)

    Carabajal, Claudia C.; Harding, David J.; Boy, Jean-Paul; Danielson, Jeffrey J.; Gesch, Dean B.; Suchdeo, Vijay P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February of 2003 to October of 2009, ICESat obtained nearly global measurements of land topography (±86° latitudes) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m.
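
    A minimal sketch of the accuracy statistics reported above: the DEM is sampled at ground control point locations and the bias and RMSE of the elevation differences are computed. The DEM array and GCPs below are synthetic stand-ins for GMTED2010 and the ICESat control points.

```python
# Sketch: DEM-minus-GCP bias and RMSE. All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
dem = rng.uniform(0, 500, size=(1000, 1000))            # DEM elevations, meters (synthetic)

# Synthetic control points: (row, col, elevation) triples standing in for
# ICESat footprints already mapped into the DEM grid.
rows = rng.integers(0, 1000, 2000)
cols = rng.integers(0, 1000, 2000)
gcp_elev = dem[rows, cols] - rng.normal(3.0, 4.0, 2000)  # pretend the DEM sits ~3 m high

diff = dem[rows, cols] - gcp_elev                        # DEM minus control
bias = diff.mean()
rmse = np.sqrt((diff ** 2).mean())
print(f"bias = {bias:+.2f} m, RMSE = {rmse:.2f} m")      # cf. the ~3 m upward bias, ~4 m RMSE reported
```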

  12. Five Micron High Resolution MALDI Mass Spectrometry Imaging with Simple, Interchangeable, Multi-Resolution Optical System

    DOE PAGES

    Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin

    2017-01-03

    High-spatial-resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI from a practical laser spot size of ~9 μm to ~4 μm, thereby allowing for 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through interchanging of the beam expander component. Using a 10×, a 5×, or no beam expander, we could routinely change between ~4, ~7, and ~45 μm laser spot sizes in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section with three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. Lastly, we also demonstrated the difference in depth of focus between the optical systems with the 10× and 5× beam expanders.

  13. Five Micron High Resolution MALDI Mass Spectrometry Imaging with Simple, Interchangeable, Multi-Resolution Optical System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin

    High-spatial-resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI from a practical laser spot size of ~9 μm to ~4 μm, thereby allowing for 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through interchanging of the beam expander component. Using a 10×, a 5×, or no beam expander, we could routinely change between ~4, ~7, and ~45 μm laser spot sizes in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section with three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. Lastly, we also demonstrated the difference in depth of focus between the optical systems with the 10× and 5× beam expanders.

  14. Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification

    NASA Astrophysics Data System (ADS)

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim

    2018-02-01

    In this paper, Curvelet-based local attributes, the Curvelet-Local Configuration Pattern (C-LCP), are introduced for the characterization of mammographic masses as benign or malignant. Among different anomalies such as micro-calcification, bilateral asymmetry, architectural distortion, and masses, the reason for targeting mass lesions is their variation in shape, size, and margin, which makes diagnosis a challenging task. The multi-resolution property of the Curvelet transform, being efficient for classification, is exploited, and local information is extracted from the coefficients of each subband using the Local Configuration Pattern (LCP). The microscopic measures in concatenation with the local textural information provide more discriminating capability than either individually. The measures embody the magnitude information along with the pixel-wise relationships among neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database containing 100 mass cases each of benign and malignant. The optimal set of features is acquired via a stepwise logistic regression method and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some of the state-of-the-art competing methods.

  15. Multi-resolutional shape features via non-Euclidean wavelets: Applications to statistical analysis of cortical thickness

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.

    2014-01-01

    Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer’s disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer’s disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060

  16. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.

    PubMed

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter ( SP ), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  17. Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.

    PubMed

    Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein

    2017-12-13

    As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is specifically an issue during continuous EEG recording sessions, and identifying such artifacts among useful EEG components is therefore a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfacing. In this study, we aim to design a new generic framework in order to process and characterize an EEG recording as a multi-component and non-stationary signal with the aim of localizing and identifying its components (e.g., artifacts). In the proposed method, we bring three complementary algorithms together to enhance the efficiency of the system. The algorithms include time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space to various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, and extracting a set of suitable features and applying a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms

  18. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high-intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
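
    The 1-D infinite-slope method mentioned above reduces to a single factor-of-safety expression. The sketch below evaluates it with placeholder soil parameters that only loosely echo the experiment geometry (31° slope, ~0.65 m depth); the numbers are not the measured values from the study.

```python
# Infinite-slope factor of safety (limit equilibrium) with placeholder values.
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Factor of safety for a planar slide parallel to the slope surface.

    c_eff          effective cohesion (Pa)
    phi_deg        effective friction angle (degrees)
    gamma          soil unit weight (N/m^3)
    depth          vertical depth to the slide plane (m)
    beta_deg       slope angle (degrees)
    pore_pressure  pore-water pressure on the slide plane (Pa)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    driving_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    resisting_stress = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
    return resisting_stress / driving_stress

# Example: the factor of safety approaches 1 as pore pressure rises
print(infinite_slope_fs(c_eff=200.0, phi_deg=30.0, gamma=18000.0,
                        depth=0.65, beta_deg=31.0, pore_pressure=2000.0))
```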

  19. Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models

    NASA Astrophysics Data System (ADS)

    McEver, Jimmie; Davis, Paul K.; Bigelow, James H.

    2000-06-01

    We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.

  20. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR.

    PubMed

    Stokdyk, Joel P; Firnstahl, Aaron D; Spencer, Susan K; Burch, Tucker R; Borchardt, Mark A

    2016-06-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L⁻¹ assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation. Published by Elsevier Ltd.
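
    A minimal illustration of the probit step described above: hypothetical detection counts at a few spike concentrations are fit by maximum likelihood and then inverted to the concentration giving 95% detection probability. The data, starting values, and units below are invented for illustration, not taken from the study.

```python
# Probit fit of detection fraction vs. log10(concentration), then inversion to
# the 95% detection level. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

conc = np.array([2.0, 5.0, 10.0, 20.0])        # spike levels (gc/rxn), invented
detected = np.array([2, 6, 9, 10])             # positives out of n replicates
n = np.array([10, 10, 10, 10])
x = np.log10(conc)

def neg_log_lik(params):
    a, b = params
    p = np.clip(norm.cdf(a + b * x), 1e-9, 1 - 1e-9)
    return -np.sum(detected * np.log(p) + (n - detected) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method='Nelder-Mead')
a_hat, b_hat = fit.x

# Solve Phi(a + b*log10(c)) = 0.95 for c
lod95 = 10 ** ((norm.ppf(0.95) - a_hat) / b_hat)
print(f"95% LOD ~ {lod95:.1f} gc/rxn")
```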

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.

  2. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
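
    Schematically, the L1-regularized variational principle described above can be written for a single mode as below, where the parameter μ controls the trade-off between energy and spatial localization. This is a paraphrase of the construction, not the authors' exact notation.

```latex
\begin{equation}
  \psi_{\mathrm{cm}} = \arg\min_{\psi}\;
    \langle \psi \,|\, \hat{H} \,|\, \psi \rangle
    + \frac{1}{\mu} \int \lvert \psi(\mathbf{r}) \rvert \, d\mathbf{r},
  \qquad \text{subject to } \langle \psi | \psi \rangle = 1 .
\end{equation}
```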

  3. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  4. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    NASA Astrophysics Data System (ADS)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on or are justified by competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
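
    The NDVI referred to above is a simple per-pixel band ratio; the sketch below computes it with NumPy on placeholder reflectance arrays standing in for satellite bands.

```python
# Per-pixel NDVI from near-infrared and red reflectance; bands are placeholders.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, clipped to [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

nir_band = np.array([[0.45, 0.50], [0.30, 0.60]])
red_band = np.array([[0.10, 0.12], [0.20, 0.08]])
print(ndvi(nir_band, red_band))
```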

  5. Framework for multi-resolution analyses of advanced traffic management strategies [summary].

    DOT National Transportation Integrated Search

    2017-01-01

    Transportation planning relies extensively on software that can simulate and predict travel behavior in response to alternative transportation networks. However, different software packages view traffic at different scales. Some programs are based on...

  6. Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0

    NASA Astrophysics Data System (ADS)

    Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.

    2010-12-01

    The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of sonar data in the public domain that are gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google’s default ocean base map. NOAA

  7. Application of wavelet multi-resolution analysis for correction of seismic acceleration records

    NASA Astrophysics Data System (ADS)

    Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi

    2007-12-01

    During an earthquake, many stations record the ground motion, but only a few of them could be corrected using conventional high-pass and low-pass filtering methods and the others were identified as highly contaminated by noise and as a result useless. There are two major problems associated with these noisy records. First, since the signal to noise ratio (S/N) is low, it is not possible to discriminate between the original signal and noise either in the frequency domain or in the time domain. Consequently, it is not possible to cancel out noise using conventional filtering methods. The second problem is the non-stationary characteristics of the noise. In other words, in many cases the characteristics of the noise are varied over time and in these situations, it is not possible to apply frequency domain correction schemes. When correcting acceleration signals contaminated with high-level non-stationary noise, there is an important question whether it is possible to estimate the state of the noise in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components, and besides introducing a suitable criterion for identification of the noise among each component, also provides the required mathematical tool for correction of highly noisy acceleration records. In this paper, the characteristics of the wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long period correction of the acceleration records. This method has the advantage of stable results in displacement time history and response spectrum.
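
    A minimal wavelet de-noising sketch in the spirit of the correction scheme described above: decompose the record, soft-threshold the detail coefficients, and reconstruct. The synthetic record, choice of wavelet, and universal-threshold rule are assumptions, not the authors' exact procedure.

```python
# Decompose a noisy record, soft-threshold detail coefficients, reconstruct.
import numpy as np
import pywt

fs = 100.0                                            # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.4 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(accel, wavelet='db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level from finest scale
thresh = sigma * np.sqrt(2 * np.log(accel.size))      # universal threshold

denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
accel_denoised = pywt.waverec(denoised, wavelet='db4')[: accel.size]
```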

  8. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    PubMed

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (with full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) and a frame rate of 14 frames/s.

  9. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE
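
    The joint entropy prior referred to above is, in its simplest intensity-only form, computed from the joint histogram of two aligned images; the sketch below illustrates that computation on synthetic arrays standing in for PET and MR volumes.

```python
# Shannon joint entropy (in nats) from the joint intensity histogram.
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(2)
pet_like = rng.random((128, 128))
mri_like = 0.7 * pet_like + 0.3 * rng.random((128, 128))   # partially correlated
print(joint_entropy(pet_like, mri_like))
```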

  10. Old document image segmentation using the autocorrelation function and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy

    2013-01-01

    Recent progress in the digitization of heterogeneous collections of ancient documents has rekindled new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need of a characterization of their content using intermediate level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure), or the typographical parameters (logical structure). It is also parameter-free since it automatically adapts to the image content. In this paper, firstly, we detail our proposal to characterize the content of old documents by extracting the autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation without knowledge about the number of clusters using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images, which encompass six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We find a mean homogeneity accuracy of 85%. Those results help to represent a document by a hierarchy of layout structure and content, and to
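
    The autocorrelation descriptors mentioned above can be illustrated with a normalized 2D autocorrelation of an image block computed via the FFT (Wiener-Khinchin relation); the block here is a random placeholder for a page region.

```python
# Normalized (circular) 2D autocorrelation of an image block via the FFT.
import numpy as np

def autocorrelation_2d(block):
    block = block - block.mean()
    spectrum = np.fft.fft2(block)
    acf = np.fft.ifft2(spectrum * np.conj(spectrum)).real
    acf = np.fft.fftshift(acf)           # zero lag moved to the centre
    return acf / acf.max()

page_block = np.random.default_rng(3).random((256, 256))
acf = autocorrelation_2d(page_block)
print(acf.shape, float(acf.max()))
```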

  11. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) was aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch to the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  12. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated with non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick model), because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique was employed to achieve mammographic segmentation, in which two improvements were made: 1) consistent segmentation by incorporating an optimal centroid initialisation step, and 2) a significant reduction in missegmentations by using an adaptive cluster merging technique. A set of full field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and results indicated an improvement of 26% in segmented images with good quality when compared with the standard clustering approach. This in turn can be found useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  13. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas

    2017-03-01

    The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified, which limit the representativeness: on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken
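
    A hedged sketch of the multiresolution variance idea: PyWavelets' stationary (undecimated) Haar transform is used here as a stand-in for the maximum overlap discrete wavelet transform, and the transmittance series is synthetic rather than an observed pyranometer record.

```python
# Detail variance per level of an undecimated Haar transform of a synthetic
# transmittance series (a stand-in for MODWT-based wavelet power spectra).
import numpy as np
import pywt

rng = np.random.default_rng(4)
n_levels = 5
transmittance = 0.6 + 0.1 * rng.standard_normal(2 ** 11)  # length divisible by 2**n_levels

coeffs = pywt.swt(transmittance, wavelet='haar', level=n_levels)
detail_variances = [float(np.var(detail)) for _, detail in coeffs]
print(detail_variances)
```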

  14. A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.

    2004-12-01

    The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. A CPM identifies, conceptually, all the physical processes and engineering and socio

  15. Protons are one of the limiting factors in determining sensitivity of nano surface-assisted (+)-mode LDI MS analyses.

    PubMed

    Cho, Eunji; Ahn, Miri; Kim, Young Hwan; Kim, Jongwon; Kim, Sunghwan

    2013-10-01

    A proton source employing a nanostructured gold surface for use in (+)-mode laser desorption ionization mass spectrometry (LDI-MS) was evaluated. Analyses of a perdeuterated polyaromatic hydrocarbon compound dissolved in regular toluene, perdeuterated toluene, and deuterated methanol all showed that protonated ions were generated regardless of the solvent system. Therefore, it was concluded that residual water on the surface of the LDI plate was the major source of protons. The fact that residual water remaining after vacuum drying was the source of protons suggests that protons may be the limiting reagent in the LDI process and that overall ionization efficiency can be improved by incorporating an additional proton source. When extra proton sources, such as thiolate compounds and/or citric acid, were added to a nanostructured gold surface, the protonated signal abundance increased. These data show that protons are one of the limiting components in (+)-mode LDI MS analyses employing nanostructured gold surfaces. Therefore, it has been suggested that additional efforts are required to identify compounds that can act as proton donors without generating peaks that interfere with mass spectral interpretation.

  16. SeeSway - A free web-based system for analysing and exploring standing balance data.

    PubMed

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy to use and platform independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace such as center of pressure or mass displacement. This allows it to be used with systems including criterion reference commercial force platforms and three dimensional motion analysis, smartphones, accelerometers and low-cost technology such as Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate
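
    Two of the standard sway metrics listed above (path length and RMS amplitude) can be computed directly from an anterior-posterior/medial-lateral trace; the sketch below uses a synthetic random-walk trace rather than real centre-of-pressure data.

```python
# Path length and per-axis RMS of a 2D sway trace (synthetic random walk).
import numpy as np

def sway_metrics(ap, ml):
    ap = ap - ap.mean()
    ml = ml - ml.mean()
    steps = np.hypot(np.diff(ap), np.diff(ml))
    return {
        "path_length": float(steps.sum()),
        "rms_ap": float(np.sqrt(np.mean(ap ** 2))),
        "rms_ml": float(np.sqrt(np.mean(ml ** 2))),
    }

rng = np.random.default_rng(5)
ap_trace = np.cumsum(rng.standard_normal(3000)) * 0.01
ml_trace = np.cumsum(rng.standard_normal(3000)) * 0.01
print(sway_metrics(ap_trace, ml_trace))
```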

  17. Estimation of white matter fiber parameters from compressed multiresolution diffusion MRI using sparse Bayesian learning.

    PubMed

    Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe

    2018-02-15

    We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Shoreline change after 12 years of tsunami in Banda Aceh, Indonesia: a multi-resolution, multi-temporal satellite data and GIS approach

    NASA Astrophysics Data System (ADS)

    Sugianto, S.; Heriansyah; Darusman; Rusdi, M.; Karim, A.

    2018-04-01

    The Indian Ocean Tsunami event on 26 December 2004 caused severe damage to some shorelines in Banda Aceh City, Indonesia. The impact can be traced back using remote sensing data combined with GIS. The approach is incorporated with image processing to analyze the extent of shoreline change with multi-temporal data 12 years after the tsunami. This study uses multi-resolution and multi-temporal satellite images from QuickBird and IKONOS to demarcate the shoreline of Banda Aceh before and after the tsunami. The research demonstrates a significant change to the shoreline in the form of abrasion between 2004 and 2005, ranging from a few meters to hundreds of meters. Between 2004 and 2011 the shoreline did not return to its pre-tsunami position, which is considered a lasting post-tsunami impact. The abrasion ranges from 18.3 to 194.93 meters. Further, the change during 2009-2011 shows slow change of the Banda Aceh shoreline, considered unrelated to the tsunami, e.g. abrasion caused by ocean waves that erode the coast and, in specific areas, accretion caused by sediment carried by river flow into the sea near the shoreline of the study area.

  19. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines some predictions on the dip and position of the active fault and axial surface, with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first followed the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and the sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has a zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero friction case, a hanging wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dips increasing during extension resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring

  20. Geomorphometric multi-scale analysis for the recognition of Moon surface features using multi-resolution DTMs

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Jianping; Sofia, Giulia; Tarolli, Paolo

    2014-05-01

    Moon surface features have great significance in understanding and reconstructing the lunar geological evolution. Linear structures like rilles and ridges are closely related to the internal forced tectonic movement. The craters widely distributed on the moon are also the key research targets for external forced geological evolution. The extremely rare availability of samples and the difficulty of field work make remote sensing the most important approach for planetary studies. New and advanced lunar probes launched by China, U.S., Japan and India nowadays provide a large amount of high-quality data, especially in the form of high-resolution Digital Terrain Models (DTMs), bringing new opportunities and challenges for feature extraction on the moon. The aim of this study is to recognize and extract lunar features using geomorphometric analysis based on multi-scale parameters and multi-resolution DTMs. The considered digital datasets include CE1-LAM (Chang'E One, Laser AltiMeter) data with resolution of 500m/pix, LRO-WAC (Lunar Reconnaissance Orbiter, Wide Angle Camera) data with resolution of 100m/pix, LRO-LOLA (Lunar Reconnaissance Orbiter, Lunar Orbiter Laser Altimeter) data with resolution of 60m/pix, and LRO-NAC (Lunar Reconnaissance Orbiter, Narrow Angle Camera) data with resolution of 2-5m/pix. We considered surface derivatives to recognize the linear structures including rilles and ridges. Different window scales and thresholds are considered for feature extraction. We also calculated the roughness index to identify the erosion/deposit areas within craters. The results underline the suitability of the adopted methods for feature recognition on the moon surface. The roughness index is found to be a useful tool to distinguish new craters, with higher roughness, from the old craters, which present a smooth and less rough surface.
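
    The surface derivatives and roughness index mentioned above can be illustrated with a slope map and a moving-window standard deviation of elevation; the DEM, cell size, and window size in the sketch below are placeholders, not the lunar datasets listed in the abstract.

```python
# Slope (degrees) and a moving-window roughness index from a toy DEM.
import numpy as np
from scipy.ndimage import generic_filter

def slope_degrees(dem, cell_size):
    dzdy, dzdx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def roughness(dem, window=5):
    """Local standard deviation of elevation in a square moving window."""
    return generic_filter(dem, np.std, size=window)

rng = np.random.default_rng(6)
dem = np.cumsum(rng.standard_normal((64, 64)), axis=0)   # toy elevation surface
print(float(slope_degrees(dem, cell_size=60.0).mean()),
      float(roughness(dem).mean()))
```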

  1. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    USGS Publications Warehouse

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  2. Classification of glioblastoma and metastasis for neuropathology intraoperative diagnosis: a multi-resolution textural approach to model the background

    NASA Astrophysics Data System (ADS)

    Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.

    2014-03-01

    Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei region of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions correspond to a more homogeneous appearance, thus suitable texture features can be extracted from these regions to distinguish between the two tissue types. In our work, we use the Discrete Wavelet Frames to characterize the underlying texture due to its multi-resolution capability in modeling underlying texture. The textural characterization is carried out in primarily the non-nuclei regions after nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. k-nearest neighbor method was then used to classify the features into glioblastoma or metastasis cancer class. Experiment on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracy as high as 89.7% for glioblastoma, 87.5% for metastasis and 88.7% overall. Further studies are underway to incorporate nuclei region features into classification on an expanded dataset, as well as expanding the classification to more types of cancers.

  3. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies on the Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in the horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow not only influences the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we emphasize that other efforts, including the improvement of parameterizations, are still required.

  4. The Limited Informativeness of Meta-Analyses of Media Effects.

    PubMed

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously. © The Author(s) 2015.

  5. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  6. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform acquiring the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via Speckle Suppression Index (SSI) with results of 0.61, 0.71, and 0.73 for EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's method, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists’ qualitative evaluation the proposed method significantly improved image characteristics over standard baseline B mode images, and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet

  7. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction.

    PubMed

    Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Karnabatidis, Dimitrios; Hazle, John D; Kagadis, George C

    2014-07-01

    Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform acquiring the denoised US image. A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via Speckle Suppression Index (SSI) with results of 0.61, 0.71, and 0.73 for EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's method, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation the proposed method significantly improved image characteristics over standard baseline B mode images, and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. A new wavelet-based EFCM clustering model was introduced toward

  8. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
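
    One of the sensitivity analyses advocated above is a leave-one-out check of the pooled estimate; the sketch below recomputes a fixed-effect (inverse-variance) meta-analytic mean with each study removed in turn, using invented effect sizes and variances purely for illustration.

```python
# Leave-one-out sensitivity check of a fixed-effect (inverse-variance) mean.
import numpy as np

effects = np.array([0.30, 0.25, 0.80, 0.10, 0.35])    # hypothetical effect sizes
variances = np.array([0.02, 0.03, 0.05, 0.04, 0.02])  # hypothetical sampling variances

def fixed_effect_mean(es, var):
    weights = 1.0 / var
    return float(np.sum(weights * es) / np.sum(weights))

overall = fixed_effect_mean(effects, variances)
for i in range(effects.size):
    keep = np.arange(effects.size) != i
    loo = fixed_effect_mean(effects[keep], variances[keep])
    print(f"without study {i}: {loo:.3f} (overall {overall:.3f})")
```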

  9. Multiresolution imaging of mantle reflectivity structure using SS and P'P' precursors

    NASA Astrophysics Data System (ADS)

    Schultz, Ryan; Gu, Yu J.

    2013-10-01

    Knowledge of the mantle reflectivity structure is highly dependent on our ability to efficiently extract, and properly interpret, small seismic arrivals. Among the various data types and techniques, long-period SS/PP precursors and high-frequency receiver functions are routinely utilized to increase the confidence of the recovered mantle stratifications at distinct spatial scales. However, low resolution and a complex Fresnel zone are glaring weaknesses of SS precursors, while over-reliance on receiver distribution is a formidable challenge for the analysis of converted waves from oceanic regions. A promising high-frequency alternative to receiver functions is P'P' precursors, which are capable of resolving mantle structures at vertical and lateral resolutions of ~5 and ~200 km, respectively, owing to their spectral content, shallow angle of incidence and near-symmetric Fresnel zones. This study presents a novel processing method for both SS (or PP) and P'P' precursors based on deconvolution, stacking, Radon transform and depth migration. A suite of synthetic tests is performed to quantify the fidelity and stability of this method under different data conditions. Our multiresolution survey of the mantle at targeted areas near the Nazca-South America subduction zone reveals both olivine and garnet related transitions at depths below 400 km. We attribute a depressed 660 to thermal variations, whereas compositional variations atop the upper-mantle transition zone are needed to explain the diminished or highly complex reflected/scattered signals from the 410 km discontinuity. We also observe prominent P'P' reflections within the transition zone, and the anomalous amplitudes near the plate boundary zone indicate a sharp (~10 km thick) transition that likely resonates with the frequency content of P'P' precursors. The migration of SS precursors in this study shows no evidence of split 660 reflections, but potential majorite-ilmenite (590-640 km) and ilmenite
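    The processing chain described above combines deconvolution, stacking, a Radon transform and depth migration; the sketch below illustrates only the Radon (slant-stack) step for a gather of precursor traces. The linear tau-p parameterisation, the offsets in degrees, and all variable names are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def slant_stack(traces, offsets, dt, slownesses):
        """Linear Radon (tau-p) stack: for each trial slowness p, shift every trace by
        p * offset and sum, enhancing coherent precursor arrivals over incoherent noise."""
        ntr, nt = traces.shape
        t = dt * np.arange(nt)
        out = np.zeros((len(slownesses), nt))
        for i, p in enumerate(slownesses):
            for j in range(ntr):
                out[i] += np.interp(t + p * offsets[j], t, traces[j], left=0.0, right=0.0)
        return out / ntr

    # Hypothetical gather: 20 traces of 1000 samples at 0.5 s spacing, offsets in degrees,
    # scanning slownesses (s/degree) around the expected precursor moveout.
    gather = np.random.randn(20, 1000)
    radon = slant_stack(gather, offsets=np.linspace(100.0, 140.0, 20), dt=0.5,
                        slownesses=np.linspace(-2.0, 2.0, 81))
    ```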

  10. A new, multi-resolution bedrock elevation map of the Greenland ice sheet

    NASA Astrophysics Data System (ADS)

    Griggs, J. A.; Bamber, J. L.; Grisbed Consortium

    2010-12-01

    Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as that from NASA’s Operation Ice Bridge, becomes available. The University of Kansas has, in recent years, flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected by the WISE (Warm Ice Sounding Experiment) covering the marginal areas along the south-west coast from 2009. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F

  11. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

    NASA Astrophysics Data System (ADS)

    Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

    2016-05-01

    Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of 16O, 208Pb, and 238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm-3, proton fraction of Yp = 0.3, and temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.

  12. Multiscale geometric modeling of macromolecules II: Lagrangian representation

    PubMed Central

    Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger’s functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses as well as multiresolution curvature characterizations. The resulting fine resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interactions and ion channel dynamics, while our coarse resolution representations highlight the compatibility of protein-ligand bindings and the possibility of protein-protein interactions. PMID:23813599

  13. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth

    PubMed Central

    Folsom, James Patrick

    2015-01-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  14. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    PubMed

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge--a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed major influences on the effectiveness of the magic trick of object height and object area, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.
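    The angular speeds quoted above convert directly between degrees per second and revolutions per second; a one-line check of the reported values:

    ```python
    # 540 deg/s and 1125 deg/s expressed in revolutions per second (360 deg = 1 rev).
    for deg_per_s in (540.0, 1125.0):
        print(deg_per_s, "deg/s =", deg_per_s / 360.0, "rev/s")   # 1.5 and 3.125 rev/s
    ```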

  15. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery.

    PubMed

    Belgiu, Mariana; Drăguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea
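    The overall accuracies of 82% to 86% quoted above are standard confusion-matrix summaries; a minimal sketch of how such a figure is obtained, using a made-up matrix rather than the study's data:

    ```python
    import numpy as np

    # Hypothetical confusion matrix for a building / non-building classification
    # (rows = reference classes, columns = predicted classes).
    cm = np.array([[412,  48],
                   [ 61, 379]])
    overall_accuracy = np.trace(cm) / cm.sum()   # proportion of correctly classified samples
    print(round(overall_accuracy, 3))
    ```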

  16. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    PubMed

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on

  17. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
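    Because the wavelets are constructed to be (bi)orthogonal in the operator-induced inner product, the Galerkin stiffness matrix is diagonal and each expansion coefficient follows from a single division. The sketch below shows only that trivial O(N) solve; the construction of the operator-adapted wavelets themselves is not reproduced, and all names are illustrative assumptions.

    ```python
    import numpy as np

    def galerkin_coefficients(rhs_projections, diag_stiffness):
        """Solve K c = b when K is diagonal: c_i = b_i / K_ii.
        rhs_projections: inner products <f, psi_i> of the right-hand side with each wavelet.
        diag_stiffness:  diagonal entries <L psi_i, psi_i> of the operator-adapted basis."""
        return np.asarray(rhs_projections, float) / np.asarray(diag_stiffness, float)
    ```

    Each coefficient costs O(1), so assembling the approximate solution from N basis functions is O(N), which is the point of adapting the basis to the operator.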

  18. Using multi-resolution proxies to assess ENSO impacts on the mean state of the tropical Pacific.

    NASA Astrophysics Data System (ADS)

    Karamperidou, C.; Conroy, J. L.

    2016-12-01

    guided by the fundamental and open question of multi-scale interactions in the tropical Pacific, and illustrates the need for multi-resolution paleoclimate proxies and their potential uses.

  19. Hidden Costs: the ethics of cost-effectiveness analyses for health interventions in resource-limited settings

    PubMed Central

    Rutstein, Sarah E.; Price, Joan T.; Rosenberg, Nora E.; Rennie, Stuart M.; Biddle, Andrea K.; Miller, William C.

    2017-01-01

    Cost-effectiveness analysis (CEA) is an increasingly appealing tool for evaluating and comparing health-related interventions in resource-limited settings. The goal is to inform decision-makers regarding the health benefits and associated costs of alternative interventions, helping guide allocation of limited resources by prioritizing interventions that offer the most health for the least money. Although only one component of a more complex decision-making process, CEAs influence the distribution of healthcare resources, directly influencing morbidity and mortality for the world’s most vulnerable populations. However, CEA-associated measures are frequently setting-specific valuations, and CEA outcomes may violate ethical principles of equity and distributive justice. We examine the assumptions and analytical tools used in CEAs that may conflict with societal values. We then evaluate contextual features unique to resource-limited settings, including the source of health-state utilities and disability weights; implications of CEA thresholds in light of economic uncertainty; and the role of external donors. Finally, we explore opportunities to help align interpretation of CEA outcomes with values and budgetary constraints in resource-limited settings. The ethical implications of CEAs in resource-limited settings are vast. It is imperative that CEA outcome summary measures and implementation thresholds adequately reflect societal values and ethical priorities in resource-limited settings. PMID:27141969
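    The central summary measure in most CEAs discussed above is the incremental cost-effectiveness ratio (ICER); the abstract gives no formulas, so the sketch below is a generic, illustrative calculation with hypothetical numbers.

    ```python
    def icer(cost_new, cost_comparator, effect_new, effect_comparator):
        """Incremental cost-effectiveness ratio: additional cost per additional unit of
        health effect (e.g. per DALY averted or QALY gained)."""
        return (cost_new - cost_comparator) / (effect_new - effect_comparator)

    # Hypothetical: a new intervention costs $120,000 and averts 400 DALYs, versus
    # $80,000 and 250 DALYs for the comparator -> about $267 per DALY averted.
    print(round(icer(120_000, 80_000, 400, 250), 2))
    ```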

  20. Hidden costs: The ethics of cost-effectiveness analyses for health interventions in resource-limited settings.

    PubMed

    Rutstein, Sarah E; Price, Joan T; Rosenberg, Nora E; Rennie, Stuart M; Biddle, Andrea K; Miller, William C

    2017-10-01

    Cost-effectiveness analysis (CEA) is an increasingly appealing tool for evaluating and comparing health-related interventions in resource-limited settings. The goal is to inform decision-makers regarding the health benefits and associated costs of alternative interventions, helping guide allocation of limited resources by prioritising interventions that offer the most health for the least money. Although only one component of a more complex decision-making process, CEAs influence the distribution of health-care resources, directly influencing morbidity and mortality for the world's most vulnerable populations. However, CEA-associated measures are frequently setting-specific valuations, and CEA outcomes may violate ethical principles of equity and distributive justice. We examine the assumptions and analytical tools used in CEAs that may conflict with societal values. We then evaluate contextual features unique to resource-limited settings, including the source of health-state utilities and disability weights, implications of CEA thresholds in light of economic uncertainty, and the role of external donors. Finally, we explore opportunities to help align interpretation of CEA outcomes with values and budgetary constraints in resource-limited settings. The ethical implications of CEAs in resource-limited settings are vast. It is imperative that CEA outcome summary measures and implementation thresholds adequately reflect societal values and ethical priorities in resource-limited settings.

  1. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    PubMed

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.
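    The congruence reported above (Spearman's rho of 0.782 and 0.589) is a rank correlation between image-derived and field-measured mensuration values; a minimal sketch with hypothetical paired crown-width measurements:

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical paired crown widths (m): OBIA-delineated vs field-measured.
    obia_crowns  = np.array([8.2, 11.5, 6.4, 14.0, 9.1, 12.7])
    field_crowns = np.array([7.9, 12.1, 6.0, 13.2, 9.8, 11.9])
    rho, p_value = spearmanr(obia_crowns, field_crowns)
    print(rho, p_value)
    ```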

  2. Mapping and Characterizing Selected Canopy Tree Species at the Angkor World Heritage Site in Cambodia Using Aerial Data

    PubMed Central

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia’s tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman’s rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148

  3. Multi-Resolution Playback of Network Trace Files

    DTIC Science & Technology

    2015-06-01

    a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries.

  4. Measuring the Performance and Intelligence of Systems: Proceedings of the 2001 PerMIS Workshop

    DTIC Science & Technology

    2001-09-04

    1.1 Interval Mathematics for Analysis of Multiresolutional Systems V. Kreinovich, Univ. of Texas, R. Alo, Univ. of Houston-Downtown...the possible combinations. In non-deterministic real-time systems, the problem is compounded by the uncertainty in the execution times of various...multiresolutional, multiscale) in their essence because of multiresolutional character of the meaning of words [Rieger, 01]. In integrating systems, the presence of a

  5. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    PubMed Central

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. The first is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that required knowing in advance what the compounds of interest were. Second, by utilizing the high-resolution capabilities coupled with the low detection limits of FTMS, analysts could also dilute the sample sufficiently to minimize the ionization changes from varied matrices. PMID:26784175
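    The mass accuracy referred to above is conventionally expressed as a parts-per-million error between measured and theoretical m/z; a minimal illustration with hypothetical values:

    ```python
    def ppm_error(mz_measured, mz_theoretical):
        """Mass accuracy in parts per million."""
        return (mz_measured - mz_theoretical) / mz_theoretical * 1e6

    # Hypothetical: a residue with theoretical m/z 285.0789 measured at 285.0795 -> ~2.1 ppm.
    print(round(ppm_error(285.0795, 285.0789), 2))
    ```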

  6. NEXT Ion Thruster Performance Dispersion Analyses

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The NEXT ion thruster is a low specific mass, high performance thruster with a nominal throttling range of 0.5 to 7 kW. Numerous engineering model and one prototype model thrusters have been manufactured and tested. Of significant importance to propulsion system performance is thruster-to-thruster performance dispersions. This type of information can provide a bandwidth of expected performance variations both on a thruster and a component level. Knowledge of these dispersions can be used to more conservatively predict thruster service life capability and thruster performance for mission planning, facilitate future thruster performance comparisons, and verify power processor capabilities are compatible with the thruster design. This study compiles the test results of five engineering model thrusters and one flight-like thruster to determine unit-to-unit dispersions in thruster performance. Component level performance dispersion analyses will include discharge chamber voltages, currents, and losses; accelerator currents, electron backstreaming limits, and perveance limits; and neutralizer keeper and coupling voltages and the spot-to-plume mode transition flow rates. Thruster level performance dispersion analyses will include thrust efficiency.

  7. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients.

    PubMed

    Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong

    2013-01-07

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an (18)F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, has the advantages of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
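    The registration described above is driven by demons (optical-flow-like) forces and refined coarse-to-fine on an image pyramid. The sketch below implements only the classical Thirion demons update in 2D with NumPy/SciPy; the gradient-of-mutual-information force and the full multiresolution pyramid of the paper are not reproduced, and all parameters are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_2d(fixed, moving, iterations=100, sigma_fluid=1.0, sigma_diffusion=1.0):
        """Classical (Thirion) demons: estimate a displacement field s so that
        moving(x + s(x)) ~ fixed(x). In practice this would be run coarse-to-fine
        on a Gaussian pyramid, as in the multiresolution scheme described above."""
        fixed = fixed.astype(float)
        moving = moving.astype(float)
        gy, gx = np.gradient(fixed)                      # gradient of the fixed image
        disp = np.zeros((2,) + fixed.shape)              # displacement field (dy, dx)
        yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                             np.arange(fixed.shape[1]), indexing="ij")
        for _ in range(iterations):
            warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]],
                                     order=1, mode="nearest")
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2
            denom[denom == 0] = 1e-12
            # demons force, smoothed ("fluid-like") before composition
            uy = gaussian_filter(-diff * gy / denom, sigma_fluid)
            ux = gaussian_filter(-diff * gx / denom, sigma_fluid)
            disp[0] = gaussian_filter(disp[0] + uy, sigma_diffusion)   # "diffusion-like"
            disp[1] = gaussian_filter(disp[1] + ux, sigma_diffusion)   # regularisation
        return disp
    ```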

  8. ANALYSES OF RESPONSE–STIMULUS SEQUENCES IN DESCRIPTIVE OBSERVATIONS

    PubMed Central

    Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason

    2009-01-01

    Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537
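    A lag-sequential analysis of the kind described above estimates the conditional probability that a given environmental event follows the target behavior at a fixed lag; a minimal sketch over a coded event stream, where the codes and data are hypothetical:

    ```python
    import numpy as np

    def lag_probability(stream, given_code, target_code, lag):
        """P(stream[t + lag] == target_code | stream[t] == given_code)."""
        stream = np.asarray(stream)
        idx = np.where(stream[:len(stream) - lag] == given_code)[0]
        if idx.size == 0:
            return np.nan
        return float(np.mean(stream[idx + lag] == target_code))

    # Hypothetical coded observation: B = problem behavior, A = attention, N = nothing.
    coded = list("NNBANBNNBNANBAN")
    print(lag_probability(coded, "B", "A", lag=1))   # probability attention follows behavior
    ```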

  9. Microstructures, Forming Limit and Failure Analyses of Inconel 718 Sheets for Fabrication of Aerospace Components

    NASA Astrophysics Data System (ADS)

    Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Narayana; Sharma, Sharad Chandra

    2017-04-01

    Recently, aerospace industries have shown increasing interest in the forming limits of Inconel 718 sheet metal, which can be utilised in designing tools and selecting process parameters for successful fabrication of components. In the present work, the stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of a highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, a fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of the sheet metal.
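    Converting fracture limits into stress-triaxiality space, as above, relies on the standard definition of triaxiality as mean stress over von Mises equivalent stress; a minimal sketch for a (near) plane-stress sheet state, with illustrative stress values rather than the paper's data:

    ```python
    import math

    def stress_triaxiality(sigma1, sigma2, sigma3=0.0):
        """eta = mean stress / von Mises equivalent stress (sigma3 ~ 0 for a thin sheet)."""
        sigma_m = (sigma1 + sigma2 + sigma3) / 3.0
        sigma_eq = math.sqrt(0.5 * ((sigma1 - sigma2) ** 2 +
                                    (sigma2 - sigma3) ** 2 +
                                    (sigma3 - sigma1) ** 2))
        return sigma_m / sigma_eq

    # Equibiaxial stretching (sigma1 = sigma2, sigma3 = 0) gives eta = 2/3;
    # uniaxial tension (sigma2 = sigma3 = 0) gives eta = 1/3.
    print(stress_triaxiality(300.0, 300.0), stress_triaxiality(300.0, 0.0))
    ```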

  10. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients

    PubMed Central

    Jin, Shuo; Li, Dengwang; Yin, Yong

    2013-01-01

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, has the advantages of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications. PACS numbers: 87.57.nj, 87.57.Q‐, 87.57.uk PMID:23318381

  11. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    NASA Astrophysics Data System (ADS)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10s of cms) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the potential to be able to overlay different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs and in the case of HiRISE these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen scientist validation of such data mining processed results. An example of the iMars website will be shown along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  12. Using Meta-analyses for Comparative Effectiveness Research

    PubMed Central

    Ruppar, Todd M.; Phillips, Lorraine J.; Chase, Jo-Ana D.

    2012-01-01

    Comparative effectiveness research seeks to identify the most effective interventions for particular patient populations. Meta-analysis is an especially valuable form of comparative effectiveness research because it emphasizes the magnitude of intervention effects rather than relying on tests of statistical significance among primary studies. Overall effects can be calculated for diverse clinical and patient-centered variables to determine the outcome patterns. Moderator analyses compare intervention characteristics among primary studies by determining if effect sizes vary among studies with different intervention characteristics. Intervention effectiveness can be linked to patient characteristics to provide evidence for patient-centered care. Moderator analyses often answer questions never posed by primary studies because neither multiple intervention characteristics nor populations are compared in single primary studies. Thus meta-analyses provide unique contributions to knowledge. Although meta-analysis is a powerful comparative effectiveness strategy, methodological challenges and limitations in primary research must be acknowledged to interpret findings. PMID:22789450

  13. Scalar limitations of diffractive optical elements

    NASA Technical Reports Server (NTRS)

    Johnson, Eric G.; Hochmuth, Diane; Moharam, M. G.; Pommet, Drew

    1993-01-01

    In this paper, scalar limitations of diffractive optic components are investigated using coupled wave analyses. Results are presented for linear phase gratings and fanout devices. In addition, a parametric curve is given which correlates feature size with scalar performance.

  14. Amino acid analyses of Apollo 14 samples.

    NASA Technical Reports Server (NTRS)

    Gehrke, C. W.; Zumwalt, R. W.; Kuo, K.; Aue, W. A.; Stalling, D. L.; Kvenvolden, K. A.; Ponnamperuma, C.

    1972-01-01

    Detection limits were between 300 pg and 1 ng for different amino acids in an analysis by gas-liquid chromatography of water extracts from Apollo 14 lunar fines, in which amino acids were converted to their N-trifluoro-acetyl-n-butyl esters. Initial analyses of water and HCl extracts of samples 14240 and 14298 showed no amino acids above background levels.

  15. Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses

    USGS Publications Warehouse

    Jarvi, Susan I.; Bianchi, Kiara R.

    2006-01-01

    affected by the mutation rate at microsatellite loci, thus introducing a bias. Also, the number of loci that can be studied is frequently limited to fewer than 10. This theoretically represents a maximum of one marker for each of 10 chromosomes. Dominant markers like AFLP allow a larger fraction of the genome to be screened. Large numbers of loci can be screened by AFLP to resolve very small individual differences that can be used for identification of individuals, estimates of pairwise relatedness and, in some cases, for parentage analyses. Since AFLP is a dominant marker (can not distinguish between +/+ homozygote versus +/- heterozygote), it has limitations for parentage analyses. Only when both parents are homozygous for the absence of alleles (-/-) and offspring show a presence (+/+ or +/-) can the parents be excluded. In this case, microsatellites become preferable as they have the potential to exclude individual parents when the other parent is unknown. Another limitation of AFLP is that the loci are generally less polymorphic (only two alleles/locus) than microsatellite loci (often >10 alleles/locus). While generally fewer than 10 highly polymorphic microsatellite loci are enough to exclude and assign parentage, it might require up to 100 or more AFLP loci. While there are pros and cons to different methodologies, the total number of loci evaluated by AFLP generally offsets the limitations imposed due to the dominant nature of this approach and end results between methods are generally comparable. Overall objectives of this study were to evaluate the level of genetic diversity in the captive population of Alala, to compare genetic data with currently available pedigree information, and to determine the extent of relatedness of mating pairs and among founding individuals.

  16. Multi-Resolution Imaging of Electron Dynamics in Nanostructure Interfaces

    DTIC Science & Technology

    2010-07-27

    metallic carbon nanotubes from semiconducting ones. In pentacene transistors, we used scanning photocurrent microscopy to study spatially resolved...photoelectric response of pentacene thin films, which showed that point contacts formed near the hole injection points limit the overall performance of the...photothermal current microscopy, carbon nanotube transistor, pentacene transistor, contact resistance, hole injection

  17. Limitations of analyses based on achieved blood pressure: lessons from the African American study of kidney disease and hypertension trial.

    PubMed

    Davis, Esa M; Appel, Lawrence J; Wang, Xuelei; Greene, Tom; Astor, Brad C; Rahman, Mahboob; Toto, Robert; Lipkowitz, Michael S; Pogue, Velvie A; Wright, Jackson T

    2011-06-01

    Blood pressure (BP) guidelines that set target BP levels often rely on analyses of achieved BP from hypertension treatment trials. The objective of this article was to compare the results of analyses of achieved BP to intention-to-treat analyses on renal disease progression. Participants (n=1094) in the African-American Study of Kidney Disease and Hypertension Trial were randomly assigned to either usual BP goal defined by a mean arterial pressure goal of 102 to 107 mm Hg or lower BP goal defined by a mean arterial pressure goal of ≤92 mm Hg. Median follow-up was 3.7 years. Primary outcomes were rate of decline in measured glomerular filtration rate and a composite of a decrease in glomerular filtration rate by >50% or >25 mL/min per 1.73 m(2), requirement for dialysis, transplantation, or death. Intention-to-treat analyses showed no evidence of a BP effect on either the rate of decline in glomerular filtration rate or the clinical composite outcome. In contrast, the achieved BP analyses showed that each 10-mm Hg increment in mean follow-up achieved mean arterial pressure was associated with a 0.35 mL/min per 1.73 m(2) (95% CI: 0.08 to 0.62 mL/min per 1.73 m(2); P=0.01) faster mean glomerular filtration rate decline and a 17% (95% CI: 5% to 32%; P=0.006) increased risk of the clinical composite outcome. Analyses based on achieved BP lead to markedly different inferences than traditional intention-to-treat analyses, attributed in part to confounding of achieved BP with comorbidities, disease severity, and adherence. Clinicians and policy makers should exercise caution when making treatment recommendations based on analyses relating outcomes to achieved BP.

  18. A new multiresolution method applied to the 3D reconstruction of small bodies

    NASA Astrophysics Data System (ADS)

    Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.

    2012-12-01

    The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves, hills, etc. In the case of small bodies, the shape is often only constrained by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are, however, photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists in alternating between two resolutions in order to obtain an optimized shape model at a given resolution before going to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape

  19. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE PAGES

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...

    2017-06-21

    Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode’s performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely-used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides the best practices for performing structural refinement of lithium transition metal oxides.

  20. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.

    Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode’s performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely-used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides the best practices for performing structural refinement of lithium transition metal oxides.

  1. Limits in the evolution of biological form: a theoretical morphologic perspective.

    PubMed

    McGhee, George R

    2015-12-06

    Limits in the evolution of biological form can be empirically demonstrated by using theoretical morphospace analyses, and actual analytic examples are given for univalved ammonoid shell form, bivalved brachiopod shell form and helical bryozoan colony form. Limits in the evolution of form in these animal groups can be shown to be due to functional and developmental constraints on possible evolutionary trajectories in morphospace. Future evolutionary-limit research is needed to analyse the possible existence of temporal constraint in the evolution of biological form on Earth, and in the search for the possible existence of functional alien life forms on Titan and Triton that are developmentally impossible for Earth life.

  2. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence, stochasticity or randomness has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions of real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  3. related: an R package for analysing pairwise relatedness from codominant molecular markers.

    PubMed

    Pew, Jack; Muir, Paul H; Wang, Jinliang; Frasier, Timothy R

    2015-05-01

    Analyses of pairwise relatedness represent a key component to addressing many topics in biology. However, such analyses have been limited for two reasons. First, most available programs estimate relatedness based on only a single estimator, making comparison across estimators difficult. Second, all programs to date have been platform-specific, working only on a specific operating system. This has the undesirable outcome of making the choice of relatedness estimator limited by operating system preference rather than by scientific rationale. Here, we present a new R package, called related, that can calculate relatedness based on seven estimators, can account for genotyping errors, missing data and inbreeding, and can estimate 95% confidence intervals. Moreover, simulation functions are provided that allow for easy comparison of the performance of different estimators and for analyses of how much resolution to expect from a given data set. Because this package works in R, it is platform independent. Combined, this functionality should allow for more appropriate analyses and interpretation of pairwise relatedness and will also allow for the integration of relatedness data into larger R workflows. © 2014 John Wiley & Sons Ltd.

  4. Multi-country health surveys: are the analyses misleading?

    PubMed

    Masood, Mohd; Reidpath, Daniel D

    2014-05-01

    The aim of this paper was to review the types of approaches currently utilized in the analysis of multi-country survey data, focusing specifically on design and modeling issues in analyses of significant multi-country surveys published in 2010. A systematic search strategy was used to identify the 10 multi-country surveys and the articles published from them in 2010. The surveys were selected to reflect diverse topics and foci, and to provide an insight into analytic approaches across research themes. The search identified 159 articles appropriate for full text review and data extraction. The analyses adopted in the multi-country surveys can be broadly classified as univariate/bivariate analyses and multivariate/multivariable analyses. Multivariate/multivariable analyses may be further divided into design- and model-based analyses. Of the 159 articles reviewed, 129 used model-based analyses and 30 used design-based analyses. Similar patterns could be seen in all the individual surveys. While there is general agreement among survey statisticians that complex surveys are most appropriately analyzed using design-based analyses, most researchers continued to use the more common model-based approaches. Recent developments in design-based multi-level analysis may be one approach to include all the survey design characteristics. This is, however, a relatively new area, and further statistical as well as applied analytic research is required. An important limitation of this study relates to the selection of the surveys used and the choice of year for the analysis, i.e., year 2010 only. There is, however, no strong reason to believe that analytic strategies have changed radically in the past few years, and 2010 provides a credible snapshot of current practice.
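    The design-based analyses referred to above weight each respondent by the inverse of their selection probability, whereas a naive (model-based or unweighted) summary ignores the sampling design; a minimal sketch contrasting the two estimates of a prevalence, with hypothetical data:

    ```python
    import numpy as np

    # Hypothetical survey: y is a binary health indicator, w the sampling weight
    # (inverse selection probability) for each respondent.
    y = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
    w = np.array([1.0, 1.0, 2.5, 2.5, 0.8, 0.8, 3.0, 3.0, 1.2, 1.2])

    unweighted = y.mean()                     # naive estimate, ignores the sampling design
    design_based = np.sum(w * y) / np.sum(w)  # Horvitz-Thompson style weighted estimate
    print(unweighted, round(design_based, 3))
    ```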

  5. Dose-related beneficial and harmful effects of gabapentin in postoperative pain management – post hoc analyses from a systematic review with meta-analyses and trial sequential analyses

    PubMed Central

    Fabritius, Maria Louise; Wetterslev, Jørn; Mathiesen, Ole; Dahl, Jørgen B

    2017-01-01

    Background During the last 15 years, gabapentin has become an established component of postoperative pain treatment. Gabapentin has been employed in a wide range of doses, but little is known about the optimal dose, providing the best balance between benefit and harm. This systematic review with meta-analyses aimed to explore the beneficial and harmful effects of various doses of gabapentin administered to surgical patients. Materials and methods Data in this paper were derived from an original review, and the subgroup analyses were predefined in an International Prospective Register of Systematic Reviews published protocol: PROSPERO (ID: CRD42013006538). The methods followed Cochrane guidelines. The Cochrane Library’s CENTRAL, PubMed, EMBASE, Science Citation Index Expanded, Google Scholar, and FDA database were searched for relevant trials. Randomized clinical trials comparing gabapentin versus placebo were included. Four different dose intervals were investigated: 0–350, 351–700, 701–1050, and >1050 mg. Primary co-outcomes were 24-hour morphine consumption and serious adverse events (SAEs), with emphasis put on trials with low risk of bias. Results One hundred and twenty-two randomized clinical trials, with 8466 patients, were included. Sixteen were overall low risk of bias. No consistent increase in morphine-sparing effect was observed with increasing doses of gabapentin from the trials with low risk of bias. Analyzing all trials, the smallest and the highest dose subgroups demonstrated numerically the most prominent reduction in morphine consumption. Twenty-seven trials reported 72 SAEs, of which 83% were reported in the >1050 mg subgroup. No systematic increase in SAEs was observed with increasing doses of gabapentin. Conclusion Data were sparse, and the small number of trials with low risk of bias is a major limitation for firm conclusions. Taking these limitations into account, we were not able to demonstrate a clear relationship between the dosage

  6. Multiresolution modeling with a JMASS-JWARS HLA Federation

    NASA Astrophysics Data System (ADS)

    Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher

    2002-07-01

    CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi-resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi-resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.

  7. [Functional limitations associated with lumbosacral spine pain in pregnant women].

    PubMed

    Brylewska-Pinda, Magdalena; Kemicer-Chmielewska, Ewa; Pierzak-Sominka, Joanna; Mosiejczuk, Hanna

    Lower back pain affects most pregnant women. Pain is often associated with varying degrees of functional limitations, causing a problem for pregnant women in the performance of many everyday activities. The aim of the study was to assess the extent to which lumbosacral spine pain caused limitations in the daily functioning of pregnant women, and the relationship between reported restrictions and analysed variables. The study was conducted in the city of Szczecin in Poland among 81 pregnant women. Data were collected using a standardized Oswestry questionnaire survey (The Oswestry Lower Back Pain Disability Questionnaire). Results were analysed using the χ² test of independence. The significance level was adopted at p < 0.05. The majority of women pregnant for the second time (n = 38) had mild disability. The relationship between the degree of disability and the order of pregnancies was statistically significant (χ² = 40.457, p = 0.0000000085). The majority of pregnant women had minor functional limitations due to pain in the lumbosacral spine region. The degree of functional limitations depends on the trimester of pregnancy and the order of pregnancies.
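
    A minimal sketch of the χ² test of independence used in the study, written in Python with SciPy; the contingency table of pregnancy order versus disability category below is hypothetical, not the study's data.

      import numpy as np
      from scipy.stats import chi2_contingency

      # Rows: 1st, 2nd, 3rd+ pregnancy; columns: minimal, moderate, severe disability.
      table = np.array([
          [20,  8, 2],
          [10, 38, 3],
          [ 5, 10, 4],
      ])

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4g}")
      if p < 0.05:
          print("Reject independence: disability category depends on pregnancy order.")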

  8. Phospholipid and Respiratory Quinone Analyses From Extreme Environments

    NASA Astrophysics Data System (ADS)

    Pfiffner, S. M.

    2008-12-01

    Extreme environments on Earth have been chosen as surrogate sites to test methods and strategies for the deployment of spacecraft in the search for extraterrestrial life. Surrogate sites for many of the NASA astrobiology institutes include the South African gold mines, Canadian subpermafrost, Atacama Desert, and acid rock drainage. Soils, sediments, rock cores, fracture waters, biofilms, and service and drill waters represent the types of samples collected from these sites. These samples were analyzed by gas chromatography mass spectrometry for phospholipid fatty acid methyl esters and by high performance liquid chromatography atmospheric pressure chemical ionization tandem mass spectrometry for respiratory quinones. Phospholipid analyses provided estimates of biomass, community composition, and compositional changes related to nutritional limitations or exposure to toxic conditions. Similar to phospholipid analyses, respiratory quinone analyses afforded identification of certain types of microorganisms in the community based on respiration and offered clues to in situ redox conditions. Depending on the number of samples analyzed, selected multivariate statistical methods were applied to relate membrane lipid results with site biogeochemical parameters. Successful detection of life signatures and refinement of methodologies at surrogate sites on Earth will be critical for the recognition of extraterrestrial life. At this time, membrane lipid analyses provide useful information not easily obtained by other molecular techniques.

  9. The Partition of Multi-Resolution LOD Based on Qtm

    NASA Astrophysics Data System (ADS)

    Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.

    2011-08-01

    The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To address the problem that the partition hierarchy of QTM is limited by the capability of the computer hardware, a new method of Multi-Resolution LOD (Level of Details) based on QTM is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivisions, it satisfies the requirement of locally unlimited partition.
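
    A minimal sketch, in Python, of the viewpoint-dependent level-of-detail idea described above: cells nearer the viewpoint receive more subdivision levels. The distance thresholds and maximum level are illustrative assumptions, not values from the paper.

      def lod_level(distance_to_viewpoint, max_level=10):
          """Map a cell's distance from the viewpoint to a subdivision level."""
          # Assumed distance thresholds (km): beyond the largest, use 1 level only.
          thresholds = [50, 100, 200, 400, 800, 1600, 3200, 6400, 12800]
          for level, limit in enumerate(reversed(thresholds), start=1):
              if distance_to_viewpoint > limit:
                  return min(level, max_level)
          return max_level   # very close to the viewpoint: finest subdivision

      for d in (10, 300, 5000):
          print(d, "km ->", lod_level(d), "subdivision levels")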

  10. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
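
    A minimal Python sketch of the constant-substitution approach described above, replacing censored values with LOD/2 before computing summary statistics; the data values are made up, and the comment notes one known drawback of the method.

      import numpy as np

      lod = 0.5
      raw = np.array([0.8, 1.2, np.nan, 0.7, np.nan, 2.1])   # NaN marks "<LOD" results

      substituted = np.where(np.isnan(raw), lod / 2.0, raw)
      print("mean with LOD/2 substitution:", substituted.mean())

      # The same substitution tends to distort the variance, one reason more formal
      # censored-data methods (e.g., maximum likelihood) are often preferred.
      print("std with LOD/2 substitution:", substituted.std(ddof=1))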

  11. A multiresolution inversion for imaging the ionosphere

    NASA Astrophysics Data System (ADS)

    Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo

    2017-06-01

    Ionospheric tomography has been widely employed in imaging the large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms to date have not been very effective in imaging medium- and small-scale ionospheric structures due to limitations of uneven ground-based data distributions and the algorithm itself. Further, the effect of the density and quantity of Global Navigation Satellite Systems data that could help improve the tomographic results for a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm period. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.

  12. Nutrient Limitation Dynamics of a Coastal Cape Cod Pond: Seasonal Trends in Alkaline Phosphatase Activity

    DTIC Science & Technology

    2000-11-13

    Collection and Nutrient Analyses: Ashumet Pond water column profiles and samples were taken by the School for Marine Science and Technology (SMAST) at the ... SMAST Water Sampling Plan/Collection and Nutrient Analyses ... suited as an indicator of phosphate limitation in natural waters. In this study alkaline phosphatase is used to understand the nutrient limitation

  13. Con: Meta-analysis: some key limitations and potential solutions.

    PubMed

    Esterhuizen, Tonya M; Thabane, Lehana

    2016-06-01

    Meta-analysis, a statistical combination of results of several trials to produce a summary effect, has been subject to criticism in the past, mainly for the reasons of poor quality of included studies, heterogeneity between studies meta-analyzed and failing to address publication bias. These limitations can cause the results to be misleading, which is important if policy and practice decisions are based on systematic reviews and meta-analyses. We elaborate on these limitations and illustrate them with examples from the nephrology literature. Finally, we present some potential solutions, notably, education in meta-analysis for evidence producers and consumers as well as the use of individual patient data for meta-analyses. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
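
    For readers unfamiliar with how a summary effect is formed, the sketch below shows a generic inverse-variance meta-analysis in Python (fixed effect plus DerSimonian-Laird random effects). The effect sizes and variances are hypothetical and unrelated to the nephrology examples cited.

      import numpy as np

      y = np.array([-0.30, -0.10, 0.05, -0.45])   # per-study effect estimates
      v = np.array([0.04, 0.02, 0.06, 0.10])      # per-study variances

      w = 1.0 / v
      fixed = np.sum(w * y) / np.sum(w)           # inverse-variance fixed effect

      # Heterogeneity (Cochran's Q) and DerSimonian-Laird between-study variance
      Q = np.sum(w * (y - fixed) ** 2)
      k = len(y)
      tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

      w_star = 1.0 / (v + tau2)
      random_effect = np.sum(w_star * y) / np.sum(w_star)

      print(f"fixed = {fixed:.3f}, tau^2 = {tau2:.3f}, random = {random_effect:.3f}")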

  14. The MPLEx Protocol for Multi-omic Analyses of Soil Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.

    Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types and even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.

  15. Transportation systems analyses. Volume 2: Technical/programmatics

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed include, but are not limited to, the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This report documents the three principal transportation systems analyses (TSA) efforts during the period 7 November 92 - 6 May 93. The analyses are as follows: Mixed-Fleet (STS/ELV) strategies for SSF resupply; Transportation Systems Data Book - overview; and Operations Cost Model - overview/introduction.

  16. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small values cannot be precisely measured. These values are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD,...

  17. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
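
    Although the paper's procedures are given for SPSS, the same random-intercept/random-slope model can be sketched in Python with statsmodels, as below; the six-wave data are simulated stand-ins, not the Project P.A.T.H.S. data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n, waves = 100, 6
      df = pd.DataFrame({
          "id": np.repeat(np.arange(n), waves),
          "wave": np.tile(np.arange(waves), n),
      })
      # Simulate per-participant random intercepts and slopes plus noise.
      intercepts = np.repeat(rng.normal(0.0, 1.0, n), waves)
      slopes = np.repeat(rng.normal(0.0, 0.2, n), waves)
      df["outcome"] = 5 + (0.3 + slopes) * df["wave"] + intercepts \
          + rng.normal(0, 0.5, len(df))

      # Random intercept and random slope for wave, grouped by participant.
      model = smf.mixedlm("outcome ~ wave", data=df, groups=df["id"], re_formula="~wave")
      result = model.fit()
      print(result.summary())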

  18. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263

  19. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of Force Limited Vibration Testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for the flight. This requires an analysis of the acceleration response, and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof. These errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test
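
    A minimal sketch of one common semi-empirical force-limit form, in which the force spectral density is proportional to the acceleration specification below the primary resonance and rolls off above it. The constants C, M0, f0 and the roll-off exponent are analyst-supplied assumptions here, not values from the presentation, and unit conversions are omitted.

      import numpy as np

      def semi_empirical_force_limit(freq, accel_spec, C=1.4, M0=50.0, f0=80.0, n=2.0):
          """Force spectral density envelope from an acceleration spec:
          C^2 * M0^2 * S_AA below f0, rolled off as (f/f0)^-n above f0."""
          base = (C ** 2) * (M0 ** 2) * accel_spec
          rolloff = np.where(freq > f0, (freq / f0) ** (-n), 1.0)
          return base * rolloff

      freq = np.array([20.0, 50.0, 80.0, 160.0, 320.0])
      accel_spec = np.full_like(freq, 0.04)      # flat acceleration envelope
      print(semi_empirical_force_limit(freq, accel_spec))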

  20. Patent information retrieval: approaching a method and analysing nanotechnology patent collaborations.

    PubMed

    Ozcan, Sercan; Islam, Nazrul

    2017-01-01

    Many challenges still remain in the processing of explicit technological knowledge documents such as patents. Given the limitations and drawbacks of the existing approaches, this research sets out to develop an improved method for searching patent databases and extracting patent information to increase the efficiency and reliability of nanotechnology patent information retrieval process and to empirically analyse patent collaboration. A tech-mining method was applied and the subsequent analysis was performed using Thomson data analyser software. The findings show that nations such as Korea and Japan are highly collaborative in sharing technological knowledge across academic and corporate organisations within their national boundaries, and China presents, in some cases, a great illustration of effective patent collaboration and co-inventorship. This study also analyses key patent strengths by country, organisation and technology.

  1. Random sampling of constrained phylogenies: conducting phylogenetic analyses when the phylogeny is partially known.

    PubMed

    Housworth, E A; Martins, E P

    2001-01-01

    Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.
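
    A minimal Python sketch in the spirit of the procedure described above: taxa that must form a subclade are first resolved into a random subtree, which is then treated as a single unit while the remaining taxa are joined at random. The Newick-style output and the joining scheme are illustrative simplifications, not the authors' algorithm.

      import random

      def random_join(units, rng):
          """Randomly join units pairwise until one Newick-style string remains."""
          units = list(units)
          while len(units) > 1:
              i, j = sorted(rng.sample(range(len(units)), 2), reverse=True)
              right, left = units.pop(i), units.pop(j)
              units.append(f"({left},{right})")
          return units[0]

      def random_constrained_tree(taxa, constrained_clade, seed=None):
          rng = random.Random(seed)
          free = [t for t in taxa if t not in constrained_clade]
          clade_subtree = random_join(constrained_clade, rng)  # resolve the clade first
          return random_join(free + [clade_subtree], rng) + ";"

      print(random_constrained_tree(["A", "B", "C", "D", "E", "F"],
                                    constrained_clade=["D", "E", "F"], seed=1))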

  2. Analysing causal structures with entropy

    NASA Astrophysics Data System (ADS)

    Weilenmann, Mirjam; Colbeck, Roger

    2017-11-01

    A central question for causal inference is to decide whether a set of correlations fits a given causal structure. In general, this decision problem is computationally infeasible and hence several approaches have emerged that look for certificates of compatibility. Here, we review several such approaches based on entropy. We bring together the key aspects of these entropic techniques with unified terminology, filling several gaps and establishing new connections, all illustrated with examples. We consider cases where unobserved causes are classical, quantum and post-quantum, and discuss what entropic analyses tell us about the difference. This difference has applications to quantum cryptography, where it can be crucial to eliminate the possibility of classical causes. We discuss the achievements and limitations of the entropic approach in comparison to other techniques and point out the main open problems.
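
    As a small illustration of the entropic viewpoint, the Python sketch below computes Shannon entropies from a toy joint distribution of two observed variables and checks subadditivity, one of the basic Shannon-type inequalities such compatibility certificates build on; the distribution is not one of the paper's causal structures.

      import numpy as np

      def H(p):
          """Shannon entropy in bits of a probability array."""
          p = p[p > 0]
          return float(-np.sum(p * np.log2(p)))

      # Toy joint distribution over two binary variables A and B.
      p_ab = np.array([[0.4, 0.1],
                       [0.1, 0.4]])

      H_A = H(p_ab.sum(axis=1))
      H_B = H(p_ab.sum(axis=0))
      H_AB = H(p_ab.flatten())

      print(f"H(A)={H_A:.3f}, H(B)={H_B:.3f}, H(A,B)={H_AB:.3f}")
      print("subadditivity H(A)+H(B) >= H(A,B):", H_A + H_B >= H_AB)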

  3. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in projection data and results in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, the multi-resolution acquisition based methods can be employed to solve the data truncation problem if the project data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisition (scan) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur, were utilized to evaluate and verify performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reduction of the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications. The proposed LRICR algorithm outperforms the multi-resolution scout-reconstruction method and the TV minimization based reconstruction for interior tomography in microscopic CT.

  4. IDEA: Interactive Display for Evolutionary Analyses.

    PubMed

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-12-08

    The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  5. IDEA: Interactive Display for Evolutionary Analyses

    PubMed Central

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-01-01

    Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data. PMID:19061522

  6. Metagenomic and Metatranscriptomic Analyses Reveal the Structure and Dynamics of a Dechlorinating Community Containing Dehalococcoides mccartyi and Corrinoid-Providing Microorganisms under Cobalamin-Limited Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Men, Yujie; Yu, Ke; Bælum, Jacob

    ABSTRACT The aim of this study is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. This study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions. IMPORTANCE The key chloroethene-dechlorinating bacterium Dehalococcoides mccartyi is a cobalamin auxotroph, thus acquiring corrinoids from other community members. Therefore, it is important to investigate the microbe-microbe interactions between Dehalococcoides and the corrinoid-providing microorganisms in a community. This study provides systems

  7. Integrated Speed Limiter and Fatigue Analyzer System

    NASA Astrophysics Data System (ADS)

    Pranoto, Hadi; Leman, A. M.; Wahab, Abdi; Sebayang, Darwin

    2018-03-01

    Traffic accidents increase in line with the growth of the vehicle fleet, so safety systems must be developed to reduce accidents. This paper proposes integrating a speed limiter with a fatigue analyser to improve vehicle safety and to support analysis after an accident. The device and the software application were developed and then integrated into one system. Testing was carried out to prove the integration between the device and the application, and it shows that the system works well. The next improvement for this system would be a server that collects data over the internet, so that the driver and the vehicle owner can monitor the system remotely.

  8. A Solid-State Fault Current Limiting Device for VSC-HVDC Systems

    NASA Astrophysics Data System (ADS)

    Larruskain, D. Marene; Zamora, Inmaculada; Abarrategui, Oihane; Iturregi, Araitz

    2013-08-01

    Faults in the DC circuit constitute one of the main limitations of voltage source converter (VSC) HVDC systems, as the high fault currents can seriously damage the converters. In this article, a new design for a fault current limiter (FCL) is proposed, which is capable of limiting the fault current as well as interrupting it, isolating the DC grid. The operation of the proposed FCL is analysed and verified with the most usual faults that can occur in overhead lines.

  9. Analysing causal structures with entropy

    PubMed Central

    Weilenmann, Mirjam

    2017-01-01

    A central question for causal inference is to decide whether a set of correlations fits a given causal structure. In general, this decision problem is computationally infeasible and hence several approaches have emerged that look for certificates of compatibility. Here, we review several such approaches based on entropy. We bring together the key aspects of these entropic techniques with unified terminology, filling several gaps and establishing new connections, all illustrated with examples. We consider cases where unobserved causes are classical, quantum and post-quantum, and discuss what entropic analyses tell us about the difference. This difference has applications to quantum cryptography, where it can be crucial to eliminate the possibility of classical causes. We discuss the achievements and limitations of the entropic approach in comparison to other techniques and point out the main open problems. PMID:29225499

  10. Estimation of rail wear limits based on rail strength investigations

    DOT National Transportation Integrated Search

    1998-12-01

    This report describes analyses performed to estimate limits on rail wear based on strength investigations. Two different failure modes are considered in this report: (1) permanent plastic bending, and (2) rail fracture. Rail bending stresses are calc...

  11. Speed limiter integrated fatigue analyzer (SLIFA) for speed and fatigue control on diesel engine truck and bus

    NASA Astrophysics Data System (ADS)

    Wahyudi, Haris; Pranoto, Hadi; Leman, A. M.; Sebayang, Darwin; Baba, I.

    2017-09-01

    Every second, the number of road traffic deaths increases globally, with millions more sustaining severe injuries and living with long-term adverse health consequences. In 2015, Jakarta alone recorded 556 deaths due to road accidents, in approximately 6,231 road accident cases. The identified major contributory factors of such unfortunate events are driver fatigue and over-speeding habits, especially in the driving of trucks and buses. This paper presents a method of controlling the electronic fuel-injection system: a solenoid valve in the injection pump can momentarily cut the fuel supply so that a speed limit is enforced, and a heart-rate sensor provides the input to reduce the speed limit when driver fatigue is detected. The integrated device, the Speed Limiter Integrated Fatigue Analyser (SLIFA), was trialled on diesel engines for trucks and buses; the results show that SLIFA is able to limit the speed to about 30 km/h, 60 km/h, and up to 70 km/h. The heart-rate sensor input reduces the speed limit when the driver is detected to be in a fatigued condition. SLIFA works as a control and monitoring system for diesel-engine trucks and buses, and it can record the history of speed, fatigue, rpm, and the driver's body temperature.

  12. Relativistic corrections to fractal analyses of the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Célérier, M.-N.; Thieberger, R.

    2001-02-01

    The effect of curvature on the results of fractal analyses of the galaxy distribution is investigated. We show that, if the universe satisfies the criteria of a wide class of parabolic homogeneous models, the observers measuring the fractal index with the integrated conditional density procedure may use the Hubble formula, without having to allow for curvature, out to distances of 600 Mpc, and possibly far beyond. This contradicts a previous claim by Ribeiro that, in the Einstein-de Sitter case, relativistic corrections should be taken into account at much smaller scales. We state for the class of cosmological models under study, and give grounds for conjecture for others, that the averaging procedure has a smoothing effect and that, therefore, the redshift-distance relation provides an upper limit to the relativistic corrections involved in such analyses.

  13. Classification tree analyses reveal limited potential for early targeted prevention against childhood overweight.

    PubMed

    Beyerlein, Andreas; Kusian, Dennis; Ziegler, Anette-Gabriele; Schaffrath-Rosario, Angelika; von Kries, Rüdiger

    2014-02-01

    Whether specific combinations of risk factors in very early life might allow identification of high-risk target groups for overweight prevention programs was examined. Data of n = 8981 children from the German KiGGS study were analyzed. Using a classification tree approach, predictive risk factor combinations were assessed for overweight in 3-6, 7-10, and 11-17-year-old children. In preschool children, the subgroup with the highest overweight risk were migrant children with at least one obese parent, with a prevalence of 36.6 (95% confidence interval or CI: 22.9, 50.4)%, compared to an overall prevalence of 10.0 (8.9, 11.2)%. The prevalence of overweight increased from 18.3 (16.8, 19.8)% to 57.9 (46.6, 69.3)% in 7-10-year-old children, if at least one parent was obese and the child had been born large-for-gestational-age. In 11-17-year-olds, the overweight risk increased from 20.1 (18.9, 21.3)% to 63.0 (46.4, 79.7)% in the highest risk group. However, high prevalence ratios were found only in small subgroups, containing <10% of all overweight cases in the respective age group. Our results indicate only a limited potential for early targeted preventions against overweight in children and adolescents. Copyright © 2013 The Obesity Society.
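
    A minimal sketch of the classification-tree approach in Python with scikit-learn: fit a shallow tree on early-life risk factors and read the highest-risk leaves off the printed rules. The variables and data are simulated, not KiGGS data.

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(1)
      n = 5000
      X = pd.DataFrame({
          "parent_obese": rng.integers(0, 2, n),
          "migrant_background": rng.integers(0, 2, n),
          "large_for_gestational_age": rng.integers(0, 2, n),
      })
      # Simulated overweight risk that rises with the combined risk factors.
      p = (0.08 + 0.15 * X["parent_obese"] + 0.08 * X["migrant_background"]
           + 0.10 * X["large_for_gestational_age"])
      y = rng.random(n) < p

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100, random_state=0)
      tree.fit(X, y)
      print(export_text(tree, feature_names=list(X.columns)))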

  14. Post hoc analyses: after the facts.

    PubMed

    Srinivas, Titte R; Ho, Bing; Kang, Joseph; Kaplan, Bruce

    2015-01-01

    Prospective clinical trials are constructed with high levels of internal validity. Sample size and power considerations usually address primary endpoints. Primary endpoints have traditionally included events that are becoming increasingly less common and thus have led to growing use of composite endpoints and noninferiority trial designs in transplantation. This approach may mask real clinical benefit in one or the other domain with regard to either clinically relevant secondary endpoints or other unexpected findings. In addition, endpoints solely chosen based on power considerations are prone to misjudgment of actual treatment effect size as well as consistency of that effect. In the instances where treatment effects may have been underestimated, valuable information may be lost if buried within a composite endpoint. In all these cases, analyses and post hoc analyses of data become relevant in informing practitioners about clinical benefits or safety signals that may not be captured by the primary endpoint. On the other hand, there are many pitfalls in using post hoc determined endpoints. This short review is meant to allow readers to appreciate post hoc analysis not as an entity with a single approach, but rather as an analysis with unique limitations and strengths that often raise new questions to be addressed in further inquiries.

  15. Automatic brain tumor detection in MRI: methodology and statistical validation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Islam, Mohammad A.; Shaik, Jahangheer; Parra, Carlos; Ogg, Robert

    2005-04-01

    Automated brain tumor segmentation and detection are immensely important in medical diagnostics because it provides information associated to anatomical structures as well as potential abnormal tissue necessary to delineate appropriate surgical planning. In this work, we propose a novel automated brain tumor segmentation technique based on multiresolution texture information that combines fractal Brownian motion (fBm) and wavelet multiresolution analysis. Our wavelet-fractal technique combines the excellent multiresolution localization property of wavelets to texture extraction of fractal. We prove the efficacy of our technique by successfully segmenting pediatric brain MR images (MRIs) from St. Jude Children's Research Hospital. We use self-organizing map (SOM) as our clustering tool wherein we exploit both pixel intensity and multiresolution texture features to obtain segmented tumor. Our test results show that our technique successfully segments abnormal brain tissues in a set of T1 images. In the next step, we design a classifier using Feed-Forward (FF) neural network to statistically validate the presence of tumor in MRI using both the multiresolution texture and the pixel intensity features. We estimate the corresponding receiver operating curve (ROC) based on the findings of true positive fractions and false positive fractions estimated from our classifier at different threshold values. An ROC, which can be considered as a gold standard to prove the competence of a classifier, is obtained to ascertain the sensitivity and specificity of our classifier. We observe that at threshold 0.4 we achieve true positive value of 1.0 (100%) sacrificing only 0.16 (16%) false positive value for the set of 50 T1 MRI analyzed in this experiment.
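
    A minimal sketch of multiresolution texture features in the spirit of the wavelet part of the method above: a two-level wavelet decomposition of an image patch with per-subband energies as descriptors, using PyWavelets. The fractal (fBm) features and SOM clustering of the paper are not reproduced, and the patch is random data standing in for an MRI patch.

      import numpy as np
      import pywt

      rng = np.random.default_rng(2)
      patch = rng.random((64, 64))              # stand-in for an MRI patch

      coeffs = pywt.wavedec2(patch, wavelet="db2", level=2)
      approx, details = coeffs[0], coeffs[1:]

      # Per-subband energies form a simple multiresolution texture descriptor.
      features = [np.mean(approx ** 2)]
      for (cH, cV, cD) in details:              # horizontal, vertical, diagonal subbands
          features.extend([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])

      print("texture feature vector:", np.round(features, 4))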

  16. Physiology limits commercially viable photoautotrophic production of microalgal biofuels.

    PubMed

    Kenny, Philip; Flynn, Kevin J

    2017-01-01

    Algal biofuels have been offered as an alternative to fossil fuels, based on claims that microalgae can provide a highly productive source of compounds as feedstocks for sustainable transport fuels. Life cycle analyses identify algal productivity as a critical factor affecting commercial and environmental viability. Here, we use mechanistic modelling of the biological processes driving microalgal growth to explore optimal production scenarios in an industrial setting, enabling us to quantify limits to algal biofuels potential. We demonstrate how physiological and operational trade-offs combine to restrict the potential for solar-powered algal-biodiesel production in open ponds to a ceiling of ca. 8000 L ha⁻¹ year⁻¹. For industrial-scale operations, practical considerations limit production to ca. 6000 L ha⁻¹ year⁻¹. According to published economic models and life cycle analyses, such production rates cannot support long-term viable commercialisation of solar-powered cultivation of natural microalgae strains exclusively as feedstock for biofuels. The commercial viability of microalgal biofuels depends critically upon limitations in microalgal physiology (primarily in rates of C-fixation); we discuss the scope for addressing this bottleneck concluding that even deployment of genetically modified microalgae with radically enhanced characteristics would leave a very significant logistical if not financial burden.

  17. Insights into Wilson's Warbler migration from analyses of hydrogen stable-isotope ratios

    Treesearch

    Jeffrey F. Kelly; Viorel Atudorei; Zachary D. Sharp; Deborah M. Finch

    2002-01-01

    Our ability to link the breeding locations of individual passerines to migration stopover sites and wintering locations is limited. Stable isotopes of hydrogen contained in bird feathers have recently shown potential in this regard. We measured hydrogen stable-isotope ratios (δD) of feathers from breeding, migrating, and wintering Wilson's Warblers. Analyses...

  18. The limits to tree height.

    PubMed

    Koch, George W; Sillett, Stephen C; Jennings, Gregory M; Davis, Stephen D

    2004-04-22

    Trees grow tall where resources are abundant, stresses are minor, and competition for light places a premium on height growth. The height to which trees can grow and the biophysical determinants of maximum height are poorly understood. Some models predict heights of up to 120 m in the absence of mechanical damage, but there are historical accounts of taller trees. Current hypotheses of height limitation focus on increasing water transport constraints in taller trees and the resulting reductions in leaf photosynthesis. We studied redwoods (Sequoia sempervirens), including the tallest known tree on Earth (112.7 m), in wet temperate forests of northern California. Our regression analyses of height gradients in leaf functional characteristics estimate a maximum tree height of 122-130 m barring mechanical damage, similar to the tallest recorded trees of the past. As trees grow taller, increasing leaf water stress due to gravity and path length resistance may ultimately limit leaf expansion and photosynthesis for further height growth, even with ample soil moisture.

  19. Descriptive and experimental analyses of variables maintaining self-injurious behavior.

    PubMed Central

    Lerman, D C; Iwata, B A

    1993-01-01

    Independent descriptive (correlational) and functional (experimental) analyses were conducted to determine the extent to which the two methods would yield data supporting similar conclusions about variables maintaining the self-injurious behavior (SIB) of 6 subjects. For the descriptive analyses, subjects were observed in their residences and at training sites at various times each day while observers recorded naturally occurring sequences of specified subject and staff behaviors. The subjects also participated in a day program for the assessment and treatment of SIB, in which they were exposed to functional analyses that manipulated potential maintaining variables in multielement designs. Both sets of data were analyzed via conditional probabilities to identify relevant antecedent and consequent events for subjects' SIB. Using outcomes of the experimental analysis as the standard for comparison, results indicated that the descriptive analysis was useful in identifying the extent to which SIB was related to social versus nonsocial contingencies, but was limited in its ability to distinguish between positive and negative reinforcement (i.e., attention versus escape). PMID:8407680
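
    A minimal sketch of the conditional-probability summary used in the descriptive analyses: from a coded event stream, estimate how often a putative consequence follows the target behavior within a short window. The event codes, window length, and data are illustrative assumptions, not the study's coding scheme.

      def p_consequence_given_behavior(events, behavior="SIB", consequence="ATTN", window=2):
          """P(consequence occurs within `window` events after each behavior event)."""
          hits, total = 0, 0
          for i, ev in enumerate(events):
              if ev == behavior:
                  total += 1
                  if consequence in events[i + 1:i + 1 + window]:
                      hits += 1
          return hits / total if total else float("nan")

      stream = ["PLAY", "SIB", "ATTN", "PLAY", "SIB", "PLAY", "SIB", "ATTN", "DEMAND"]
      print(p_consequence_given_behavior(stream))   # 2 of 3 SIB events followed by attention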

  20. [Sleep duration and functional limitations in older adults].

    PubMed

    Eumann Mesas, Arthur; López-García, Esther; Rodríguez-Artalejo, Fernando

    2011-04-30

    To examine the association between sleep duration and functional limitation in older adults from Spain. Cross-sectional study with 3,708 individuals representative of the non-institutionalized population aged ≥ 60 years in Spain. Sleep duration was self-reported, and the functional limitations in the instrumental activities of daily living (IADL) were assessed. Functional limitations in IADL were identified in 1,424 (38.4%) participants. In analyses adjusted for sociodemographic and lifestyle variables, the percentage of participants with limitation in IADL was higher in those who slept ≤ 5 hours (odds ratio [OR]=1.56; 95% confidence interval [CI]=1.18-2.06) or ≥ 10 hours (OR=2.08; 95%CI=1.67-2.60; p for trend<0.001) than in those who slept 8 hours. The association between long sleep (≥ 10 hours) and functional limitations held even after adjustment for comorbidity and sleep quality (OR=1.77; 95%CI=1.38-2.28) while the association between short sleep (≤ 5 hours) and functional limitation no longer held after this adjustment (OR=1.10; 95%CI=0.80-1.50). In older adults, long sleep duration is a marker of functional limitations independent of comorbidity. Copyright © 2010 Elsevier España, S.L. All rights reserved.

  1. Plastid proteomics for elucidating iron limited remodeling of plastid physiology in diatoms

    NASA Astrophysics Data System (ADS)

    Gomes, K. M.; Nunn, B. L.; Jenkins, B. D.

    2016-02-01

    Diatoms are important primary producers in the world's oceans and their growth is constrained in large regions by low iron availability. This low iron-induced limitation of primary production is due to the requirement for iron in components of essential metabolic pathways including key chloroplast functions such as photosynthesis and nitrate assimilation. Diatoms can bloom and accumulate high biomass during introduction of iron into low iron waters, indicating adaptations allowing for their survival in iron-limited waters and rapid growth when iron becomes more abundant. Prior studies have shown that under iron limited stress, diatoms alter plastid-specific processes including components of electron transport, size of light harvesting capacity and chlorophyll content, suggesting plastid-specific protein regulation. Due to their complex evolutionary history, resulting from a secondary endosymbiosis, knowledge regarding the complement of plastid localized proteins remains limited in comparison to other model photosynthetic organisms. While in-silico prediction of diatom protein localization provides putative candidates for plastid-localization, these analyses can be limited as most plastid prediction models were developed using plants, primary endosymbionts. In order to characterize proteins enriched in diatom chloroplast and to understand how the plastid proteome is remodeled in response to iron limitation, we used mass spectrometry based proteomics to compare plastid- enriched protein fractions from Thalassiosira pseudonana, grown in iron replete and limited conditions. These analyses show that iron stress alters regulation of major metabolic pathways in the plastid including the Calvin cycle and fatty acid synthesis. These components provide promising targets to further characterize the plastid specific response to iron limitation.

  2. Evidence, models, conservation programs and limits to management

    USGS Publications Warehouse

    Nichols, J.D.

    2012-01-01

    Walsh et al. (2012) emphasized the importance of obtaining evidence to assess the effects of management actions on state variables relevant to objectives of conservation programs. They focused on malleefowl Leipoa ocellata, ground-dwelling Australian megapodes listed as vulnerable. They noted that although fox Vulpes vulpes baiting is the main management action used in malleefowl conservation throughout southern Australia, evidence of the effectiveness of this action is limited and currently debated. Walsh et al. (2012) then used data from 64 sites monitored for malleefowl and foxes over 23 years to assess key functional relationships relevant to fox control as a conservation action for malleefowl. In one set of analyses, Walsh et al. (2012) focused on two relationships: fox baiting investment versus fox presence, and fox presence versus malleefowl population size and rate of population change. Results led to the counterintuitive conclusion that increases in investments in fox control produced slight decreases in malleefowl population size and growth. In a second set of analyses, Walsh et al. (2012) directly assessed the relationship between investment in fox baiting and malleefowl population size and rate of population change. This set of analyses showed no significant relationship between investment in fox population control and malleefowl population growth. Both sets of analyses benefited from the incorporation of key environmental covariates hypothesized to influence these management relationships. Walsh et al. (2012) concluded that "in most situations, malleefowl conservation did not effectively benefit from fox baiting at current levels of investment." In this commentary, I discuss the work of Walsh et al. (2012) using the conceptual framework of structured decision making (SDM). In doing so, I accept their analytic results and associated conclusions as accurate and discuss basic ideas about evidence, conservation and limits to management.

  3. Using software agents to preserve individual health data confidentiality in micro-scale geographical analyses.

    PubMed

    Kamel Boulos, Maged N; Cai, Qiang; Padget, Julian A; Rushton, Gerard

    2006-04-01

    Confidentiality constraints often preclude the release of disaggregate data about individuals, which limits the types and accuracy of the results of geographical health analyses that could be done. Access to individually geocoded (disaggregate) data often involves lengthy and cumbersome procedures through review boards and committees for approval (and sometimes is not possible). Moreover, current data confidentiality-preserving solutions compatible with fine-level spatial analyses either lack flexibility or yield less than optimal results (because of confidentiality-preserving changes they introduce to disaggregate data), or both. In this paper, we present a simulation case study to illustrate how some analyses cannot be (or will suffer if) done on aggregate data. We then quickly review some existing data confidentiality-preserving techniques, and move on to explore a solution based on software agents with the potential of providing flexible, controlled (software-only) access to unmodified confidential disaggregate data and returning only results that do not expose any person-identifiable details. The solution is thus appropriate for micro-scale geographical analyses where no person-identifiable details are required in the final results (i.e., only aggregate results are needed). Our proposed software agent technique also enables post-coordinated analyses to be designed and carried out on the confidential database(s), as needed, compared to a more conventional solution based on the Web Services model that would only support a rigid, pre-coordinated (pre-determined) and rather limited set of analyses. The paper also provides an exploratory discussion of mobility, security, and trust issues associated with software agents, as well as possible directions/solutions to address these issues, including the use of virtual organizations. Successful partnerships between stakeholder organizations, proper collaboration agreements, clear policies, and unambiguous interpretations
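
    A minimal sketch of the agent's contract as described above: a function that evaluates a query against confidential disaggregate records but releases only aggregate results, suppressing small cells. The suppression threshold and record fields are assumptions for illustration, not part of the authors' design.

      from collections import Counter

      MIN_CELL_COUNT = 5   # suppression threshold; an assumed value

      def aggregate_only_query(records, group_key):
          """Return counts per group, suppressing small cells; never return rows."""
          counts = Counter(rec[group_key] for rec in records)
          return {group: n for group, n in counts.items() if n >= MIN_CELL_COUNT}

      confidential = [
          {"postcode": "EX1", "diagnosis": "asthma"},
          {"postcode": "EX1", "diagnosis": "asthma"},
          {"postcode": "EX2", "diagnosis": "asthma"},
      ] * 3
      print(aggregate_only_query(confidential, "postcode"))   # small cells withheld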

  4. Metagenomic and Metatranscriptomic Analyses Reveal the Structure and Dynamics of a Dechlorinating Community Containing Dehalococcoides mccartyi and Corrinoid-Providing Microorganisms under Cobalamin-Limited Conditions

    PubMed Central

    Yu, Ke; Bælum, Jacob; Gao, Ying; Tremblay, Julien; Prestat, Emmanuel; Stenuit, Ben; Tringe, Susannah G.; Jansson, Janet; Zhang, Tong; Alvarez-Cohen, Lisa

    2017-01-01

    ABSTRACT The aim of this study is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. This study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions. IMPORTANCE The key chloroethene-dechlorinating bacterium Dehalococcoides mccartyi is a cobalamin auxotroph, thus acquiring corrinoids from other community members. Therefore, it is important to investigate the microbe-microbe interactions between Dehalococcoides and the corrinoid-providing microorganisms in a community. This study provides

  5. Chemical Analyses of Pre-Holocene Rocks from Medicine Lake Volcano and Vicinity, Northern California

    USGS Publications Warehouse

    Donnelly-Nolan, Julie M.

    2008-01-01

    Chemical analyses are presented in an accompanying table (Table 1) for more than 600 pre-Holocene rocks collected at and near Medicine Lake Volcano, northern California. The data include major-element X-ray fluorescence (XRF) analyses for all of the rocks plus XRF trace element data for most samples, and instrumental neutron activation analysis (INAA) trace element data for many samples. In addition, a limited number of analyses of Na2O and K2O by flame photometry (FP) are included, as well as some wet chemical analyses of FeO, H2O+/-, and CO2. Latitude and longitude location information is provided for all samples. This data set is intended to accompany the geologic map of Medicine Lake Volcano (Donnelly-Nolan, in press); map unit designations are given for each sample collected from the map area.

  6. A multiresolution processing method for contrast enhancement in portal imaging.

    PubMed

    Gonzalez-Lopez, Antonio

    2018-06-18

    Portal images have a unique feature among the imaging modalities used in radiotherapy: they provide direct visualization of the irradiated volumes. However, contrast and spatial resolution are strongly limited due to the high energy of the radiation sources. Because of this, imaging modalities using x-ray energy beams have gained importance in the verification of patient positioning, replacing portal imaging. The purpose of this work was to develop a method for the enhancement of local contrast in portal images. The method operates in the subbands of a wavelet decomposition of the image, re-scaling them in such a way that coefficients in the high and medium resolution subbands are amplified, an approach entirely different from those operating on the image histogram, which are widely used nowadays. Portal images of an anthropomorphic phantom were acquired in an electronic portal imaging device (EPID). Then, different re-scaling strategies were investigated, studying the effects of the scaling parameters on the enhanced images. Also, the effect of using different types of transforms was studied. Finally, the implemented methods were combined with histogram equalization methods like the contrast limited adaptive histogram equalization (CLAHE), and these combinations were compared. Uniform amplification of the detail subbands shows the best results in contrast enhancement. On the other hand, linear re-scaling of the high resolution subbands increases the visibility of fine detail in the images, at the expense of an increase in noise levels. Also, since processing is applied only to detail subbands, not to the approximation, the mean gray level of the image is minimally modified and no further display adjustments are required. It is shown that re-scaling of the detail subbands of portal images can be used as an efficient method for the enhancement of both the local contrast and the resolution of these images. © 2018 Institute of
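    The subband re-scaling idea described above can be sketched in a few lines with a wavelet library. The snippet below is a minimal illustration, not the implementation evaluated in the study: the wavelet (db2), the number of levels and the uniform gain are assumed values, and the CLAHE combination mentioned in the abstract is omitted.

```python
import numpy as np
import pywt

def enhance_portal_image(img, wavelet="db2", levels=3, gain=2.0):
    """Amplify the detail (high/medium resolution) subbands of a wavelet
    decomposition while leaving the approximation untouched, so the mean
    grey level of the image is essentially preserved.

    The uniform `gain` and the choice of wavelet are illustrative values,
    not the parameters tuned in the cited study."""
    img = img.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) tuples
    enhanced = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        enhanced.append((gain * cH, gain * cV, gain * cD))
    out = pywt.waverec2(enhanced, wavelet)
    return np.clip(out, img.min(), img.max())

# Example with a synthetic low-contrast "phantom"
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom = rng.normal(100.0, 2.0, size=(256, 256))
    phantom[96:160, 96:160] += 4.0            # faint square structure
    result = enhance_portal_image(phantom)
    print(phantom.std(), result.std())        # local contrast increases
```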

  7. SAM-Like Evolved Gas Analyses of Phyllosilicate Minerals and Applications to SAM Analyses of the Sheepbed Mudstone, Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    McAdam, A. C.; Franz, H. B.; Mahaffy, P. R.; Eigenbrode, J. L.; Stern, J. C.; Brunner, B.; Sutter, B.; Archer, P. D.; Ming, D. W.; Morris, R. V.

    2014-01-01

    While in Yellowknife Bay, the Mars Science Laboratory Curiosity rover collected two drilled samples, John Klein (hereafter "JK") and Cumberland ("CB"), from the Sheepbed mudstone, as well as a scooped sample from the Rocknest aeolian bedform ("RN"). These samples were sieved by Curiosity's sample processing system and then several subsamples of these materials were delivered to the Sample Analysis at Mars (SAM) instrument suite and the CheMin X-ray diffraction/X-ray fluorescence instrument. CheMin provided the first in situ X-ray diffraction-based evidence of clay minerals on Mars, which are likely trioctahedral smectites (e.g., Fe-saponite) and comprise 20 wt% of the mudstone samples [1]. SAM's evolved gas analysis (EGA) mass spectrometry analyses of JK and CB subsamples, as well as RN subsamples, detected H2O, CO2, O2, H2, SO2, H2S, HCl, NO, OCS, CS2 and other trace gases evolved during pyrolysis. The identity of evolved gases and temperature(s) of evolution can augment mineral detection by CheMin and place constraints on trace volatile-bearing phases present below the CheMin detection limit or those phases difficult to characterize with XRD (e.g., X-ray amorphous phases). Here we will focus on the SAM H2O data, in the context of CheMin analyses, and comparisons to laboratory SAM-like analyses of several phyllosilicate minerals including smectites.

  8. Income and functional limitations among the aged in Europe: a trend analysis in 16 countries.

    PubMed

    von dem Knesebeck, Olaf; Vonneilich, Nico; Lüdecke, Daniel

    2017-06-01

    Analyses are focused on 3 research questions: (1) Are there absolute and relative income-related inequalities in functional limitations among the aged in Europe? (2) Did the absolute and relative income-related inequalities in functional limitations among the aged change between 2002 and 2014? (3) Are there differences in the changes of income-related inequalities between European countries? Data stem from 7 waves (2002-2014) of the European Social Survey. Samples of people aged 60 years or older from 16 European countries were analysed (N=63 024). Inequalities were measured by means of absolute prevalence rate differences and relative prevalence rate ratios of low versus high income. Meta-analyses with random-effect models were used to study the trends of inequalities in functional limitations over time. Functional limitations among people aged 60 years or older declined between 2002 and 2014 in most of the 16 European countries. Older people with a low income had higher rates of functional limitations and elevated rate ratios compared with people with high income. These inequalities were significant in many countries and were more pronounced among men than among women. Overall, absolute and relative income-related inequalities increased between 2002 and 2014, especially in Ireland, the Netherlands and Sweden. High-income groups benefited more from the observed overall decline in functional limitations than deprived groups. Results point to potential income-related inequalities in compression of morbidity in the recent past in Europe. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  9. Geologist's Field Assistant: Developing Image and Spectral Analyses Algorithms for Remote Science Exploration

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.

    2002-01-01

    We are developing science analysis algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.

  10. Oral Chinese proprietary medicine for angina pectoris: an overview of systematic reviews/meta-analyses.

    PubMed

    Luo, Jing; Xu, Hao; Yang, Guoyan; Qiu, Yu; Liu, Jianping; Chen, Keji

    2014-08-01

    Oral Chinese proprietary medicine (CPM) is commonly used to treat angina pectoris, and many relevant systematic reviews/meta-analyses are available. However, these reviews have not been systematically summarized and evaluated. We conducted an overview of these reviews, and explored their methodological and reporting quality to inform both practice and further research. We included systematic reviews/meta-analyses on oral CPM in treating angina until March 2013 by searching PubMed, Embase, the Cochrane Library and four Chinese databases. We extracted data according to a pre-designed form, and assessed the methodological and reporting characteristics of the reviews in terms of AMSTAR and PRISMA, respectively. Most of the data analyses were descriptive. 36 systematic reviews/meta-analyses involving over 82,105 participants with angina reviewing 13 kinds of oral CPM were included. The main outcomes assessed in the reviews were surrogate outcomes (34/36, 94.4%), adverse events (31/36, 86.1%), and symptoms (30/36, 83.3%). Six reviews (6/36, 16.7%) drew definitely positive conclusions, while the others suggested potential benefits in the symptoms, electrocardiogram, and adverse events. The overall methodological and reporting quality of the reviews was limited, with many serious flaws such as the lack of a review protocol and non-comprehensive literature searches. Though many systematic reviews/meta-analyses on oral CPM for angina suggested potential benefits or definitely positive effects, stakeholders should interpret the findings of these reviews with caution, considering the overall limited methodological and reporting quality. We recommend that further studies be appropriately conducted and that systematic reviews be reported according to the PRISMA standard. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Proteomic Analyses of the Unexplored Sea Anemone Bunodactis verrucosa

    PubMed Central

    Campos, Alexandre; Turkina, Maria V.; Ribeiro, Tiago; Osorio, Hugo; Vasconcelos, Vítor; Antunes, Agostinho

    2018-01-01

    Cnidarian toxic products, particularly peptide toxins, constitute a promising target for biomedicine research. Indeed, cnidarians are considered as the largest phylum of generally toxic animals. However, research on peptides and toxins of sea anemones is still limited. Moreover, most of the toxins from sea anemones have been discovered by classical purification approaches. Recently, high-throughput methodologies have been used for this purpose but in other Phyla. Hence, the present work was focused on the proteomic analyses of whole-body extract from the unexplored sea anemone Bunodactis verrucosa. The proteomic analyses applied were based on two methods: two-dimensional gel electrophoresis combined with MALDI-TOF/TOF and shotgun proteomic approach. In total, 413 proteins were identified, but only eight proteins were identified from gel-based analyses. Such proteins are mainly involved in basal metabolism and biosynthesis of antibiotics as the most relevant pathways. In addition, some putative toxins including metalloproteinases and neurotoxins were also identified. These findings reinforce the significance of the production of antimicrobial compounds and toxins by sea anemones, which play a significant role in defense and feeding. In general, the present study provides the first proteome map of the sea anemone B. verrucosa establishing a reference for future studies in the discovery of new compounds. PMID:29364843

  12. Proteomic Analyses of the Unexplored Sea Anemone Bunodactis verrucosa.

    PubMed

    Domínguez-Pérez, Dany; Campos, Alexandre; Alexei Rodríguez, Armando; Turkina, Maria V; Ribeiro, Tiago; Osorio, Hugo; Vasconcelos, Vítor; Antunes, Agostinho

    2018-01-24

    Cnidarian toxic products, particularly peptide toxins, constitute a promising target for biomedicine research. Indeed, cnidarians are considered as the largest phylum of generally toxic animals. However, research on peptides and toxins of sea anemones is still limited. Moreover, most of the toxins from sea anemones have been discovered by classical purification approaches. Recently, high-throughput methodologies have been used for this purpose but in other Phyla. Hence, the present work was focused on the proteomic analyses of whole-body extract from the unexplored sea anemone Bunodactis verrucosa. The proteomic analyses applied were based on two methods: two-dimensional gel electrophoresis combined with MALDI-TOF/TOF and shotgun proteomic approach. In total, 413 proteins were identified, but only eight proteins were identified from gel-based analyses. Such proteins are mainly involved in basal metabolism and biosynthesis of antibiotics as the most relevant pathways. In addition, some putative toxins including metalloproteinases and neurotoxins were also identified. These findings reinforce the significance of the production of antimicrobial compounds and toxins by sea anemones, which play a significant role in defense and feeding. In general, the present study provides the first proteome map of the sea anemone B. verrucosa establishing a reference for future studies in the discovery of new compounds.

  13. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
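    The on-demand, visualization-driven construction of 3D bricks from 2D tiles can be illustrated with a small cache sketch. This is a simplified, CPU-only analogue of the architecture described above; the brick size, the eviction policy and the mocked tile loader are assumptions for illustration only.

```python
import numpy as np

class OnDemandVolumeCache:
    """Minimal sketch of a visualization-driven block cache: 3D bricks are
    only assembled from 2D section tiles when a (simulated) ray first
    touches them. Tile loading is mocked with random data; a real system
    would stream microscope tiles from disk or the network."""

    def __init__(self, brick=32, max_bricks=256):
        self.brick = brick
        self.max_bricks = max_bricks
        self.cache = {}            # (bx, by, bz) -> 3D numpy brick
        self.lru = []

    def _load_tile(self, z, bx, by):
        # Placeholder for reading one 2D tile of section z.
        rng = np.random.default_rng(hash((z, bx, by)) % (2**32))
        return rng.random((self.brick, self.brick), dtype=np.float32)

    def get_brick(self, key):
        if key not in self.cache:                      # cache miss
            bx, by, bz = key
            slices = [self._load_tile(bz * self.brick + dz, bx, by)
                      for dz in range(self.brick)]
            self.cache[key] = np.stack(slices, axis=0)
            self.lru.append(key)
            if len(self.lru) > self.max_bricks:        # simple eviction
                self.cache.pop(self.lru.pop(0), None)
        return self.cache[key]

# A "ray" only touches a handful of bricks, so only those get built.
cache = OnDemandVolumeCache()
for step in range(8):
    _ = cache.get_brick((step, 0, 0))
print(len(cache.cache), "bricks resident")
```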

  14. DATA AND ANALYSES

    EPA Science Inventory

    In order to promote transparency and clarity of the analyses performed in support of EPA's Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens, the data and the analyses are now available on this web site. The data is presented in two diffe...

  15. Estimating effects of limiting factors with regression quantiles

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Schroeder, R.L.

    1999-01-01

    In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile (e
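    As a concrete illustration of the regression-quantile approach, the following sketch fits upper quantiles to a wedge-shaped response typical of limiting-factor data. It uses statsmodels rather than the rank-score procedures cited in the abstract, and the simulated data and quantile levels are assumptions, not the acorn or glacier lily datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical limiting-factor data: the response ceiling rises with the
# measured factor, but unmeasured constraints pull many observations down,
# producing the wedge-shaped scatter described in the abstract.
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 500)
ceiling = 2.0 * x
y = ceiling * rng.uniform(0, 1, 500)        # unmeasured factors suppress y
data = pd.DataFrame({"x": x, "y": y})

mod = smf.quantreg("y ~ x", data)
for q in (0.50, 0.90, 0.95):
    res = mod.fit(q=q)
    lo, hi = res.conf_int().loc["x"]
    print(f"tau={q:.2f}  slope={res.params['x']:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")
# Slopes near the upper quantiles approach the true limiting effect (2.0),
# while the median slope underestimates it.
```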

  16. Metagenomic and Metatranscriptomic Analyses Reveal the Structure and Dynamics of a Dechlorinating Community Containing Dehalococcoides mccartyi and Corrinoid-Providing Microorganisms under Cobalamin-Limited Conditions

    DOE PAGES

    Men, Yujie; Yu, Ke; Bælum, Jacob; ...

    2017-02-10

    The aim of this paper is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. Finally, this study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions.

  17. Metagenomic and Metatranscriptomic Analyses Reveal the Structure and Dynamics of a Dechlorinating Community Containing Dehalococcoides mccartyi and Corrinoid-Providing Microorganisms under Cobalamin-Limited Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Men, Yujie; Yu, Ke; Bælum, Jacob

    The aim of this paper is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. Finally, this study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions.

  18. Less is less: a systematic review of graph use in meta-analyses.

    PubMed

    Schild, Anne H E; Voracek, Martin

    2013-09-01

    Graphs are an essential part of scientific communication. Complex datasets, of which meta-analyses are textbook examples, benefit the most from visualization. Although a number of graph options for meta-analyses exist, the extent to which these are used was hitherto unclear. A systematic review on graph use in meta-analyses in three disciplines (medicine, psychology, and business) and nine journals was conducted. Interdisciplinary differences, which are mirrored in the respective journals, were revealed, that is, graph use correlates with external factors rather than methodological considerations. There was only limited variation in graph types (with forest plots as the most important representatives), and diagnostic plots were very rare. Although an increase in graph use over time could be observed, it is unlikely that this phenomenon is specific to meta-analyses. There is a gaping discrepancy between available graphic methods and their application in meta-analyses. This may be rooted in a number of factors, namely, (i) insufficient dissemination of new developments, (ii) unsatisfactory implementation in software packages, and (iii) minor attention on graphics in meta-analysis reporting guidelines. Using visualization methods to their full capacity is a further step in using meta-analysis to its full potential. Copyright © 2013 John Wiley & Sons, Ltd.

  19. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Limited partnerships, limited liability partnerships..., limited liability partnerships, limited liability companies, corporations, and other similar legal entities. (a) A limited partnership, limited liability partnership, limited liability company, corporation...

  20. On meta- and mega-analyses for gene–environment interactions

    PubMed Central

    Huang, Jing; Liu, Yulun; Vitale, Steve; Penning, Trevor M.; Whitehead, Alexander S.; Blair, Ian A.; Vachani, Anil; Clapper, Margie L.; Muscat, Joshua E.; Lazarus, Philip; Scheet, Paul; Moore, Jason H.; Chen, Yong

    2017-01-01

    Gene-by-environment (G × E) interactions are important in explaining the missing heritability and understanding the causation of complex diseases, but a single, moderately sized study often has limited statistical power to detect such interactions. With the increasing need for integrating data and reporting results from multiple collaborative studies or sites, debate over choice between mega- versus meta-analysis continues. In principle, data from different sites can be integrated at the individual level into a “mega” data set, which can be fit by a joint “mega-analysis.” Alternatively, analyses can be done at each site, and results across sites can be combined through a “meta-analysis” procedure without integrating individual level data across sites. Although mega-analysis has been advocated in several recent initiatives, meta-analysis has the advantages of simplicity and feasibility, and has recently led to several important findings in identifying main genetic effects. In this paper, we conducted empirical and simulation studies, using data from a G × E study of lung cancer, to compare the mega- and meta-analyses in four commonly used G × E analyses under the scenario that the number of studies is small and sample sizes of individual studies are relatively large. We compared the two data integration approaches in the context of fixed effect models and random effects models separately. Our investigations provide valuable insights into the differences between mega- and meta-analyses in the practice of combining a small number of studies to identify G × E interactions. PMID:29110346
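    The contrast between the two data-integration strategies can be made concrete with a small simulation in the spirit of the scenario described above (few sites, relatively large per-site samples). The sketch below is illustrative only: the logistic model, the effect sizes and the fixed-effect inverse-variance weighting are assumptions, not the authors' analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical G x E example: a few sites, relatively large per-site samples.
rng = np.random.default_rng(2)
sites = []
for k in range(4):
    n = 3000
    g = rng.binomial(1, 0.3, n)              # genetic exposure
    e = rng.binomial(1, 0.5, n)              # environmental exposure
    logit = -1.0 + 0.2 * g + 0.3 * e + 0.25 * g * e + 0.1 * k
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    sites.append(pd.DataFrame({"y": y, "g": g, "e": e, "site": k}))

# Meta-analysis: fit each site separately, then combine the interaction
# terms by inverse-variance (fixed-effect) weighting.
betas, variances = [], []
for df in sites:
    fit = smf.logit("y ~ g * e", df).fit(disp=0)
    betas.append(fit.params["g:e"])
    variances.append(fit.bse["g:e"] ** 2)
w = 1 / np.array(variances)
beta_meta = np.sum(w * np.array(betas)) / np.sum(w)
se_meta = np.sqrt(1 / np.sum(w))

# Mega-analysis: pool individual-level data and adjust for site.
pooled = pd.concat(sites)
fit_mega = smf.logit("y ~ g * e + C(site)", pooled).fit(disp=0)

print(f"meta : beta_GxE = {beta_meta:.3f} (SE {se_meta:.3f})")
print(f"mega : beta_GxE = {fit_mega.params['g:e']:.3f} (SE {fit_mega.bse['g:e']:.3f})")
```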

  1. On meta- and mega-analyses for gene-environment interactions.

    PubMed

    Huang, Jing; Liu, Yulun; Vitale, Steve; Penning, Trevor M; Whitehead, Alexander S; Blair, Ian A; Vachani, Anil; Clapper, Margie L; Muscat, Joshua E; Lazarus, Philip; Scheet, Paul; Moore, Jason H; Chen, Yong

    2017-12-01

    Gene-by-environment (G × E) interactions are important in explaining the missing heritability and understanding the causation of complex diseases, but a single, moderately sized study often has limited statistical power to detect such interactions. With the increasing need for integrating data and reporting results from multiple collaborative studies or sites, debate over choice between mega- versus meta-analysis continues. In principle, data from different sites can be integrated at the individual level into a "mega" data set, which can be fit by a joint "mega-analysis." Alternatively, analyses can be done at each site, and results across sites can be combined through a "meta-analysis" procedure without integrating individual level data across sites. Although mega-analysis has been advocated in several recent initiatives, meta-analysis has the advantages of simplicity and feasibility, and has recently led to several important findings in identifying main genetic effects. In this paper, we conducted empirical and simulation studies, using data from a G × E study of lung cancer, to compare the mega- and meta-analyses in four commonly used G × E analyses under the scenario that the number of studies is small and sample sizes of individual studies are relatively large. We compared the two data integration approaches in the context of fixed effect models and random effects models separately. Our investigations provide valuable insights into the differences between mega- and meta-analyses in the practice of combining a small number of studies to identify G × E interactions. © 2017 WILEY PERIODICALS, INC.

  2. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses.

    PubMed

    Khankari, Nikhil K; Shu, Xiao-Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Bofetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I; Easton, Douglas; Eeles, Rosalind A; Gruber, Stephen B; Haiman, Christopher A; Hunter, David J; Chanock, Stephen J; Pierce, Brandon L; Zheng, Wei

    2016-09-01

    Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using height-associated genetic variants identified in a genome-wide association study (GWAS), to evaluate the association of adult height with these cancers. A systematic review of prospective studies was conducted using the PubMed, Embase, and Web of Science databases. Using meta-analyses, results obtained from 62 studies were summarized for the association of a 10-cm increase in height with cancer risk. Mendelian randomization analyses were conducted using summary statistics obtained for 423 genetic variants identified from a recent GWAS of adult height and from a cancer genetics consortium study of multiple cancers that included 47,800 cases and 81,353 controls. For a 10-cm increase in height, the summary relative risks derived from the meta-analyses of prospective studies were 1.12 (95% CI 1.10, 1.15), 1.07 (95% CI 1.05, 1.10), and 1.06 (95% CI 1.02, 1.11) for colorectal, prostate, and lung cancers, respectively. Mendelian randomization analyses showed increased risks of colorectal (odds ratio [OR] = 1.58, 95% CI 1.14, 2.18) and lung cancer (OR = 1.10, 95% CI 1.00, 1.22) associated with each 10-cm increase in genetically predicted height. No association was observed for prostate cancer (OR = 1.03, 95% CI 0.92, 1.15). Our meta-analysis was limited to published studies. The sample size for the Mendelian randomization analysis of colorectal cancer was relatively small, thus affecting the precision of the point estimate. Our study provides evidence for a potential causal association of adult height with the risk of colorectal and lung cancers and suggests that certain genetic factors and biological pathways affecting adult height may also affect the risk of these cancers.
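    A minimal sketch of the summary-statistics Mendelian randomization step is given below. The inverse-variance-weighted estimator shown is a standard choice for this kind of analysis, but the toy effect sizes and standard errors are assumptions and do not reproduce the consortium data.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted Mendelian randomization estimate from
    summary statistics: per-variant Wald ratios combined with weights
    proportional to bx^2 / se_outcome^2. Illustrative only."""
    bx = np.asarray(beta_exposure, float)
    by = np.asarray(beta_outcome, float)
    sy = np.asarray(se_outcome, float)
    w = bx ** 2 / sy ** 2
    beta = np.sum(bx * by / sy ** 2) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return beta, se

# Toy numbers standing in for the 423 height-associated variants.
rng = np.random.default_rng(3)
bx = rng.normal(0.05, 0.01, 423)                    # SNP -> height (cm per allele)
true_effect = 0.046                                 # assumed log-OR per cm of height
by = true_effect * bx + rng.normal(0, 0.02, 423)    # SNP -> cancer (log-OR)
beta, se = ivw_mr(bx, by, np.full(423, 0.02))
print(f"log-OR per cm of genetically predicted height: {beta:.3f} +/- {se:.3f}")
# exp(10 * beta) would then approximate the OR for a 10-cm increase in height.
```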

  3. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses

    PubMed Central

    Khankari, Nikhil K.; Shu, Xiao-Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Bofetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I.; Easton, Douglas; Gruber, Stephen B.; Haiman, Christopher A.; Hunter, David J.; Chanock, Stephen J.; Pierce, Brandon L.; Zheng, Wei

    2016-01-01

    Background Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using height-associated genetic variants identified in a genome-wide association study (GWAS), to evaluate the association of adult height with these cancers. Methods and Findings A systematic review of prospective studies was conducted using the PubMed, Embase, and Web of Science databases. Using meta-analyses, results obtained from 62 studies were summarized for the association of a 10-cm increase in height with cancer risk. Mendelian randomization analyses were conducted using summary statistics obtained for 423 genetic variants identified from a recent GWAS of adult height and from a cancer genetics consortium study of multiple cancers that included 47,800 cases and 81,353 controls. For a 10-cm increase in height, the summary relative risks derived from the meta-analyses of prospective studies were 1.12 (95% CI 1.10, 1.15), 1.07 (95% CI 1.05, 1.10), and 1.06 (95% CI 1.02, 1.11) for colorectal, prostate, and lung cancers, respectively. Mendelian randomization analyses showed increased risks of colorectal (odds ratio [OR] = 1.58, 95% CI 1.14, 2.18) and lung cancer (OR = 1.10, 95% CI 1.00, 1.22) associated with each 10-cm increase in genetically predicted height. No association was observed for prostate cancer (OR = 1.03, 95% CI 0.92, 1.15). Our meta-analysis was limited to published studies. The sample size for the Mendelian randomization analysis of colorectal cancer was relatively small, thus affecting the precision of the point estimate. Conclusions Our study provides evidence for a potential causal association of adult height with the risk of colorectal and lung cancers and suggests that certain genetic factors and biological pathways affecting adult height may also affect the

  4. Multiscale wavelet representations for mammographic feature analysis

    NASA Astrophysics Data System (ADS)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  5. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand : precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
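    A scale-by-scale predictor/predictand relationship of the kind described above can be sketched with a discrete wavelet decomposition and a per-level linear fit. The snippet is a strongly simplified, single-predictor illustration; the wavelet, the number of levels and the synthetic series are assumptions, not the ESD model developed in the study.

```python
import numpy as np
import pywt

def scalewise_downscaling(predictor, predictand, wavelet="db4", level=4):
    """Sketch of wavelet-based empirical statistical downscaling: both the
    large-scale predictor (e.g. an SLP index) and the local predictand
    (e.g. streamflow) are decomposed into the same dyadic scales, a separate
    linear fit is made scale by scale, and the fitted components are summed
    back into a reconstructed local series."""
    cx = pywt.wavedec(predictor, wavelet, level=level)
    cy = pywt.wavedec(predictand, wavelet, level=level)
    fitted = []
    for x_k, y_k in zip(cx, cy):
        a, b = np.polyfit(x_k, y_k, 1)       # per-scale linear relationship
        fitted.append(a * x_k + b)
    return pywt.waverec(fitted, wavelet)[: len(predictand)]

# Synthetic monthly series sharing low-frequency variability
rng = np.random.default_rng(4)
t = np.arange(600)                            # 50 years of months
slp = np.sin(2 * np.pi * t / 96) + 0.5 * rng.standard_normal(600)
flow = 0.8 * np.sin(2 * np.pi * t / 96) + 0.7 * rng.standard_normal(600)
recon = scalewise_downscaling(slp, flow)
print(np.corrcoef(flow, recon)[0, 1])         # scale-wise fit vs. observed
```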

  6. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    PubMed Central

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  7. The low noise limit in gene expression

    DOE PAGES

    Dar, Roy D.; Weinberger, Leor S.; Cox, Chris D.; ...

    2015-10-21

    Protein noise measurements are increasingly used to elucidate biophysical parameters. Unfortunately, noise analyses are often at odds with directly measured parameters. Here we show that these inconsistencies arise from two problematic analytical choices: (i) the assumption that protein translation rate is invariant for different proteins of different abundances, which has inadvertently led to (ii) the assumption that a large constitutive extrinsic noise sets the low noise limit in gene expression. While growing evidence suggests that transcriptional bursting may set the low noise limit, variability in translational bursting has been largely ignored. We show that genome-wide systematic variation in translational efficiency can, and in the case of E. coli does, control the low noise limit in gene expression. Therefore constitutive extrinsic noise is small and only plays a role in the absence of a systematic variation in translational efficiency. Lastly, these results show the existence of two distinct expression noise patterns: (1) a global noise floor uniformly imposed on all genes by expression bursting; and (2) high noise distributed to only a select group of genes.
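    The role of translational burst size in setting a noise floor can be illustrated with a textbook gamma burst model (not the authors' analysis): at a fixed mean abundance, CV^2 scales roughly as burst size divided by mean, so systematic variation in burst size across genes shifts the apparent floor.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_protein(mean_level, burst_size, n_cells=20000):
    """Gamma burst model of expression: burst frequency a = mean/b and
    burst size b give CV^2 approximately equal to b/mean. A textbook model
    used here only to illustrate how differences in translational burst
    size shift the noise at a fixed mean."""
    a = mean_level / burst_size
    return rng.gamma(shape=a, scale=burst_size, size=n_cells)

for mean in (50, 500, 5000):
    for b in (1, 5, 20):                      # translational burst size
        p = simulate_protein(mean, b)
        cv2 = p.var() / p.mean() ** 2
        print(f"mean={mean:5d}  burst={b:2d}  CV^2={cv2:.4f}  b/mean={b/mean:.4f}")
# At a fixed mean, larger translational bursts raise CV^2; if burst size
# varies systematically with abundance, the apparent noise floor across
# genes is set by translation rather than by a constitutive extrinsic term.
```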

  8. Relationship of employee-reported work limitations to work productivity.

    PubMed

    Lerner, Debra; Amick, Benjamin C; Lee, Jennifer C; Rooney, Ted; Rogers, William H; Chang, Hong; Berndt, Ernst R

    2003-05-01

    Work limitation rates are crucial indicators of the health status of working people. If related to work productivity, work limitation rates may also supply important information about the economic burden of illness. Our objective was to assess the productivity impact of on-the-job work limitations due to employees' physical or mental health problems. Subjects were asked to complete a self-administered survey on the job during 3 consecutive months. Using robust regression analysis, we tested the relationship of objectively-measured work productivity to employee-reported work limitations. We attempted to survey employees of a large firm in 3 different jobs. A total of 2245 employees responded (85.9% response rate). Full survey and productivity data were available for 1827 respondents. Each survey included a validated self-report instrument, the Work Limitations Questionnaire (WLQ). The firm provided objective, employee-level work productivity data. In adjusted regression analyses (n = 1827), employee work productivity (measured as the log of units produced/hour) was significantly associated with 3 dimensions of work limitations: limitations handling the job's time and scheduling demands (P = 0.003), physical job demands (P = 0.001), and output demands (P = 0.006). For every 10% increase in on-the-job work limitations reported on each of the 3 WLQ scales, work productivity declined approximately 4 to 5%. Employee work limitations have a negative impact on work productivity. Employee assessments of their work limitations supply important proxies for the economic burden of health problems.
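    The abstract's robust regression of log productivity on the three WLQ scales can be sketched as follows. All variable names, coefficients and the simulated data are assumptions chosen only so the output roughly mirrors the reported 4-5% decline per 10-point increase; this is not the study's dataset or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in for the employee data: three WLQ limitation scales
# (percentage scores) and objectively measured output per hour.
rng = np.random.default_rng(6)
n = 1800
wlq = pd.DataFrame({
    "time_sched": rng.uniform(0, 50, n),
    "physical": rng.uniform(0, 50, n),
    "output": rng.uniform(0, 50, n),
})
log_prod = (3.0 - 0.005 * wlq["time_sched"] - 0.004 * wlq["physical"]
            - 0.0045 * wlq["output"] + rng.normal(0, 0.15, n))
df = wlq.assign(log_units_per_hour=log_prod)

# Robust (M-estimator) regression, analogous in spirit to the abstract's
# robust regression of log productivity on the limitation scales.
fit = smf.rlm("log_units_per_hour ~ time_sched + physical + output", df).fit()
print(fit.params)
slopes = fit.params.drop("Intercept")
print("approx. % productivity change for a 10-point increase:")
print((np.exp(10 * slopes) - 1) * 100)
```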

  9. [Does medicine limit enlightenment?].

    PubMed

    Schipperges, H

    1977-01-01

    In the first, historical part, the most important programs of "Medical Enlightenment" are pointed out, beginning with Leibniz, followed by the public health movement of the 18th century, up to the time of Immanuel Kant. Based on this historical background, several concepts of a "Medical Culture" are analysed in detail, for instance the "Theorie einer Medizinal-Ordnung" by Johann Benjamin Ehrhard (1800), the "Medicinische Reform" by Rudolf Virchow (1848) and the programs of the "Gesellschaft Deutscher Naturforscher und Arzte" (about 1850-1890), the latter bearing both scientific and political character. Following the historical part, the question is raised whether "Enlightenment" is limited by medicine and whether medicine is able to provide a program for individual health education resulting in a more cultivated style of private life, and lastly how this might be realized.

  10. Limiting electric fields of HVDC overhead power lines.

    PubMed

    Leitgeb, N

    2014-05-01

    As a consequence of the increased use of renewable energy and the long distances now separating energy generation from consumption, electric power transfer by high-voltage direct current (HVDC) overhead power lines is gaining importance in Europe. Thousands of kilometres of such lines are to be built within the next few years. However, existing guidelines and regulations do not yet contain recommendations to limit static electric fields, which are one of the most important criteria for HVDC overhead power lines in terms of tower design, span width and ground clearance. Based on theoretical and experimental data, in this article, static electric fields associated with adverse health effects are analysed and various criteria are derived for limiting static electric field strengths.

  11. Immunophenotyping of posttraumatic neutrophils on a routine haematology analyser.

    PubMed

    Groeneveld, Kathelijne Maaike; Heeres, Marjolein; Leenen, Loek Petrus Hendrikus; Huisman, Albert; Koenderman, Leo

    2012-01-01

    Flow cytometry markers have been proposed as useful predictors for the occurrence of posttraumatic inflammatory complications. However, currently the need for a dedicated laboratory and the labour-intensive analytical procedures make these markers less suitable for clinical practice. We tested an approach to overcome these limitations. Neutrophils of healthy donors were incubated with antibodies commonly used in trauma research: CD11b (MAC-1), L-selectin (CD62L), FcγRIII (CD16), and FcγRII (CD32) in active form (MoPhab A27). Flow cytometric analysis was performed both on a FACSCalibur, a standard flow cytometer, and on a Cell-Dyn Sapphire, a routine haematology analyser. There was a high level of agreement between the two types of analysers, with 41% for FcγRIII, 80% for L-selectin, 98% for CD11b, and even a 100% agreement for active FcγRII. Moreover, analysis on the routine haematology analyser was possible in less than a quarter of the time in comparison to the flow cytometer. Analysis of neutrophil phenotype on the Cell-Dyn Sapphire leads to the same conclusions as a standard flow cytometer. The markedly reduced time necessary for analysis and the reduced labour intensity constitute a step forward in implementation of this type of analysis in clinical diagnostics in trauma research. Copyright © 2012 Kathelijne Maaike Groeneveld et al.

  12. Limiting depth of magnetization in cratonic lithosphere

    NASA Technical Reports Server (NTRS)

    Toft, Paul B.; Haggerty, Stephen E.

    1988-01-01

    Values of magnetic susceptibility and natural remanent magnetization (NRM) of clino-pyroxene-garnet-plagioclase granulite facies lower crustal xenoliths from a kimberlite in west Africa are correlated to bulk geochemistry and specific gravity. Thermomagnetic and alternating-field demagnetization analyses identify magnetite (Mt) and native iron as the dominant magnetic phases (totaling not more than 0.1 vol pct of the rocks) along with subsidiary sulfides. Oxidation states of the granulites are not greater than MW, observed Mt occurs as rims on coarse (about 1 micron) Fe particles, and inferred single domain-pseudosingle domain Mt may be a result of oxidation of fine-grained Fe. The deepest limit of lithospheric ferromagnetism is 95 km, but a limit of 70 km is most reasonable for the West African Craton and for modeling Magsat anomalies over exposed Precambrian shields.

  13. Insights into vehicle trajectories at the handling limits: analysing open data from race car drivers

    NASA Astrophysics Data System (ADS)

    Kegelman, John C.; Harbott, Lene K.; Gerdes, J. Christian

    2017-02-01

    Race car drivers can offer insights into vehicle control during extreme manoeuvres; however, little data from race teams is publicly available for analysis. The Revs Program at Stanford has built a collection of vehicle dynamics data acquired from vintage race cars during live racing events with the intent of making this database publicly available for future analysis. This paper discusses the data acquisition, post-processing, and storage methods used to generate the database. An analysis of available data quantifies the repeatability of professional race car driver performance by examining the statistical dispersion of their driven paths. Certain map features, such as sections with high path curvature, consistently corresponded to local minima in path dispersion, quantifying the qualitative concept that drivers anchor their racing lines at specific locations around the track. A case study explores how two professional drivers employ distinct driving styles to achieve similar lap times, supporting the idea that driving at the limits allows a family of solutions in terms of paths and speed that can be adapted based on specific spatial, temporal, or other constraints and objectives.

  14. Traffic safety effects of new speed limits in Sweden.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2018-05-01

    The effects of speed, both positive and negative, make speed a primary target for policy action. Driving speeds affect the risk of being involved in a crash and the injury severity, as well as the noise and exhaust emissions. Starting in 2008, the Swedish Transport Administration performed a review of the speed limits on the national rural road network. This review resulted in major changes to the speed limits on the rural road network. Roads with a low traffic safety standard and unsatisfactory road sides were predominantly selected for reduced speed limits, while roads with a good traffic safety record were selected for increased speed limits. During 2008 and 2009, speed limits changed on approximately 20,500 km of roads, of which approximately 2700 km were assigned an increase and 17,800 km a reduction in speed limits. The aim of this study is predominantly to describe and analyse the long-term traffic safety effect of increased as well as reduced speed limits, but also to analyse the changes in actual driving speeds due to the changed speed limits. Traffic safety effects are investigated by means of a before and after study with a control group, and the effects on actual mean speeds are measured by a sampling survey in which speed was measured at randomly selected sites before and after the speed limit changes. Results show a reduction in fatalities on rural roads with the speed limit reduced from 90 to 80 km/h, where the number of fatalities decreased by 14 per year, while no significant changes were seen for the seriously injured. On motorways with an increased speed limit to 120 km/h, the number of seriously injured increased by about 15 per year, but no significant changes were seen for the number of deaths. The number of seriously injured increased on all types of motorways, but the worst development was seen for narrow motorways (21.5 m wide). For 2+1 roads (a continuous three-lane cross-section with alternating passing lanes and

  15. Phylogenetic Analyses: A Toolbox Expanding towards Bayesian Methods

    PubMed Central

    Aris-Brosou, Stéphane; Xia, Xuhua

    2008-01-01

    The reconstruction of phylogenies is becoming an increasingly simple activity. This is mainly due to two reasons: the democratization of computing power and the increased availability of sophisticated yet user-friendly software. This review describes some of the latest additions to the phylogenetic toolbox, along with some of their theoretical and practical limitations. It is shown that Bayesian methods are under heavy development, as they offer the possibility to solve a number of long-standing issues and to integrate several steps of the phylogenetic analyses into a single framework. Specific topics include not only phylogenetic reconstruction, but also the comparison of phylogenies, the detection of adaptive evolution, and the estimation of divergence times between species. PMID:18483574

  16. Limits in decision making arise from limits in memory retrieval

    PubMed Central

    Giguère, Gyslain; Love, Bradley C.

    2013-01-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers. PMID:23610402

  17. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-07

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.

  18. Disaggregating pain and its effect on physical functional limitations.

    PubMed

    Lichtenstein, M J; Dhanda, R; Cornell, J E; Escalante, A; Hazuda, H P

    1998-09-01

    Pain is a common impairment that limits the abilities of older persons. The purposes of this article are to: (i) describe the distribution of pain location using the McGill Pain Map (MPM) in a community-based cohort of aged subjects; (ii) investigate whether individual areas of pain could be sensibly grouped into regions of pain; (iii) determine whether intensity, frequency, and location constitute independent dimensions of pain; and (iv) determine whether these three pain dimensions make differential contributions to the presence of self-reported physical functional limitations. A total of 833 Mexican American and European American subjects, aged 65-79 years, were enrolled in the San Antonio Longitudinal Study of Aging and were interviewed in their homes between 1992 and 1996. A total of 373 (46%) of the subjects reported having pain in the past week. Physical functional limitations were ascertained using the nine items from the Nagi scale. Three composite scales were created: upper extremity, lower extremity, and total. Pain intensity and frequency were ascertained using the McGill Pain Questionnaire. Pain location was ascertained by using the MPM. Pain was reported in every area of the MPM. Using multiple groups confirmatory factor analysis, the 36 areas were grouped into 7 regions of pain: head, arms, hands and wrists, trunk, back, upper leg, and lower leg. Among persons with pain, pain frequency, intensity, and location were weakly associated with each other. Pain regions were primarily independent of each other, yet weak associations existed between 6 of the 21 pair-wise correlations between regions. Pain regions were differentially associated with individual physical functional limitations. Pain in the upper leg was associated with 8 of the 9 physical tasks. In multivariate analyses, age, gender, and ethnic group accounted for only 2-3% of the variance in physical tasks.

  19. Concordance of Results from Randomized and Observational Analyses within the Same Study: A Re-Analysis of the Women's Health Initiative Limited-Access Dataset.

    PubMed

    Bolland, Mark J; Grey, Andrew; Gamble, Greg D; Reid, Ian R

    2015-01-01

    Observational studies (OS) and randomized controlled trials (RCTs) often report discordant results. In the Women's Health Initiative Calcium and Vitamin D (WHI CaD) RCT, women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements, creating a unique opportunity to compare results from randomized and observational analyses within the same study. WHI CaD was a 7-year RCT of 1 g calcium/400 IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design, comparing CaD with placebo in the 43% of women not using personal calcium or vitamin D supplements, and in an observational design, comparing women in the placebo group (44%) who used personal calcium and vitamin D supplements with non-users. Incidence was assessed using Cox proportional hazards models, and results from the two study designs were deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD to those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population. In WHI CaD, for myocardial infarction and stroke, results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints. Results of randomized analyses in WHI CaD were concordant

  20. Improving small-angle X-ray scattering data for structural analyses of the RNA world

    PubMed Central

    Rambo, Robert P.; Tainer, John A.

    2010-01-01

    Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957

  1. Predictors of parents' intention to limit children's television viewing.

    PubMed

    Bleakley, Amy; Piotrowski, Jessica Taylor; Hennessy, Michael; Jordan, Amy

    2013-12-01

    Scientific evidence demonstrates a link between viewing time and several poor health outcomes. We use a reasoned action approach to identify the determinants and beliefs associated with parents' intention to limit their children's television viewing. We surveyed a random sample of 516 caregivers to children ages 3-16 in a large Northeastern city. Multiple regression analyses were used to test a reasoned action model and examine the differences across demographic groups. The intention to limit viewing (-3 to 3) was low among parents of adolescents (M: 0.05) compared with parents of 3-6 year olds (M: 1.49) and 7-12 year olds (M: 1.16). Attitudes were the strongest determinant of intention (β = 0.43) across all demographic groups and normative pressure was also significantly related to intention (β = 0.20), except among parents of adolescents. Relevant beliefs associated with intention to limit viewing included: limiting television would be associated with the child exercising more, doing better in school, talking to family more and having less exposure to inappropriate content. Attitudes and normative pressure play an important role in determining parents' intention to limit their child's television viewing. The beliefs that were associated with parents' intention to limit should be emphasized by health professionals and in health communication campaigns.

  2. DAMAGE ASSESSMENT OF RC BEAMS BY NONLINEAR FINITE ELEMENT ANALYSES

    NASA Astrophysics Data System (ADS)

    Saito, Shigehiko; Maki, Takeshi; Tsuchiya, Satoshi; Watanabe, Tadatomo

    This paper presents damage assessment schemes by using 2-dimensional nonlinear finite element analyses. The second strain invariant of the deviatoric strain tensor and the consumed strain energy are calculated from the local strain at each integration point of the finite elements. Those scalar values are averaged over a certain region. The resulting nonlocal values are used as indices to verify structural safety by confirming whether the ultimate limit state for failure is reached or not. Flexural and shear failure of reinforced concrete beams are estimated by using the proposed indices.
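
    As a rough illustration of the indices described above, the sketch below computes the second invariant of the deviatoric strain tensor at a few hypothetical integration points and averages it over a region. The strain values, the 2D treatment, and the simple arithmetic mean are assumptions of this sketch, not the paper's exact formulation.

    ```python
    import numpy as np

    def second_deviatoric_invariant(strain: np.ndarray) -> float:
        """J2 = 1/2 * e_ij e_ij for the deviatoric part e of a strain tensor."""
        dim = strain.shape[0]
        deviatoric = strain - np.trace(strain) / dim * np.eye(dim)
        return 0.5 * np.tensordot(deviatoric, deviatoric)

    # Hypothetical strains at four integration points of one averaging region.
    strains = [np.array([[1.0e-3, 2.0e-4], [2.0e-4, -4.0e-4]]) for _ in range(4)]

    local_j2 = np.array([second_deviatoric_invariant(e) for e in strains])
    nonlocal_j2 = local_j2.mean()   # averaged ("nonlocal") damage index for the region
    print(nonlocal_j2)
    ```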

  3. False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis: an empirical review.

    PubMed

    Imberger, Georgina; Thorlund, Kristian; Gluud, Christian; Wetterslev, Jørn

    2016-08-12

    Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses. We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, relative risk reduction of 10% or number needed to treat of 100, and control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA, using three different approaches. We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%), occurring more than once in three. The total number of false positives was 14 and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses that are negative are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those that are positive. We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and to the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
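
    For readers unfamiliar with the power calculation referred to above, the sketch below computes a required information size from a conventional two-group sample-size formula for proportions and inflates it for heterogeneity. The simple 1/(1 - I²) inflation is an assumption standing in for the diversity adjustment used by TSA software.

    ```python
    # Rough sketch of a required-information-size calculation of the kind TSA
    # builds on: a conventional two-group sample size for proportions, inflated
    # for between-trial heterogeneity (the 1/(1 - I^2) factor is an assumption).
    from scipy.stats import norm

    def required_information_size(p_control, rrr, alpha=0.05, power=0.80, i_squared=0.0):
        p_treat = p_control * (1.0 - rrr)                   # relative risk reduction
        z_a = norm.ppf(1.0 - alpha / 2.0)
        z_b = norm.ppf(power)
        var_sum = p_control * (1 - p_control) + p_treat * (1 - p_treat)
        n_per_group = (z_a + z_b) ** 2 * var_sum / (p_control - p_treat) ** 2
        total = 2.0 * n_per_group
        return total / (1.0 - i_squared)                    # inflate for heterogeneity

    # Example using the inclusion criteria above: 10% RRR, 80% power, 5% alpha.
    print(round(required_information_size(p_control=0.10, rrr=0.10, i_squared=0.25)))
    ```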

  4. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  5. On the zero-Rossby limit for the primitive equations of the atmosphere*

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Zhang, Ping

    2001-09-01

    The zero-Rossby limit for the primitive equations governing atmospheric motions is analysed. The limit is important in geophysics for large-scale models (cf Lions 1996 Int. Conf. IAM 95 (Hamburg 1995) (Math. Res. vol 87) (Berlin: Akademie) pp 177-212) and is at the level of the zero relaxation limit for nonlinear partial differential equations (cf Chen et al 1994 Commun. Pure Appl. Math. 47 787-830). It is proved that, if the initial data appropriately approximate data of geostrophic type, the corresponding solutions of the simplified primitive equations approximate the solutions of the quasigeostrophic equations with order ɛ accuracy as the Rossby number ɛ goes to zero.

  6. The effects of phosphorus limitation on carbon metabolism in diatoms.

    PubMed

    Brembu, Tore; Mühlroth, Alice; Alipanah, Leila; Bones, Atle M

    2017-09-05

    Phosphorus is an essential element for life, serving as an integral component of nucleic acids, lipids and a diverse range of other metabolites. Concentrations of bioavailable phosphorus are low in many aquatic environments. Microalgae, including diatoms, apply physiological and molecular strategies such as phosphorus scavenging or recycling as well as adjusting cell growth in order to adapt to limiting phosphorus concentrations. Such strategies also involve adjustments of the carbon metabolism. Here, we review the effect of phosphorus limitation on carbon metabolism in diatoms. Two transcriptome studies are analysed in detail, supplemented by other transcriptome, proteome and metabolite data, to gain an overview of different pathways and their responses. Phosphorus, nitrogen and silicon limitation responses are compared, and similarities and differences discussed. We use the current knowledge to propose a suggestive model for the carbon flow in phosphorus-replete and phosphorus-limited diatom cells. This article is part of the themed issue 'The peculiar carbon metabolism in diatoms'. © 2017 The Authors.

  7. The effects of phosphorus limitation on carbon metabolism in diatoms

    PubMed Central

    Alipanah, Leila

    2017-01-01

    Phosphorus is an essential element for life, serving as an integral component of nucleic acids, lipids and a diverse range of other metabolites. Concentrations of bioavailable phosphorus are low in many aquatic environments. Microalgae, including diatoms, apply physiological and molecular strategies such as phosphorus scavenging or recycling as well as adjusting cell growth in order to adapt to limiting phosphorus concentrations. Such strategies also involve adjustments of the carbon metabolism. Here, we review the effect of phosphorus limitation on carbon metabolism in diatoms. Two transcriptome studies are analysed in detail, supplemented by other transcriptome, proteome and metabolite data, to gain an overview of different pathways and their responses. Phosphorus, nitrogen and silicon limitation responses are compared, and similarities and differences discussed. We use the current knowledge to propose a suggestive model for the carbon flow in phosphorus-replete and phosphorus-limited diatom cells. This article is part of the themed issue ‘The peculiar carbon metabolism in diatoms’. PMID:28717016

  8. 40 CFR 63.7530 - How do I demonstrate initial compliance with the emission limitations, fuel specifications and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... compliance with the emission limitations, fuel specifications and work practice standards? 63.7530 Section 63... Institutional Boilers and Process Heaters Testing, Fuel Analyses, and Initial Compliance Requirements § 63.7530 How do I demonstrate initial compliance with the emission limitations, fuel specifications and work...

  9. 40 CFR 63.7530 - How do I demonstrate initial compliance with the emission limitations, fuel specifications and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... compliance with the emission limitations, fuel specifications and work practice standards? 63.7530 Section 63... Institutional Boilers and Process Heaters Testing, Fuel Analyses, and Initial Compliance Requirements § 63.7530 How do I demonstrate initial compliance with the emission limitations, fuel specifications and work...

  10. Confirming criticality safety of TRU waste with neutron measurements and risk analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winn, W.G.; Hochel, R.D.

    1992-04-01

    The criticality safety of ²³⁹Pu in 55-gallon drums stored in TRU waste containers (culverts) is confirmed using NDA neutron measurements and risk analyses. The neutron measurements yield a ²³⁹Pu mass and k_eff for a culvert, which contains up to 14 drums. Conservative probabilistic risk analyses were developed for both drums and culverts. Overall ²³⁹Pu mass estimates are less than a calculated safety limit of 2800 g per culvert. The largest measured k_eff is 0.904. The largest probability for a critical drum is 6.9 × 10⁻⁸ and that for a culvert is 1.72 × 10⁻⁷. All examined suspect culverts, totaling 118 in number, are appraised as safe based on these observations.

  11. On some limitations on temporal resolution in imaging subpicosecond photoelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shchelev, M Ya; Andreev, S V; Degtyareva, V P

    2015-05-31

    Numerical modelling is used to analyse effects that restrict the enhancement of temporal resolution beyond 100 fs in streak image tubes and photoelectron guns. Particular attention is paid to the broadening of an electron bunch as a result of Coulomb interaction. Possible ways to overcome the limitations under consideration are discussed. (extreme light fields and their applications)

  12. Negative Effects of Reward on Intrinsic Motivation--A Limited Phenomenon: Comment on Deci, Koestner, and Ryan (2001).

    ERIC Educational Resources Information Center

    Cameron, Judy

    2001-01-01

    Prior meta-analyses by J. Cameron and other researchers suggested that the negative effects of extrinsic reward on intrinsic motivation were limited and avoidable. E. Deci and others (2001) suggested that the analyses were flawed. This commentary makes the case that there is no inherent negative property of reward. (SLD)

  13. Uncertainty in Operational Atmospheric Analyses and Re-Analyses

    NASA Astrophysics Data System (ADS)

    Langland, R.; Maue, R. N.

    2016-12-01

    This talk will describe uncertainty in atmospheric analyses of wind and temperature produced by operational forecast models and in re-analysis products. Because the "true" atmospheric state cannot be precisely quantified, there is necessarily error in every atmospheric analysis, and this error can be estimated by computing differences (variance and bias) between analysis products produced at various centers (e.g., ECMWF, NCEP, U.S. Navy, etc.) that use independent data assimilation procedures, somewhat different sets of atmospheric observations and forecast models with different resolutions, dynamical equations, and physical parameterizations. These estimates of analysis uncertainty provide a useful proxy for actual analysis error. For this study, we use a unique multi-year and multi-model data archive developed at NRL-Monterey. It will be shown that current uncertainty in atmospheric analyses is closely correlated with the geographic distribution of assimilated in-situ atmospheric observations, especially those provided by high-accuracy radiosonde and commercial aircraft observations. The lowest atmospheric analysis uncertainty is found over North America, Europe and Eastern Asia, which have the largest numbers of radiosonde and commercial aircraft observations. Analysis uncertainty is substantially larger (by factors of two to three times) in most of the Southern hemisphere, the North Pacific ocean, and under-developed nations of Africa and South America where there are few radiosonde or commercial aircraft data. It appears that in regions where atmospheric analyses depend primarily on satellite radiance observations, analysis uncertainty of both temperature and wind remains relatively high compared to values found over North America and Europe.
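
    The difference-based uncertainty estimate described above can be illustrated in a few lines of numpy: given two gridded analyses of the same field from independent centers, the mean difference estimates bias and the spread of the differences serves as a proxy for analysis uncertainty. The fields below are random stand-ins, not NRL-Monterey archive data.

    ```python
    import numpy as np

    # Two gridded analyses of the same field (e.g. a temperature analysis on a
    # 1-degree grid) from independent centers; here they are synthetic stand-ins.
    rng = np.random.default_rng(0)
    analysis_a = 250.0 + rng.normal(0.0, 2.0, size=(181, 360))              # hypothetical field, K
    analysis_b = analysis_a + 0.3 + rng.normal(0.0, 0.8, size=analysis_a.shape)

    diff = analysis_b - analysis_a
    bias = diff.mean()                 # systematic difference between the two centers
    spread = diff.std(ddof=1)          # proxy for analysis uncertainty
    print(f"bias = {bias:.2f} K, spread = {spread:.2f} K")
    ```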

  14. Signal Transduction Pathways of TNAP: Molecular Network Analyses.

    PubMed

    Négyessy, László; Györffy, Balázs; Hanics, János; Bányai, Mihály; Fonta, Caroline; Bazsó, Fülöp

    2015-01-01

    Despite the growing body of evidence pointing to the involvement of tissue non-specific alkaline phosphatase (TNAP) in brain function and diseases like epilepsy and Alzheimer's disease, our understanding of the role of TNAP in the regulation of neurotransmission is severely limited. The aim of our study was to integrate the fragmented knowledge into a comprehensive view regarding neuronal functions of TNAP using objective tools. As a model we used the signal transduction molecular network of a pyramidal neuron after complementing it with TNAP-related data and performed the analysis using graph theoretic tools. The analyses show that TNAP is at the crossroads of numerous pathways and therefore is one of the key players of the neuronal signal transduction network. Through many of its connections, most notably with molecules of the purinergic system, TNAP serves as a controller by funnelling signal flow towards a subset of molecules. TNAP also appears as the source of signal to be spread via interactions with molecules involved among others in neurodegeneration. Cluster analyses identified TNAP as part of the second messenger signalling cascade. However, TNAP also forms connections with other functional groups involved in neuronal signal transduction. The results indicate the distinct ways of involvement of TNAP in multiple neuronal functions and diseases.

  15. Multiple fingerprinting analyses in quality control of Cassiae Semen polysaccharides.

    PubMed

    Cheng, Jing; He, Siyu; Wan, Qiang; Jing, Pu

    2018-03-01

    Quality control issues overshadow the potential health benefits of Cassiae Semen because of analytic limitations. In this study, multiple-fingerprint analysis integrated with several chemometric methods was performed to assess the polysaccharide quality of Cassiae Semen harvested from different locations. FT-IR, HPLC, and GC fingerprints of polysaccharide extracts from the authentic source were established as standard profiles and applied to assess the quality of foreign sources. Analyses of FT-IR fingerprints of polysaccharide extracts using either Pearson correlation analysis or principal component analysis (PCA), or HPLC fingerprints of partially hydrolyzed polysaccharides with PCA, distinguished the foreign sources from the authentic source. However, HPLC or GC fingerprints of completely hydrolyzed polysaccharides could not identify all foreign sources, and the methodology using GC is quite limited in determining the monosaccharide composition. This indicates that FT-IR/HPLC fingerprints of non/partially-hydrolyzed polysaccharides, respectively, accompanied by multiple chemometrics methods, might be potentially applied in detecting and differentiating sources of Cassiae Semen. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Limits to behavioral evolution: the quantitative genetics of a complex trait under directional selection.

    PubMed

    Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore

    2013-11-01

    Replicated selection experiments provide a powerful way to study how "multiple adaptive solutions" may lead to differences in the quantitative-genetic architecture of selected traits and whether this may translate into differences in the timing at which evolutionary limits are reached. We analyze data from 31 generations (n=17,988) of selection on voluntary wheel running in house mice. The rate of initial response, timing of selection limit, and height of the plateau varied significantly between sexes and among the four selected lines. Analyses of litter size and realized selection differentials seem to rule out counterposing natural selection as a cause of the selection limits. Animal-model analyses showed that although the additive genetic variance was significantly lower in selected than control lines, both before and after the limits, the decrease was not sufficient to explain the limits. Moreover, directional selection promoted a negative covariance between additive and maternal genetic variance over the first 10 generations. These results stress the importance of replication in selection studies of higher-level traits and highlight the fact that long-term predictions of response to selection are not necessarily expected to be linear because of the variable effects of selection on additive genetic variance and maternal effects. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  17. Some Observations on Damage Tolerance Analyses in Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Dawicke, David S.; Hampton, Roy W.

    2017-01-01

    AIAA standards S080 and S081 are applicable for certification of metallic pressure vessels (PV) and composite overwrap pressure vessels (COPV), respectively. These standards require damage tolerance analyses with a minimum reliably detectable flaw/crack and demonstration of safe life four times the service life with these cracks at the worst-case location in the PVs and oriented perpendicular to the maximum principal tensile stress. The standards require consideration of semi-elliptical surface cracks in the range of aspect ratios (crack depth a to half of the surface length c, i.e., (a/c) of 0.2 to 1). NASA-STD-5009 provides the minimum reliably detectable standard crack sizes (90/95 probability of detection, POD) for several non-destructive evaluation (NDE) methods (eddy current (ET), penetrant (PT), radiography (RT) and ultrasonic (UT)) for the two limits of the aspect ratio range required by the AIAA standards. This paper tries to answer the questions: can the safe life analysis consider only the life for the crack sizes at the two required limits, or endpoints, of the (a/c) range for the NDE method used, or does the analysis need to consider values within that range? What would be an appropriate method to interpolate 90/95 POD crack sizes at intermediate (a/c) values? Several procedures to develop combinations of a and c within the specified range are explored. A simple linear relationship between a and c is chosen to compare the effects of seven different approaches to determine combinations of a_j and c_j that are between the (a/c) endpoints. Two of the seven are selected for evaluation: Approach I, the simple linear relationship, and a more conservative option, Approach III. For each of these two approaches, the lives are computed for initial semi-elliptic crack configurations in a plate subjected to remote tensile fatigue loading with an R-ratio of 0.1, for an assumed material evaluated using NASGRO (registered trademark) version 8.1. These calculations demonstrate
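
    A minimal sketch of the interpolation question posed above: hypothetical 90/95 POD crack depths are assumed at the two endpoint aspect ratios, and intermediate (a_j, c_j) pairs are generated by linearly interpolating the depth a against a/c. The numbers are illustrative, not NASA-STD-5009 values, and this is only one of several possible schemes of the kind the paper compares.

    ```python
    import numpy as np

    # Generate intermediate (a, c) combinations between the two endpoint aspect
    # ratios a/c = 0.2 and a/c = 1.0. The endpoint crack depths are hypothetical.
    a_end = {0.2: 0.50, 1.0: 0.25}            # hypothetical 90/95 POD depths a (mm)

    aspect_ratios = np.linspace(0.2, 1.0, 5)  # intermediate a/c values to evaluate
    depths = np.interp(aspect_ratios, [0.2, 1.0], [a_end[0.2], a_end[1.0]])
    half_lengths = depths / aspect_ratios     # c = a / (a/c)

    for ac, a_j, c_j in zip(aspect_ratios, depths, half_lengths):
        print(f"a/c = {ac:.2f}: a_j = {a_j:.3f} mm, c_j = {c_j:.3f} mm")
    ```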

  18. The Limits of Natural Selection in a Nonequilibrium World.

    PubMed

    Brandvain, Yaniv; Wright, Stephen I

    2016-04-01

    Evolutionary theory predicts that factors such as a small population size or low recombination rate can limit the action of natural selection. The emerging field of comparative population genomics offers an opportunity to evaluate these hypotheses. However, classical theoretical predictions assume that populations are at demographic equilibrium. This assumption is likely to be violated in the very populations researchers use to evaluate selection's limits: populations that have experienced a recent shift in population size and/or effective recombination rates. Here we highlight theory and data analyses concerning limitations on the action of natural selection in nonequilibrial populations and argue that substantial care is needed to appropriately test whether species and populations show meaningful differences in selection efficacy. A move toward model-based inferences that explicitly incorporate nonequilibrium dynamics provides a promising approach to more accurately contrast selection efficacy across populations and interpret its significance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Community detection for fluorescent lifetime microscopy image segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar

    2014-03-01

    A multiresolution community detection (CD) method has been suggested in a recent work as an efficient approach for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes [1]. In the current paper, we further explore this method in FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background or "solvent" in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions. It offers more than 1.25 times the diversity of the latter method, based on the Shannon index, in selecting clusters with distinct average FLTs in NIR FLIM images.
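
    The replica-correlation idea can be sketched with standard tools: run a resolution-controlled community detection several times from different random starts and measure the information-theoretic overlap (here, normalized mutual information) between the resulting partitions. Louvain on a synthetic graph stands in below for the paper's multiresolution CD method applied to an FLIM image graph; all parameters are illustrative.

    ```python
    import networkx as nx
    from sklearn.metrics import normalized_mutual_info_score

    # Synthetic stand-in for a pixel/feature graph built from an FLT image.
    graph = nx.planted_partition_graph(l=4, k=25, p_in=0.3, p_out=0.01, seed=1)

    def partition_labels(g, resolution, seed):
        """Run Louvain at a given resolution and return a label per node."""
        communities = nx.community.louvain_communities(g, resolution=resolution, seed=seed)
        labels = {}
        for idx, nodes in enumerate(communities):
            for n in nodes:
                labels[n] = idx
        return [labels[n] for n in sorted(g.nodes())]

    for gamma in (0.5, 1.0, 2.0):                      # sweep of resolutions
        replicas = [partition_labels(graph, gamma, seed) for seed in range(3)]
        overlap = normalized_mutual_info_score(replicas[0], replicas[1])
        print(f"resolution {gamma}: NMI between replicas = {overlap:.3f}")
    ```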

  20. A novel rail defect detection method based on undecimated lifting wavelet packet transform and Shannon entropy-improved adaptive line enhancer

    NASA Astrophysics Data System (ADS)

    Hao, Qiushi; Zhang, Xin; Wang, Yan; Shen, Yi; Makis, Viliam

    2018-07-01

    Acoustic emission (AE) technology is sensitive to subliminal rail defects; however, strong wheel-rail rolling contact noise under high-speed conditions has severely impeded the detection of rail defects using traditional denoising methods. In this context, the paper develops an adaptive detection method for rail cracks, which combines multiresolution analysis with an improved adaptive line enhancer (ALE). To obtain detailed multiresolution information on transient crack signals at low computational cost, a lifting scheme-based undecimated wavelet packet transform is adopted. To capture the impulsive character of crack signals, a Shannon entropy-improved ALE is proposed as a signal-enhancing approach, where Shannon entropy is introduced to improve the cost function. A rail defect detection plan based on the proposed method for high-speed conditions is then put forward. Theoretical analysis and experimental verification demonstrate that the proposed method has superior performance in enhancing the rail defect AE signal and reducing the strong background noise, offering an effective multiresolution approach for rail defect detection under high-speed, strong-noise conditions.
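
    For orientation, the sketch below implements a conventional LMS adaptive line enhancer, the baseline that the Shannon-entropy modification builds on: the delayed input is filtered to predict the correlated component, and the prediction error retains the transient part. The signal, delay, filter length, and step size are all assumptions; the entropy-based cost function of the paper is not reproduced here.

    ```python
    import numpy as np

    def adaptive_line_enhancer(x, delay=5, taps=32, mu=1e-3):
        """Baseline LMS adaptive line enhancer; returns the prediction error."""
        w = np.zeros(taps)
        error = np.zeros_like(x)
        for n in range(delay + taps, len(x)):
            ref = x[n - delay - taps:n - delay][::-1]   # delayed reference vector
            y = w @ ref                                  # predicted (correlated) part
            error[n] = x[n] - y                          # enhanced transient residual
            w += 2.0 * mu * error[n] * ref               # LMS weight update
        return error

    # Synthetic test signal: a tonal interference stand-in plus a short burst.
    fs = 20000
    t = np.arange(fs) / fs
    interference = np.sin(2 * np.pi * 3000 * t)
    burst = np.exp(-((t - 0.5) ** 2) / 1e-6) * np.sin(2 * np.pi * 6000 * t)
    enhanced = adaptive_line_enhancer(interference + 0.2 * burst)
    ```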

  1. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  2. Colorado River sediment transport: 1. Natural sediment supply limitation and the influence of Glen Canyon Dam

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Vierra, L.E.

    2000-01-01

    Analyses of flow, sediment‐transport, bed‐topographic, and sedimentologic data suggest that before the closure of Glen Canyon Dam in 1963, the Colorado River in Marble and Grand Canyons was annually supply‐limited with respect to fine sediment (i.e., sand and finer material). Furthermore, these analyses suggest that the predam river in Glen Canyon was not supply‐limited to the same degree and that the degree of annual supply limitation increased near the head of Marble Canyon. The predam Colorado River in Grand Canyon displays evidence of four effects of supply limitation: (1) seasonal hysteresis in sediment concentration, (2) seasonal hysteresis in sediment grain size coupled to the seasonal hysteresis in sediment concentration, (3) production of inversely graded flood deposits, and (4) development or modification of a lag between the time of a flood peak and the time of either maximum or minimum (depending on reach geometry) bed elevation. Analyses of sediment budgets provide additional support for the interpretation that the predam river was annually supply‐limited with respect to fine sediment, but it was not supply‐limited with respect to fine sediment during all seasons. In the average predam year, sand would accumulate and be stored in Marble Canyon and upper Grand Canyon for 9 months of the year (from July through March) when flows were dominantly below 200–300 m³/s; this stored sand was then eroded during April through June when flows were typically higher. After closure of Glen Canyon Dam, because of the large magnitudes of the uncertainties in the sediment budget, no season of substantial sand accumulation is evident. Because most flows in the postdam river exceed 200–300 m³/s, substantial sand accumulation in the postdam river is unlikely.

  3. A systematic review of workplace ergonomic interventions with economic analyses.

    PubMed

    Tompa, Emile; Dolinschi, Roman; de Oliveira, Claire; Amick, Benjamin C; Irvin, Emma

    2010-06-01

    This article reports on a systematic review of workplace ergonomic interventions with economic evaluations. The review sought to answer the question: "what is the credible evidence that incremental investment in ergonomic interventions is worth undertaking?" Past efforts to synthesize evidence from this literature have focused on effectiveness, whereas this study synthesizes evidence on the cost-effectiveness/financial merits of such interventions. Through a structured journal database search, 35 intervention studies were identified in nine industrial sectors. A qualitative synthesis approach, known as best evidence synthesis, was used rather than a quantitative approach because of the diversity of study designs and statistical analyses found across studies. Evidence on the financial merits of interventions was synthesized by industrial sector. In the manufacturing and warehousing sector strong evidence was found in support of the financial merits of ergonomic interventions from a firm perspective. In the administrative support and health care sectors moderate evidence was found, in the transportation sector limited evidence, and in remaining sectors insufficient evidence. Most intervention studies focus on effectiveness. Few consider their financial merits. Amongst the few that do, several had exemplary economic analyses, although more than half of the studies had low quality economic analyses. This may be due to the low priority given to economic analysis in this literature. Often only a small part of the overall evaluation of many studies focused on evaluating their cost-effectiveness.

  4. Climate and the northern distribution limits of Dendroctonus frontalis Zimmerman (Coleoptera: Scolytidae)

    Treesearch

    M.J. Ungerer; M.P. Ayres; M.J. Lombardero

    1999-01-01

    The southern pine beetle, Dendroctonus frontalis is among the most important agents of ecological disturbance and economic loss in forests of the southeastern United States. We combined physiological measurements of insect temperature responses with climatic analyses to test the role of temperature in determining the northern distribution limits...

  5. Revision of the LHCb limit on Majorana neutrinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuve, Brian; Peskin, Michael E.

    2016-12-16

    We revisit the recent limits from LHCb on a Majorana neutrino N in the mass range 250–5000 MeV [R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 112, 131802 (2014).]. These limits are among the best currently available, and they will be improved soon by the addition of data from Run 2 of the LHC. LHCb presented a model-independent constraint on the rate of like-sign leptonic decays, and then derived a constraint on the mixing angle V_μ4 based on a theoretical model for the B decay width to N and the N lifetime. The model used is unfortunately unsound. We revise the conclusions of the paper based on a decay model similar to the one used for the τ lepton and provide formulas useful for future analyses.

  6. Multiresolution quantification of deciduousness in West Central African forests

    NASA Astrophysics Data System (ADS)

    Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.

    2013-04-01

    The characterization of leaf phenology in tropical forests is of major importance and improves our understanding of earth-atmosphere-climate interactions. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a dataset of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in west central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and carbon-cycle applications.
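
    The calibration step described above amounts to a simple regression followed by inversion. The sketch below fits dry-season EVI against leaved-crown fractions estimated from very-high-resolution samples and then maps EVI to canopy deciduousness; all values are illustrative, not the study's data.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: dry-season EVI for sample areas versus the
    # proportion of leaved crowns estimated from GeoEye/SPOT-5 snapshots.
    evi_dry = np.array([0.28, 0.33, 0.38, 0.42, 0.47, 0.52])
    leaved_fraction = np.array([0.35, 0.48, 0.58, 0.70, 0.81, 0.93])

    slope, intercept = np.polyfit(evi_dry, leaved_fraction, 1)

    def deciduousness(evi):
        """Estimated fraction of leafless crowns for a 250 m MODIS pixel."""
        return np.clip(1.0 - (slope * evi + intercept), 0.0, 1.0)

    print(deciduousness(np.array([0.30, 0.45])))
    ```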

  7. Multiresolution pattern recognition of small volcanos in Magellan data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.

    1992-01-01

    The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantities of data collected--data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10(exp 5) to 10(exp 6) of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.

  8. Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki

    2014-04-01

    We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.

  9. Transportation systems analyses: Volume 1: Executive Summary

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.

  10. Synthesis and evaluation of the service limit state of engineered fills for bridge support.

    DOT National Transportation Integrated Search

    2016-02-02

    This report synthesizes the current service limit state (SLS) design and analyses of engineered fills for bridge support used as shallow foundations. The SLS for settlement and deformations of bridge supports are summarized. Extensive literature revi...

  11. Concordance of Results from Randomized and Observational Analyses within the Same Study: A Re-Analysis of the Women’s Health Initiative Limited-Access Dataset

    PubMed Central

    Bolland, Mark J.; Grey, Andrew; Gamble, Greg D.; Reid, Ian R.

    2015-01-01

    Background Observational studies (OS) and randomized controlled trials (RCTs) often report discordant results. In the Women’s Health Initiative Calcium and Vitamin D (WHI CaD) RCT, women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements, creating a unique opportunity to compare results from randomized and observational analyses within the same study. Methods WHI CaD was a 7-year RCT of 1g calcium/400IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design- comparing CaD with placebo in 43% of women not using personal calcium or vitamin D supplements- and in an observational design- comparing women in the placebo group (44%) using personal calcium and vitamin D supplements with non-users. Incidence was assessed using Cox proportional hazards models, and results from the two study designs deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD to those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population. Results In WHI CaD, for myocardial infarction and stroke, results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints. Conclusions Results of

  12. The limits of the Bohm criterion in collisional plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentini, H.-B.; Kaiser, D.

    2015-05-15

    The sheath formation within a low-pressure collisional plasma is analysed by means of a two-fluid model. The Bohm criterion takes into account the effects of the electric field and the inertia of the ions. Numerical results show that these effects contribute to the space charge formation only if the collisionality is lower than a relatively small threshold. It follows that a lower and an upper limit of the drift speed of the ions exist where the effects treated by Bohm can form a sheath. This interval becomes narrower as the collisionality increases and vanishes at the mentioned threshold. Above the threshold, the sheath is mainly created by collisions and the ionisation. Under these conditions, the sheath formation cannot be described by means of Bohm-like criteria. In a few references, a so-called upper limit of the Bohm criterion is stated for collisional plasmas where only the momentum equation of the ions is taken into account. However, the present paper shows that this limit results in an unrealistically steep increase of the space charge density towards the wall, and, therefore, it yields no useful limit of the Bohm velocity.

  13. Review of meta-analyses evaluating surrogate endpoints for overall survival in oncology.

    PubMed

    Sherrill, Beth; Kaye, James A; Sandin, Rickard; Cappelleri, Joseph C; Chen, Connie

    2012-01-01

    Overall survival (OS) is the gold standard in measuring the treatment effect of new drug therapies for cancer. However, practical factors may preclude the collection of unconfounded OS data, and surrogate endpoints are often used instead. Meta-analyses have been widely used for the validation of surrogate endpoints, specifically in oncology. This research reviewed published meta-analyses on the types of surrogate measures used in oncology studies and examined the extent of correlation between surrogate endpoints and OS for different cancer types. A search was conducted in October 2010 to compile available published evidence in the English language for the validation of disease progression-related endpoints as surrogates of OS, based on meta-analyses. We summarize published meta-analyses that quantified the correlation between progression-based endpoints and OS for multiple advanced solid-tumor types. We also discuss issues that affect the interpretation of these findings. Progression-free survival is the most commonly used surrogate measure in studies of advanced solid tumors, and correlation with OS is reported for a limited number of cancer types. Given the increased use of crossover in trials and the availability of second-/third-line treatment options available to patients after progression, it will become increasingly more difficult to establish correlation between effects on progression-free survival and OS in additional tumor types.

  14. Review of meta-analyses evaluating surrogate endpoints for overall survival in oncology

    PubMed Central

    Sherrill, Beth; Kaye, James A; Sandin, Rickard; Cappelleri, Joseph C; Chen, Connie

    2012-01-01

    Overall survival (OS) is the gold standard in measuring the treatment effect of new drug therapies for cancer. However, practical factors may preclude the collection of unconfounded OS data, and surrogate endpoints are often used instead. Meta-analyses have been widely used for the validation of surrogate endpoints, specifically in oncology. This research reviewed published meta-analyses on the types of surrogate measures used in oncology studies and examined the extent of correlation between surrogate endpoints and OS for different cancer types. A search was conducted in October 2010 to compile available published evidence in the English language for the validation of disease progression-related endpoints as surrogates of OS, based on meta-analyses. We summarize published meta-analyses that quantified the correlation between progression-based endpoints and OS for multiple advanced solid-tumor types. We also discuss issues that affect the interpretation of these findings. Progression-free survival is the most commonly used surrogate measure in studies of advanced solid tumors, and correlation with OS is reported for a limited number of cancer types. Given the increased use of crossover in trials and the availability of second-/third-line treatment options available to patients after progression, it will become increasingly more difficult to establish correlation between effects on progression-free survival and OS in additional tumor types. PMID:23109809

  15. A portable analyser for the measurement of ammonium in marine waters.

    PubMed

    Amornthammarong, Natchanon; Zhang, Jia-Zhong; Ortner, Peter B; Stamates, Jack; Shoemaker, Michael; Kindel, Michael W

    2013-03-01

    A portable ammonium analyser was developed and used to measure in situ ammonium in the marine environment. The analyser incorporates an improved LED photodiode-based fluorescence detector (LPFD). This system is more sensitive and considerably smaller than previous systems and incorporates a pre-filtering subsystem enabling measurements in turbid, sediment-laden waters. Over the typical range for ammonium in marine waters (0–10 mM), the response is linear (r² = 0.9930) with a limit of detection (S/N ratio > 3) of 10 nM. The working range for marine waters is 0.05–10 mM. Repeatability is 0.3% (n = 10) at an ammonium level of 2 mM. Results from automated operation in 15 min cycles over 16 days had good overall precision (RSD = 3%, n = 660). The system was field tested at three shallow South Florida sites. Diurnal cycles and possibly a tidal influence were expressed in the concentration variability observed.
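
    The figures of merit quoted above follow from standard calibration arithmetic. The sketch below fits a straight-line calibration and derives a limit of detection from the S/N > 3 criterion; the standard concentrations, detector counts, and blank standard deviation are made up.

    ```python
    import numpy as np

    # Hypothetical calibration standards and fluorescence counts.
    conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
    signal = np.array([12.0, 260.0, 505.0, 1010.0, 2490.0, 5020.0])

    slope, intercept = np.polyfit(conc, signal, 1)
    r_squared = np.corrcoef(conc, signal)[0, 1] ** 2

    blank_sd = 1.5                               # hypothetical SD of blank readings
    lod = 3.0 * blank_sd / slope                 # S/N > 3 criterion
    print(f"r^2 = {r_squared:.4f}, LOD = {lod:.4f} (same units as the standards)")
    ```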

  16. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Technical analyses. 61.13 Section 61.13 Energy NUCLEAR....13 Technical analyses. The specific technical information must also include the following analyses... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The analyses...

  17. What Limits the Encoding Effect of Note-Taking? A Meta-Analytic Examination

    ERIC Educational Resources Information Center

    Kobayashi, K.

    2005-01-01

    Previous meta-analyses indicate that the overall encoding effect of note-taking is positive but modest. This meta-analysis of 57 note-taking versus no note-taking comparison studies explored what limits the encoding effect by examining the moderating influence of seven variables: intervention, schooling level, presentation mode and length, test…

  18. Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory.

    PubMed

    Bergmann, Johanna; Genç, Erhan; Kohler, Axel; Singer, Wolf; Pearson, Joel

    2016-01-01

    Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analyses available. 94.102 Section 94.102 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.102 Analyses available. A wide array of analyses for voluntary egg product samples is available. Voluntary egg product samples include surveillance...

  20. 25 CFR 162.521 - May a lessee incorporate its WEEL analyses into its WSR lease analyses?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... LAND AND WATER LEASES AND PERMITS Wind and Solar Resource Leases Weels § 162.521 May a lessee incorporate its WEEL analyses into its WSR lease analyses? Any analyses a lessee uses to bring a WEEL activity...

  1. 25 CFR 162.521 - May a lessee incorporate its WEEL analyses into its WSR lease analyses?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... LAND AND WATER LEASES AND PERMITS Wind and Solar Resource Leases Weels § 162.521 May a lessee incorporate its WEEL analyses into its WSR lease analyses? Any analyses a lessee uses to bring a WEEL activity...

  2. Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.

    PubMed

    Prunier, J G; Colyn, M; Legendre, X; Nimon, K F; Flamand, M C

    2015-01-01

    Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA allows identifying the location and magnitude of multicollinearity, revealing spurious correlations and thus thoroughly improving the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses. © 2014 John Wiley & Sons Ltd.
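
    For the two-predictor case, the variance partition at the heart of commonality analysis is short enough to write out directly: the model R² splits into components unique to each predictor plus their shared ("common") component. The sketch below uses simulated collinear data; with k predictors the same logic runs over all 2^k - 1 predictor subsets.

    ```python
    import numpy as np

    # Simulated data in which x1 and x2 are deliberately collinear.
    rng = np.random.default_rng(42)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
    y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

    def r_squared(y, *predictors):
        """R^2 of an OLS fit of y on the given predictors (plus intercept)."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    r2_full = r_squared(y, x1, x2)
    r2_x1 = r_squared(y, x1)
    r2_x2 = r_squared(y, x2)

    unique_x1 = r2_full - r2_x2        # variance explained only by x1
    unique_x2 = r2_full - r2_x1        # variance explained only by x2
    common = r2_x1 + r2_x2 - r2_full   # variance shared by the collinear predictors
    print(f"unique x1 = {unique_x1:.3f}, unique x2 = {unique_x2:.3f}, common = {common:.3f}")
    ```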

  3. Ocean Dynamics in the Key Regions of North Atlantic-Arctic Exchanges: Evaluation of Global Multi-Resolution FESOM and CMIP-type INMCM Models with Long-Term Observations

    NASA Astrophysics Data System (ADS)

    Beszczynska-Moeller, A.; Gürses, Ö.; Sidorenko, D.; Goessling, H.; Volodin, E. M.; Gritsun, A.; Iakovlev, N. G.; Andrzejewski, J.

    2017-12-01

    Enhancing the fidelity of climate models in the Arctic and North Atlantic in order to improve Arctic predictions requires better understanding of the underlying causes of common biases. The main focus of the ERA.Net project NAtMAP (Amending North Atlantic Model Biases to Improve Arctic Predictions) is on the dynamics of the key regions connecting the Arctic and the North Atlantic climate. The study aims not only at increased model realism, but also at a deeper understanding of North Atlantic-Arctic links and their contribution to Arctic predictability. Two complementary approaches employing different global coupled climate models, ECHAM6-FESOM and INMCM4/5, were adopted. The first approach is based on a recent development of climate models with ocean components based on unstructured meshes, allowing eddies and narrow boundary currents to be resolved in the most crucial regions while keeping a moderate resolution elsewhere. The multi-resolution sea ice-ocean component of ECHAM6-FESOM allows studying the benefits of very high resolution in key areas of the North Atlantic. An alternative approach to addressing the North Atlantic and Arctic biases is also pursued by tuning the performance of the relevant sub-grid-scale parameterizations in an eddy-resolving version of the CMIP5 climate model INMCM4. Using long-term in situ and satellite observations and available climatologies we attempt to evaluate to what extent a higher resolution, allowing the explicit representation of eddies and narrow boundary currents in the North Atlantic and Nordic Seas, can alleviate the common model errors. The effects of better resolving the Labrador Sea area on reducing the model bias in surface hydrography and improved representation of ocean currents are addressed. Resolving the eddy field in the Greenland Sea is assessed in terms of reducing the deep thermocline bias. The impact of increased resolution on the modeled characteristics of Atlantic water transport into the Arctic is examined with a special

  4. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  5. Investigations of homologous disaccharides by elastic incoherent neutron scattering and wavelet multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Magazù, S.; Migliardo, F.; Vertessy, B. G.; Caccamo, M. T.

    2013-10-01

    In the present paper the results of a wavevector and thermal analysis of Elastic Incoherent Neutron Scattering (EINS) data collected on water mixtures of three homologous disaccharides through a wavelet approach are reported. The wavelet analysis allows a comparison of the spatial properties of the three systems over the wavevector range Q = 0.27–4.27 Å⁻¹. It emerges that, in contrast to previous analyses, for trehalose the scalograms are consistently lower and sharper with respect to maltose and sucrose, giving rise to a markedly less extended global spectral density along the wavevector range. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restraint of trehalose with respect to the other two homologous disaccharides.
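
    The wavelet step described above can be sketched with PyWavelets: a continuous wavelet transform of an intensity profile sampled over the quoted wavevector range, with the scalogram taken as the squared magnitude of the coefficients. The Morlet wavelet, the scale range, and the synthetic I(Q) profile are assumptions of this sketch; real EINS data would replace them.

    ```python
    import numpy as np
    import pywt

    # Synthetic elastic-intensity profile over the quoted wavevector range.
    q = np.linspace(0.27, 4.27, 512)                       # wavevector axis, 1/Angstrom
    intensity = np.exp(-0.4 * q**2) + 0.05 * np.random.default_rng(0).normal(size=q.size)

    scales = np.arange(1, 64)
    coefficients, _ = pywt.cwt(intensity, scales, wavelet="morl",
                               sampling_period=q[1] - q[0])
    scalogram = np.abs(coefficients) ** 2                  # energy at each (scale, Q)
    print(scalogram.shape)                                 # (n_scales, n_Q_points)
    ```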

  6. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back in time to the mid 1970s, to examine time-varying changes, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976, using the HRSC map products (both mosaics and orbital strips) as a map-base. The project also includes Mars Radar profiles from the Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery, and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images, and these are extended to detect temporal changes in surface features such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family

  7. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as the recent discovery of boulder movement), track inter-year seasonal changes and look for occurrences of fresh craters. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, more than 65% of which is useful for stereo mapping. NASA began imaging the surface of Mars, initially from flybys in the 1960s and then from the first orbiter, with image resolution better than 100 m in the late 1970s from the Viking Orbiter. The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis so that all higher resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR, with products from CTX (6-20 m grid) and HiRISE (1-3 m grid) being generated on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, many of the NASA and ESA orbital images will be co-registered and the updated georeferencing
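
    Illustrative note: a minimal sketch of feature-point co-registration of one image to a reference basemap, assuming OpenCV is available; the ORB features, brute-force Hamming matcher and RANSAC homography are stand-ins chosen for illustration and are not the project's CASP-GO or HRSC-based co-registration chain.

    ```python
    import cv2
    import numpy as np

    def coregister(image, basemap):
        """Co-register `image` to the reference `basemap`: match ORB feature points with a
        Hamming-distance brute-force matcher, then fit a RANSAC homography and resample."""
        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(image, None)
        kp2, des2 = orb.detectAndCompute(basemap, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = basemap.shape[:2]
        return cv2.warpPerspective(image, H, (w, h))   # image resampled onto the basemap grid
    ```

    In practice the reference would be something like an HRSC ORI tile and the input a lower-resolution frame resampled to a comparable ground sampling distance before matching; those specifics are assumptions here.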

  8. Limit states and reliability-based pipeline design. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads which act on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. This work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3) and provides results of the reliability analyses that were undertaken in order to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples in order to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice. It has been structured so that, in future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where, to facilitate understanding of the LSD method, they have been included with the text.

  9. Limits on fundamental limits to computation.

    PubMed

    Markov, Igor L

    2014-08-14

    An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.

  10. Coupling limit equilibrium analyses and real-time monitoring to refine a landslide surveillance system in Calabria (southern Italy)

    NASA Astrophysics Data System (ADS)

    Iovine, G. G. R.; Lollino, P.; Gariano, S. L.; Terranova, O. G.

    2010-11-01

    On 28 January 2009, a large debris slide was triggered by prolonged rainfall at the southern suburbs of San Benedetto Ullano (Northern Calabria). The slope movement affected fractured and weathered migmatitic gneiss and biotitic schist, and included a pre-existing landslide. A detailed geomorphologic field survey, carried out during the whole phase of mobilization, made it possible to recognize the evolution of the phenomenon. A set of datum points was located along the borders of the landslide and frequent manual measurements of surface displacements were performed. Since 11 February, a basic real-time monitoring system of meteorological parameters and of surface displacements, measured by means of high-precision extensometers, was also implemented. Based on the data gained through the monitoring system and on field surveying, a basic support system for emergency management could be defined from the first phases of activation of the phenomenon. The evolution of the landslide was monitored during the following months: as a consequence, evidence of a retrogressive distribution could be recognized, with initial activation in the middle sector of the slope, where new temporary springs were observed. During early May, the activity reduced to displacements of a few millimetres per month and the geo-hydrological crisis seemed to be concluded. Afterwards, the geological scheme of the slope was refined based on the data collected through a set of exploratory boreholes equipped with inclinometers and piezometers: according to the stratigraphic and inclinometric data, the depth of the mobilized body was found to vary between 15 and 35 m along a longitudinal section. A parametric limit equilibrium analysis was carried out to explore the stability conditions of the slope affected by the landslide as well as to quantify the role of the water table in destabilizing the slope. The interpretation of the process based on field observations was confirmed by the limit equilibrium analysis

  11. Multiresolution quantification of deciduousness in West-Central African forests

    NASA Astrophysics Data System (ADS)

    Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.

    2013-11-01

    The characterization of leaf phenology in tropical forests is of major importance for forest typology as well as to improve our understanding of earth-atmosphere-climate interactions or biogeochemical cycles. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West-Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a data set of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry-season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in West-Central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and
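
    Illustrative note: a minimal sketch, on synthetic numbers, of how a dry-season EVI value can be converted to canopy deciduousness once a linear relationship with the proportion of leaved crowns has been calibrated from very-high-resolution samples; the calibration points, the linear form and the clipping are placeholder assumptions, not the study's regression.

    ```python
    import numpy as np

    # synthetic calibration points (placeholders): dry-season EVI for sample areas and the
    # proportion of leaved crowns counted on very-high-resolution (GeoEye/SPOT-5) snapshots
    evi_samples = np.array([0.30, 0.35, 0.42, 0.48, 0.55, 0.60])
    leaved_fraction = np.array([0.35, 0.45, 0.60, 0.70, 0.85, 0.95])

    # least-squares linear fit: leaved_fraction ~ a * EVI + b
    a, b = np.polyfit(evi_samples, leaved_fraction, 1)

    def deciduousness(evi):
        """Map dry-season EVI (scalar or array) to estimated canopy deciduousness,
        i.e. the fraction of leafless crowns, clipped to the [0, 1] range."""
        return np.clip(1.0 - (a * np.asarray(evi) + b), 0.0, 1.0)

    print(deciduousness([0.33, 0.50, 0.62]))           # regional EVI pixels -> deciduousness
    ```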

  12. Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination.

    PubMed

    Helfer, Bartosz; Prosser, Aaron; Samara, Myrto T; Geddes, John R; Cipriani, Andrea; Davis, John M; Mavridis, Dimitris; Salanti, Georgia; Leucht, Stefan

    2015-04-14

    As the number of systematic reviews is growing rapidly, we systematically investigate whether meta-analyses published in leading medical journals present an outline of available evidence by referring to previous meta-analyses and systematic reviews. We searched PubMed for recent meta-analyses of pharmacological treatments published in high impact factor journals. Previous systematic reviews and meta-analyses were identified with electronic searches of keywords and by searching reference sections. We analyzed the number of meta-analyses and systematic reviews that were cited, described and discussed in each recent meta-analysis. Moreover, we investigated publication characteristics that potentially influence the referencing practices. We identified 52 recent meta-analyses and 242 previous meta-analyses on the same topics. Of these, 66% of identified previous meta-analyses were cited, 36% described, and only 20% discussed by recent meta-analyses. The probability of citing a previous meta-analysis was positively associated with its publication in a journal with a higher impact factor (odds ratio, 1.49; 95% confidence interval, 1.06 to 2.10) and more recent publication year (odds ratio, 1.19; 95% confidence interval 1.03 to 1.37). Additionally, the probability of a previous study being described by the recent meta-analysis was inversely associated with the concordance of results (odds ratio, 0.38; 95% confidence interval, 0.17 to 0.88), and the probability of being discussed was increased for previous studies that employed meta-analytic methods (odds ratio, 32.36; 95% confidence interval, 2.00 to 522.85). Meta-analyses on pharmacological treatments do not consistently refer to and discuss findings of previous meta-analyses on the same topic. Such neglect can lead to research waste and be confusing for readers. Journals should make the discussion of related meta-analyses mandatory.

  13. Analyses from Near (Meteorites) and Far (Spacecraft): Complementary Approaches to Planetary Geochemistry

    NASA Astrophysics Data System (ADS)

    McSween, H. Y.

    2013-12-01

    Spacecraft missions have transformed planets from astronomical objects into geologic worlds, but geochemical remote sensing has limits. Considerably greater geologic insights are possible for a few bodies to which we can confidently assign meteorite samples. Mars and asteroid 4 Vesta demonstrate the advances provided by coupling spacecraft remote sensing data with laboratory analyses of meteorites. Martian meteorites sample at least 7 as-yet unidentified sites but are strongly biased towards young crystallization ages compared to Martian surface ages. Geochemical comparison with the generally older rocks analyzed by the Mars rovers' APXS instruments reveals evolutionary differences [1] that might be explained by water or redox state. Trace elements and radiogenic isotopes, readily measured in Martian meteorites but not yet measurable by remote sensing, constrain the planet's volatile inventory, the chronology of magmatism, and the compositions of mantle source regions and the bulk planet [2]. The origin and geochemical cycling of water, which orbiters indicate once sculpted Mars' geomorphology and now resides in the Martian subsurface, are revealed by measurements of stable isotopes and of apatite OH in meteorites. Although sedimentary rocks are nearly absent from the Martian meteorite collection, determining the processes that produced the regolith and the nature and source of organic matter on Mars is facilitated by comparing rover analyses of soils with meteorite data. In a similar way, analyses of Vesta by the Dawn orbiting spacecraft [3] are leveraged by laboratory analyses of the howardite, eucrite, diogenite (HED) meteorites [4]. Visible/near-infrared spectra of HEDs provide the calibration necessary for lithologic mapping of Vesta's surface, revealing an ancient eucrite crust, diogenite excavated from a huge crater, and a pervasive regolith of howardite. Gamma-ray and neutron data from Vesta are similarly interpreted by comparison with meteorite elemental abundances. The unexpected

  14. In-depth analyses of paleolithic pigments in cave climatic conditions

    NASA Astrophysics Data System (ADS)

    Touron, Stéphanie; Trichereau, Barbara; Syvilay, Delphine

    2017-07-01

    Painted caves are a specific environment whose preservation requires multidisciplinary studies carried out jointly by the different actors. The actions undertaken must follow national and European ethics and treaties and be as minimally invasive as possible to preserve the integrity of the site. Studying colorants in caves should meet these expectations and take into account on-field conditions: high humidity, reduced access to electricity, etc. Therefore, non-invasive analyses should be preferred. However, their limits restrict the field of application, and sometimes sampling and laboratory analyses must be used to answer the question, especially when the pigment is covered by calcite. For this purpose, Laser-Induced Breakdown Spectroscopy (LIBS) has been assessed to identify the composition through stratigraphic analyses. This study carried out in-depth profiling on laboratory samples in conditions close to those met in caves. Samples were prepared on a calcareous substrate using three pigments (red ochre, manganese black and carbon black) and two binding media (water and saliva). All samples were covered by calcite. Four sets of measurements were then made using the LIBS instrument. The in-depth profiles were obtained using Standard Normal Variate (SNV) normalization. For all the samples, the pigment layer was identified at the second or third shot, the calcite layer being quite thin. The results with the carbon black pigment remain promising but not fully conclusive, carbon being generally quite difficult to quantify.

  15. Multilevel Sequential2 Monte Carlo for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth

    2018-09-01

    The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
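
    Illustrative note: the following is a minimal single-level tempered SMC sketch, assuming a generic log-likelihood and a fixed temperature increment; the adaptive choice between tempering and bridging across PDE discretisation levels described above is not reproduced, and the mutation (MCMC) step is only indicated by a comment.

    ```python
    import numpy as np

    def tempered_smc(sample_prior, log_like, n_particles=500, ess_frac=0.5, rng=None):
        """Minimal single-level tempered SMC: raise the likelihood exponent beta
        from 0 (prior) to 1 (posterior), resampling whenever the effective sample
        size drops below ess_frac * n_particles."""
        rng = rng or np.random.default_rng(0)
        theta = sample_prior(n_particles, rng)         # particles drawn from the prior
        logw = np.zeros(n_particles)
        beta = 0.0
        while beta < 1.0:
            dbeta = min(0.1, 1.0 - beta)               # fixed increment; adaptive in the paper
            beta += dbeta
            logw += dbeta * log_like(theta)            # reweight by the tempered likelihood
            w = np.exp(logw - logw.max()); w /= w.sum()
            ess = 1.0 / np.sum(w ** 2)
            if ess < ess_frac * n_particles:           # resample when weights degenerate
                idx = rng.choice(n_particles, n_particles, p=w)
                theta, logw = theta[idx], np.zeros(n_particles)
                # an MCMC mutation step targeting the current tempered posterior would go here
        w = np.exp(logw - logw.max()); w /= w.sum()
        return theta, w

    # toy usage with a 1-D Gaussian stand-in for the expensive PDE likelihood
    sample_prior = lambda n, rng: rng.normal(0.0, 1.0, size=n)
    log_like = lambda th: -0.5 * ((th - 0.7) / 0.2) ** 2
    samples, weights = tempered_smc(sample_prior, log_like)
    print(np.average(samples, weights=weights))        # posterior mean estimate
    ```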

  16. Does activity limitation predict discharge destination for postacute care patients?

    PubMed

    Chang, Feng-Hang; Ni, Pengsheng; Jette, Alan M

    2014-09-01

    This study aimed to examine the ability of different domains of activity limitation to predict discharge destination (home vs. nonhome settings) 1 mo after hospital discharge for postacute rehabilitation patients. A secondary analysis was conducted using a data set of 518 adults with neurologic, lower extremity orthopedic, and complex medical conditions followed after discharge from a hospital into postacute care. Variables collected at baseline include activity limitations (basic mobility, daily activity, and applied cognitive function, measured by the Activity Measure for Post-Acute Care), demographics, diagnosis, and cognitive status. The discharge destination was recorded at 1 mo after being discharged from the hospital. Correlational analyses revealed that the 1-mo discharge destination was correlated with two domains of activity (basic mobility and daily activity) and cognitive status. However, multiple logistic regression and receiver operating characteristic curve analyses showed that basic mobility functioning performed the best in discriminating home vs. nonhome living. This study supported the evidence that basic mobility functioning is a critical determinant of discharge home for postacute rehabilitation patients. The Activity Measure for Post-Acute Care-basic mobility showed good usability in discriminating home vs. nonhome living. The findings shed light on the importance of basic mobility functioning in the discharge planning process.

  17. More than just records: analysing natural history collections for biodiversity planning.

    PubMed

    Ward, Darren F

    2012-01-01

    Natural History Collections (NHCs) play a central role as sources of data for biodiversity and conservation. Yet, few NHCs have examined whether the data they contain is adequately representative of local biodiversity. I examined over 15,000 databased records of Hymenoptera from 1435 locations across New Zealand collected over the past 90 years. These records are assessed in terms of their geographical, temporal, and environmental coverage across New Zealand. Results showed that the spatial coverage of records was significantly biased, with the top four areas contributing over 51% of all records. Temporal biases were also evident, with a large proportion (40%) of records collected within a short time period. The lack of repeat visits to specific locations indicated that the current set of NHC records would be of limited use for long-term ecological research. Consequently, analyses and interpretation of historical data, for example, shifts in community composition, would be limited. However, in general, NHC records provided good coverage of the diversity of New Zealand habitats and climatic environments, although fewer NHC records were represented at cooler temperatures (<5°C) and the highest rainfalls (>5000 mm/yr). Analyses of NHCs can be greatly enhanced by using simple techniques that examine collection records in terms of environmental and geographical space. NHCs that initiate a systematic sampling strategy will provide higher quality data for biodiversity research than ad hoc or point samples, as is currently the norm. Although NHCs provide a rich source of information they could be far better utilised in a range of large-scale ecological and conservation studies.

  18. More Than Just Records: Analysing Natural History Collections for Biodiversity Planning

    PubMed Central

    Ward, Darren F.

    2012-01-01

    Natural History Collections (NHCs) play a central role as sources of data for biodiversity and conservation. Yet, few NHCs have examined whether the data they contain is adequately representative of local biodiversity. I examined over 15,000 databased records of Hymenoptera from 1435 locations across New Zealand collected over the past 90 years. These records are assessed in terms of their geographical, temporal, and environmental coverage across New Zealand. Results showed that the spatial coverage of records was significantly biased, with the top four areas contributing over 51% of all records. Temporal biases were also evident, with a large proportion (40%) of records collected within a short time period. The lack of repeat visits to specific locations indicated that the current set of NHC records would be of limited use for long-term ecological research. Consequently, analyses and interpretation of historical data, for example, shifts in community composition, would be limited. However, in general, NHC records provided good coverage of the diversity of New Zealand habitats and climatic environments, although fewer NHC records were represented at cooler temperatures (<5°C) and the highest rainfalls (>5000 mm/yr). Analyses of NHCs can be greatly enhanced by using simple techniques that examine collection records in terms of environmental and geographical space. NHCs that initiate a systematic sampling strategy will provide higher quality data for biodiversity research than ad hoc or point samples, as is currently the norm. Although NHCs provide a rich source of information they could be far better utilised in a range of large-scale ecological and conservation studies. PMID:23185605

  19. Comparative genomic analyses of nickel, cobalt and vitamin B12 utilization

    PubMed Central

    Zhang, Yan; Rodionov, Dmitry A; Gelfand, Mikhail S; Gladyshev, Vadim N

    2009-01-01

    Background Nickel (Ni) and cobalt (Co) are trace elements required for a variety of biological processes. Ni is directly coordinated by proteins, whereas Co is mainly used as a component of vitamin B12. Although a number of Ni and Co-dependent enzymes have been characterized, systematic evolutionary analyses of utilization of these metals are limited. Results We carried out comparative genomic analyses to examine occurrence and evolutionary dynamics of the use of Ni and Co at the level of (i) transport systems, and (ii) metalloproteomes. Our data show that both metals are widely used in bacteria and archaea. Cbi/NikMNQO is the most common prokaryotic Ni/Co transporter, while Ni-dependent urease and Ni-Fe hydrogenase, and B12-dependent methionine synthase (MetH), ribonucleotide reductase and methylmalonyl-CoA mutase are the most widespread metalloproteins for Ni and Co, respectively. Occurrence of other metalloenzymes showed a mosaic distribution and a new B12-dependent protein family was predicted. Deltaproteobacteria and Methanosarcina generally have larger Ni- and Co-dependent proteomes. On the other hand, utilization of these two metals is limited in eukaryotes, and very few of these organisms utilize both of them. The Ni-utilizing eukaryotes are mostly fungi (except saccharomycotina) and plants, whereas most B12-utilizing organisms are animals. The NiCoT transporter family is the most widespread eukaryotic Ni transporter, and eukaryotic urease and MetH are the most common Ni- and B12-dependent enzymes, respectively. Finally, investigation of environmental and other conditions and identity of organisms that show dependence on Ni or Co revealed that host-associated organisms (particularly obligate intracellular parasites and endosymbionts) have a tendency for loss of Ni/Co utilization. Conclusion Our data provide information on the evolutionary dynamics of Ni and Co utilization and highlight widespread use of these metals in the three domains of life, yet only a

  20. Cheating Literacy: The Limitations of Simulated Classroom Discourse in Educational Software for Children

    ERIC Educational Resources Information Center

    Walton, Marion

    2007-01-01

    This paper presents a multimodal discourse analysis of children using "drill-and-practice" literacy software at a primary school in the Western Cape, South Africa. The children's interactions with the software are analysed. The software has serious limitations which arise from the global political economy of the educational software…

  1. Do recommended driving limits affect teen-reported traffic violations and crashes during the first 12 months of independent driving?

    PubMed

    Simons-Morton, Bruce; Hartos, Jessica L; Leaf, William A; Preusser, David F

    2006-09-01

    Motor vehicle crashes are highly elevated among newly licensed teenage drivers. Limits on high-risk driving conditions imposed by driver licensing policies and parents can protect novice teens from negative driving outcomes while they gain experience and driving proficiency. The purpose of this research was to evaluate the effects of strict parent-imposed driving limits on driving outcomes during the first year of licensure. A sample of 3,743 Connecticut teens was recruited and randomized to the Checkpoints Program or a comparison condition. Assessments conducted at baseline, licensure, and 3, 6, and 12 months postlicensure included parent-imposed driving limits, traffic violations, and crashes. Bivariate and multivariate analyses were conducted to assess the effects of strict parent limits on traffic violations and crashes during the first year of licensure. Thirty percent of teens reported at least one traffic violation and 40% reported at least one crash. Stricter parent-imposed limits at licensure and at 3, 6, and 12 months postlicensure were associated with fewer violations and crashes in multivariate analyses. Notably, adherence to the recommended night curfew was consistently associated with fewer violations and crashes. The findings indicate that strict parent-imposed limits may protect novice teen drivers from negative driving outcomes.

  2. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    PubMed

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost

  3. SIMS analyses of minor and trace element distributions in fracture calcite from Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Denniston, Rhawn F.; Shearer, Charles K.; Layne, Graham D.; Vaniman, David T.

    1997-05-01

    Fracture-lining calcite samples from Yucca Mountain, Nevada, obtained as part of the extensive vertical sampling in studies of this site as a potential high-level waste repository, have been characterized according to microbeam-scale (25-30 μm) trace and minor element chemistry, and cathodoluminescent zonation patterns. As bulk chemical analyses are limited in spatial resolution and are subject to contamination by intergrown phases, a technique for analysis by secondary ion mass spectrometry (SIMS) of minor (Mn, Fe, Sr) and trace (REE) elements in calcite was developed and applied to eighteen calcite samples from four boreholes and one trench. SIMS analyses of REE in calcite and dolomite have been shown to be quantitative to abundances < 1 × chondrite. Although the low secondary ion yields associated with carbonates forced higher counting times than is necessary in most silicates, Mn, Fe, Sr, and REE analyses were obtained with sub-ppm detection limits and 2-15% analytical precision. Bulk chemical signatures noted by Vaniman (1994) allowed correlation of minor and trace element signatures in Yucca Mountain calcite with location of calcite precipitation (saturated vs. unsaturated zone). For example, upper unsaturated zone calcite exhibits pronounced negative Ce and Eu anomalies not observed in calcite collected below in the deep unsaturated zone. These chemical distinctions served as fingerprints which were applied to growth zones in order to examine temporal changes in calcite crystallization histories; analyses of such fine-scale zonal variations are unattainable using bulk analytical techniques. In addition, LREE (particularly Ce) scavenging of calcite-precipitating solutions by manganese oxide phases is discussed as the mechanism for Ce-depletion in unsaturated zone calcite.

  4. Material limitations on the detection limit in refractometry.

    PubMed

    Skafte-Pedersen, Peder; Nunes, Pedro S; Xiao, Sanshui; Mortensen, Niels Asger

    2009-01-01

    We discuss the detection limit for refractometric sensors relying on high-Q optical cavities and show that the ultimate classical detection limit is given by min {Δn} ≳ η, with n + iη being the complex refractive index of the material under refractometric investigation. Taking finite Q factors and filling fractions into account, the detection limit deteriorates. As an example we discuss the fundamental limits of silicon-based high-Q resonators, such as photonic crystal resonators, for sensing in a bio-liquid environment, such as a water buffer. In the transparency window (λ ≳ 1100 nm) of silicon the detection limit becomes almost independent of the filling fraction, while in the visible, the detection limit depends strongly on the filling fraction because the silicon absorbs strongly.

  5. Rapid production of optimal-quality reduced-resolution representations of very large databases

    DOEpatents

    Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.

    2001-01-01

    View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region at a finest selected level of detail, a multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is combined with the view parameters to yield a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation, and first elements that are at least partially within the view volume are selected from this initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. A determination is made whether the number of elements in the queue meets or exceeds a predetermined number or the largest view-dependent error is less than or equal to a selected upper error bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting continues until the determination is positive, forming a first multiresolution set of elements, which is then output as reduced-resolution view space data representing the terrain features.
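
    Illustrative note: a minimal sketch of the split-queue refinement loop described above, assuming a generic element type, a user-supplied view-dependent error function and a split routine; the names and the toy error metric are hypothetical and do not reproduce the patent's data structures.

    ```python
    import heapq

    def refine(root_elements, view_error, max_elements, error_bound, split):
        """Greedy force-split refinement: repeatedly split the queue element with
        the largest view-dependent error until the element budget is reached or
        the worst remaining error is at or below the bound."""
        heap = [(-view_error(e), i, e) for i, e in enumerate(root_elements)]  # max-heap via negation
        heapq.heapify(heap)
        counter = len(root_elements)                   # tie-breaker for the heap ordering
        while heap:
            neg_err, _, elem = heap[0]
            if len(heap) >= max_elements or -neg_err <= error_bound:
                break                                  # the stopping determination is positive
            heapq.heappop(heap)
            for child in split(elem):                  # force split the head of the queue
                counter += 1
                heapq.heappush(heap, (-view_error(child), counter, child))
        return [e for _, _, e in heap]                 # first multiresolution set of elements

    # toy usage: elements are (level, id) pairs; error halves at each level (hypothetical metric)
    err = lambda e: 1.0 / 2 ** e[0]
    split_fn = lambda e: [(e[0] + 1, 2 * e[1]), (e[0] + 1, 2 * e[1] + 1)]
    print(refine([(0, 0)], err, max_elements=8, error_bound=0.1, split=split_fn))
    ```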

  6. Are We Reaching the Limits of Homo sapiens?

    PubMed Central

    Marck, Adrien; Antero, Juliana; Berthelot, Geoffroy; Saulière, Guillaume; Jancovici, Jean-Marc; Masson-Delmotte, Valérie; Boeuf, Gilles; Spedding, Michael; Le Bourg, Éric; Toussaint, Jean-François

    2017-01-01

    Echoing scientific and industrial progress, the twentieth century was an unprecedented period of improvement for human capabilities and performances, with a significant increase in lifespan, adult height, and maximal physiological performance. Analyses of historical data show a major slowdown occurring in the most recent years. This has triggered large and passionate debates within multiple academic disciplines, as such an observation could be interpreted as evidence of our upper biological limits. Such a new phase of human history may be related to structural and functional limits determined by long-term evolutionary constraints and the interaction between complex systems and their environment. In this interdisciplinary approach, we call into question the validity of subsequent forecasts and projections through innovative and related biomarkers such as sport, lifespan, and height indicators. We set a theoretical framework based on biological and environmental relevance rather than using a typical single-variable forecasting approach. As demonstrated within the article, these new views will have major social, economic, and political implications. PMID:29123486

  7. Are We Reaching the Limits of Homo sapiens?

    PubMed

    Marck, Adrien; Antero, Juliana; Berthelot, Geoffroy; Saulière, Guillaume; Jancovici, Jean-Marc; Masson-Delmotte, Valérie; Boeuf, Gilles; Spedding, Michael; Le Bourg, Éric; Toussaint, Jean-François

    2017-01-01

    Echoing scientific and industrial progress, the twentieth century was an unprecedented period of improvement for human capabilities and performances, with a significant increase in lifespan, adult height, and maximal physiological performance. Analyses of historical data show a major slowdown occurring in the most recent years. This has triggered large and passionate debates within multiple academic disciplines, as such an observation could be interpreted as evidence of our upper biological limits. Such a new phase of human history may be related to structural and functional limits determined by long-term evolutionary constraints and the interaction between complex systems and their environment. In this interdisciplinary approach, we call into question the validity of subsequent forecasts and projections through innovative and related biomarkers such as sport, lifespan, and height indicators. We set a theoretical framework based on biological and environmental relevance rather than using a typical single-variable forecasting approach. As demonstrated within the article, these new views will have major social, economic, and political implications.

  8. Benefits and applications of interdisciplinary digital tools for environmental meta-reviews and analyses

    NASA Astrophysics Data System (ADS)

    Grubert, Emily; Siders, Anne

    2016-09-01

    Digitally-aided reviews of large bodies of text-based information, such as academic literature, are growing in capability but are not yet common in environmental fields. Environmental sciences and studies can benefit from application of digital tools to create comprehensive, replicable, interdisciplinary reviews that provide rapid, up-to-date, and policy-relevant reports of existing work. This work reviews the potential for applications of computational text mining and analysis tools originating in the humanities to environmental science and policy questions. Two process-oriented case studies of digitally-aided environmental literature reviews and meta-analyses illustrate potential benefits and limitations. A medium-sized, medium-resolution review (∼8000 journal abstracts and titles) focuses on topic modeling as a rapid way to identify thematic changes over time. A small, high-resolution review (∼300 full text journal articles) combines collocation and network analysis with manual coding to synthesize and question empirical field work. We note that even small digitally-aided analyses are close to the upper limit of what can be done manually. Established computational methods developed in humanities disciplines and refined by humanities and social science scholars to interrogate large bodies of textual data are applicable and useful in environmental sciences but have not yet been widely applied. Two case studies provide evidence that digital tools can enhance insight. Two major conclusions emerge. First, digital tools enable scholars to engage large literatures rapidly and, in some cases, more comprehensively than is possible manually. Digital tools can confirm manually identified patterns or identify additional patterns visible only at a large scale. Second, digital tools allow for more replicable and transparent conclusions to be drawn from literature reviews and meta-analyses. The methodological subfields of digital humanities and computational social
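
    Illustrative note: a minimal sketch of the kind of topic-model pass used in the medium-resolution case study, assuming a recent scikit-learn; the toy corpus, the number of topics and the vectorizer settings are placeholders, not the ~8000-abstract review itself.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # hypothetical corpus standing in for thousands of journal abstracts and titles
    abstracts = [
        "eddy resolving ocean models reduce north atlantic temperature bias",
        "community detection in networks with multiresolution modularity",
        "parent imposed driving limits reduce teen crash and violation risk",
        "wavelet analysis of neutron scattering from disaccharide solutions",
    ]

    vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=1)
    dtm = vectorizer.fit_transform(abstracts)          # document-term matrix

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    lda.fit(dtm)

    # print the top words of each topic as a quick thematic summary
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {k}: {', '.join(top)}")
    ```

    Fitting the same model to abstracts grouped by publication year is one simple way to surface the thematic changes over time that the case study targets.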

  9. Physiological and Proteomic Analysis of Escherichia coli Iron-Limited Chemostat Growth

    PubMed Central

    Folsom, James Patrick; Parker, Albert E.

    2014-01-01

    Iron bioavailability is a major limiter of bacterial growth in mammalian host tissue and thus represents an important area of study. Escherichia coli K-12 metabolism was studied at four levels of iron limitation in chemostats using physiological and proteomic analyses. The data documented an E. coli acclimation gradient where progressively more severe iron scarcity resulted in a larger percentage of substrate carbon being directed into an overflow metabolism accompanied by a decrease in biomass yield on glucose. Acetate was the primary secreted organic by-product for moderate levels of iron limitation, but as stress increased, the metabolism shifted to secrete primarily lactate (∼70% of catabolized glucose carbon). Proteomic analysis reinforced the physiological data and quantified relative increases in glycolysis enzyme abundance and decreases in tricarboxylic acid (TCA) cycle enzyme abundance with increasing iron limitation stress. The combined data indicated that E. coli responds to limiting iron by investing the scarce resource in essential enzymes, at the cost of catabolic efficiency (i.e., downregulating high-ATP-yielding pathways containing enzymes with large iron requirements, like the TCA cycle). Acclimation to iron-limited growth was contrasted experimentally with acclimation to glucose-limited growth to identify both general and nutrient-specific acclimation strategies. While the iron-limited cultures maximized biomass yields on iron and increased expression of iron acquisition strategies, the glucose-limited cultures maximized biomass yields on glucose and increased expression of carbon acquisition strategies. This study quantified ecologically competitive acclimations to nutrient limitations, yielding knowledge essential for understanding medically relevant bacterial responses to host and to developing intervention strategies. PMID:24837288

  10. Material Limitations on the Detection Limit in Refractometry

    PubMed Central

    Skafte-Pedersen, Peder; Nunes, Pedro S.; Xiao, Sanshui; Mortensen, Niels Asger

    2009-01-01

    We discuss the detection limit for refractometric sensors relying on high-Q optical cavities and show that the ultimate classical detection limit is given by min {Δn} ≳ η, with n + iη being the complex refractive index of the material under refractometric investigation. Taking finite Q factors and filling fractions into account, the detection limit deteriorates. As an example we discuss the fundamental limits of silicon-based high-Q resonators, such as photonic crystal resonators, for sensing in a bio-liquid environment, such as a water buffer. In the transparency window (λ ≳ 1100 nm) of silicon the detection limit becomes almost independent of the filling fraction, while in the visible, the detection limit depends strongly on the filling fraction because the silicon absorbs strongly. PMID:22291513

  11. Limited Amount of Formula May Facilitate Breastfeeding: Randomized, Controlled Trial to Compare Standard Clinical Practice versus Limited Supplemental Feeding

    PubMed Central

    Straňák, Zbyněk; Feyereislova, Simona; Černá, Marcela; Kollárová, Jana; Feyereisl, Jaroslav

    2016-01-01

    Objectives Breastfeeding is known to reduce infant morbidity and improve well-being. Nevertheless, breastfeeding rates remain low despite public health efforts. Our study aims to investigate the effect of controlled limited formula usage during birth hospitalisation on breastfeeding, using the primary hypothesis that early limited formula feeds in infants with early weight loss will not adversely affect the rate of exclusive or any breastfeeding as measured at discharge, 3 and 6 months of age. Material and Methods We randomly assigned 104 healthy term infants, 24 to 48 hours old, with ≥ 5% loss of birth weight to controlled limited formula (CLF) intervention (10 ml formula by syringe after each breastfeeding, discontinued at onset of lactation) or control group (standard approach, SA). Groups were compared for demographic data and breastfeeding rates at discharge, 3 months and 6 months of age (p-values adjusted for multiple testing). Results Fifty newborns were analysed in CLF and 50 in SA group. There were no differences in demographic data or clinical characteristics between groups. We found no evidence of difference between treatment groups in the rates of exclusive as well as any breastfeeding at discharge (p-value 0.2 and >0.99 respectively), 3 months (p-value 0.12 and 0.10) and 6 months of infants’ age (p-value 0.45 and 0.34 respectively). The percentage weight loss during hospitalisation was significantly higher in the SA group (7.3% in CLF group, 8.4% in SA group, p = 0.002). Conclusion The study shows that controlled limited formula use does not have an adverse effect on rates of breastfeeding in the short and long term. Larger studies are needed to confirm a possible potential in controlled limited formula use to support establishing breastfeeding and to help to improve the rates of breastfeeding overall. Trial Registration ISRCTN registry ISRCTN61915183 PMID:26918700

  12. Limited Amount of Formula May Facilitate Breastfeeding: Randomized, Controlled Trial to Compare Standard Clinical Practice versus Limited Supplemental Feeding.

    PubMed

    Straňák, Zbyněk; Feyereislova, Simona; Černá, Marcela; Kollárová, Jana; Feyereisl, Jaroslav

    2016-01-01

    Breastfeeding is known to reduce infant morbidity and improve well-being. Nevertheless, breastfeeding rates remain low despite public health efforts. Our study aims to investigate the effect of controlled limited formula usage during birth hospitalisation on breastfeeding, using the primary hypothesis that early limited formula feeds in infants with early weight loss will not adversely affect the rate of exclusive or any breastfeeding as measured at discharge, 3 and 6 months of age. We randomly assigned 104 healthy term infants, 24 to 48 hours old, with ≥ 5% loss of birth weight to controlled limited formula (CLF) intervention (10 ml formula by syringe after each breastfeeding, discontinued at onset of lactation) or control group (standard approach, SA). Groups were compared for demographic data and breastfeeding rates at discharge, 3 months and 6 months of age (p-values adjusted for multiple testing). Fifty newborns were analysed in CLF and 50 in SA group. There were no differences in demographic data or clinical characteristics between groups. We found no evidence of difference between treatment groups in the rates of exclusive as well as any breastfeeding at discharge (p-value 0.2 and >0.99 respectively), 3 months (p-value 0.12 and 0.10) and 6 months of infants' age (p-value 0.45 and 0.34 respectively). The percentage weight loss during hospitalisation was significantly higher in the SA group (7.3% in CLF group, 8.4% in SA group, p = 0.002). The study shows that controlled limited formula use does not have an adverse effect on rates of breastfeeding in the short and long term. Larger studies are needed to confirm a possible potential in controlled limited formula use to support establishing breastfeeding and to help to improve the rates of breastfeeding overall. ISRCTN registry ISRCTN61915183.

  13. Chemical analyses in the World Coal Quality Inventory

    USGS Publications Warehouse

    Tewalt, Susan J.; Belkin, Harvey E.; SanFilipo, John R.; Merrill, Matthew D.; Palmer, Curtis A.; Warwick, Peter D.; Karlsen, Alexander W.; Finkelman, Robert B.; Park, Andy J.

    2010-01-01

    The main objective of the World Coal Quality Inventory (WoCQI) was to collect and analyze a global set of samples of mined coal during a time period from about 1995 to 2006 (Finkelman and Lovern, 2001). Coal samples were collected by foreign collaborators and submitted to country specialists in the U.S. Geological Survey (USGS) Energy Program. However, samples from certain countries, such as Afghanistan, India, and Kyrgyzstan, were collected collaboratively in the field with USGS personnel. Samples were subsequently analyzed at two laboratories: the USGS Inorganic Geochemistry Laboratory located in Denver, CO and a commercial laboratory (Geochemical Testing, Inc.) located in Somerset, PA. Thus the dataset, which is in Excel (2003) format and includes 1,580 samples from 57 countries, does not have the inter-laboratory variability that is present in many compilations. Major-, minor-, and trace-element analyses from the USGS laboratory, calculated to a consistent analytical basis (dry, whole-coal) and presented with available sample identification information, are sorted alphabetically by country name. About 70 percent of the samples also have data from the commercial laboratory, which are presented on an as-received analytical basis. The USGS initiated a laboratory review of quality assurance in 2008, covering quality control and methodology used in inorganic chemical analyses of coal, coal power plant ash, water, and sediment samples. This quality control review found that data generated by the USGS Inorganic Geochemistry Laboratory from 1996 through 2006 were characterized by quality practices that did not meet USGS requirements commonly in use at the time. The most serious shortcomings were (1) the adjustment of raw sample data to standards when the instrument values for those standards exceeded acceptable limits or (2) the insufficient use of multiple standards to provide adequate quality assurance. In general, adjustment of raw data to account for instrument

  14. Cognitive Limitations at Work Among Employed Breast Cancer Survivors in China.

    PubMed

    Zeng, Yingchun; Cheng, Andy S K; Feuerstein, Michael

    This study aimed to determine whether levels of distress (anxiety and depression) and cognitive symptoms at work are related to work productivity and quality of life (QOL) in Chinese breast cancer survivors (BCS), compared to a group of Chinese women without cancer but with different musculoskeletal pain related to work. This study used a cross-sectional study design. Working BCS were recruited in a tumor hospital's outpatient department, and women with no history of cancer (noncancer comparison [NCC] group) were recruited from a rehabilitation center. A total of 412 participants were included. Multiple regression analyses indicated that higher anxiety was associated with work limitations (B = .005, p = .014) and QOL (B = 2.417, p = .004) in the BCS group only. Cognitive limitations at work were associated with work limitations (B = .002, p = .001) and QOL (B = 1.022, p = .003) in the BCS group only. Depressive symptoms (B = .028, p = .017) were significantly associated with work limitations in the NCC group. Breast cancer survivors reported higher levels of cognitive limitations at work and anxiety, lower levels of work productivity, and QOL. When remaining at work is a viable option for the cancer survivor with cognitive limitations at work, the rehabilitation nurse should consider approaches to best accommodate the specific cognitive limitations and work tasks, as well as help the patient manage associated anxiety when present.

  15. Do current cost-effectiveness analyses reflect the full value of childhood vaccination in Europe?

    PubMed Central

    Brüggenjürgen, Bernd; Lorrot, Mathie; Sheppard, Fiona R; Rémy, Vanessa

    2014-01-01

    Economic evaluation of vaccination programs can be challenging and does not always fully capture the benefits provided. Reasons for this include the difficulties incurred in accurately capturing the health and economic impact of infectious diseases and how different diseases may interact with each other. Rotavirus infection, for example, peaks at a similar time to other infectious diseases, such as RSV and influenza, which can cause hospital overcrowding and disruption, and may pose a risk to more vulnerable children due to the limited availability of isolation facilities. Another challenge, specific to evaluating childhood vaccination, is that QoL cannot be accurately measured in children due to a lack of validated instruments. Childhood diseases also incur a caregiver burden, due to the need for parents to take time off work, and this is important to consider. Finally, for diseases such as RVGE, cost-effectiveness analyses in which longer time horizons are considered may not reflect the short-term benefits of vaccination. Further quantification of the economic impact of childhood diseases is thus required to fully highlight the true benefits of childhood vaccination that may be realized. Herein we explore the limitations of existing economic evaluations for childhood vaccination, and how economic analyses could be better adapted in future. PMID:25424934

  16. EU-FP7-iMARS: Analysis of Mars Multi-Resolution Images Using Auto-Coregistration Data Mining and Crowd Source Techniques: Processed Results - a First Look

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven

    2016-06-01

    Understanding planetary atmosphere-surface exchange and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base it is being applied to many of the 400,000 level-1 EDR images taken by the 4 NASA orbital cameras. In particular, the NASA Viking Orbiter camera (VO), Mars Orbiter Camera (MOC), Context Camera (CTX) as well as the High Resolution Imaging Science Experiment (HiRISE) back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated showing an example from one of the HRSC quadrangle map-sheets. Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for

  17. Acrylamide levels in Finnish foodstuffs analysed with liquid chromatography tandem mass spectrometry.

    PubMed

    Eerola, Susanna; Hollebekkers, Koen; Hallikainen, Anja; Peltonen, Kimmo

    2007-02-01

    Sample clean-up and HPLC with tandem mass spectrometric detection (LC-MS/MS) were validated for the routine analysis of acrylamide in various foodstuffs. The method used proved to be reliable and the detection limit for routine monitoring was sensitive enough for foods and drinks (38 microg/kg for foods and 5 microg/L for drinks). The RSDs for repeatability and day-to-day variation were below 15% in all food matrices. Two hundred and one samples, which included more than 30 different types of food as well as foods manufactured and prepared in various ways, were analysed. The main types of food analysed were potato and cereal-based foods, processed foods (pizza, minced beef meat, meat balls, chicken nuggets, potato-ham casserole and fried bacon) and coffee. Acrylamide was detected at levels ranging from non-detectable to 1480 microg/kg in solid foods, with crisp bread exhibiting the highest levels. In drinks, the highest value (29 microg/L) was found in regular coffee drinks.

  18. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel of the image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is large. To alleviate this problem, we embed the algorithm in a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; the inter-frame affine parameters are estimated iteratively from the highest (coarsest) level down to the lowest level; and subsequent frames are compensated back to the first frame to yield the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
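
    The coarse-to-fine strategy described above can be sketched in a few lines. The following Python snippet is an illustrative sketch only (OpenCV is assumed, the frames are assumed to be grayscale 8-bit images, and the feature-tracking parameters are arbitrary; this is not the authors' implementation): it refines an inter-frame affine estimate from the coarsest pyramid level down to full resolution.

    import cv2
    import numpy as np

    def coarse_to_fine_affine(prev, curr, levels=3):
        # Coarse-to-fine estimate of the inter-frame affine motion between two
        # grayscale frames. Illustrative sketch, not the paper's code.
        pyr_prev, pyr_curr = [prev], [curr]
        for _ in range(levels - 1):                       # Gaussian pyramids
            pyr_prev.append(cv2.pyrDown(pyr_prev[-1]))
            pyr_curr.append(cv2.pyrDown(pyr_curr[-1]))

        A = np.float32([[1, 0, 0], [0, 1, 0]])            # start from identity
        for lvl in reversed(range(levels)):               # coarsest -> finest
            p, c = pyr_prev[lvl], pyr_curr[lvl]
            h, w = p.shape[:2]
            p_warp = cv2.warpAffine(p, A, (w, h))         # apply current estimate
            pts = cv2.goodFeaturesToTrack(p_warp, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7)
            if pts is not None:
                nxt, st, _ = cv2.calcOpticalFlowPyrLK(p_warp, c, pts, None)
                ok = st.flatten() == 1
                if ok.sum() >= 3:                         # need enough matches
                    dM, _ = cv2.estimateAffinePartial2D(pts[ok], nxt[ok])
                    if dM is not None:                    # compose residual motion
                        A = (np.vstack([dM, [0, 0, 1]])
                             @ np.vstack([A, [0, 0, 1]]))[:2]
            if lvl > 0:
                A = A.astype(np.float32)
                A[:, 2] *= 2.0                            # rescale translation
        return A

    Each subsequent frame can then be warped back to the reference frame with cv2.warpAffine using the accumulated transform, which is the compensation step that yields a stabilized sequence.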

  19. CO{sub 2} Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laes, Denise; Eisinger, Chris; Morgan, Craig

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the “Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region” (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km²) site analyses are usually possible, and such analyses can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  20. Coordinated In Situ Nanosims Analyses of H-C-O Isotopes in ALH 84001 Carbonates

    NASA Technical Reports Server (NTRS)

    Usui, T.; Alexander, C. M. O'D.; Wang, J.; Simon, J. I.; Jones, J. H.

    2016-01-01

    The surface geology and geomorphology of Mars indicate that it was once warm enough to maintain a large body of liquid water on its surface, though such a warm environment might have been transient. This study reports the hydrogen, carbon, and oxygen isotope compositions of the ancient atmosphere/hydrosphere of Mars based on in situ ion microprobe analyses of approximately 4 Ga-old carbonates in Allan Hills (ALH) 84001. The ALH 84001 carbonates are the most promising targets because they are thought to have formed from fluid that was closely associated with the Noachian atmosphere. While there are a number of carbon and oxygen isotope studies of the ALH 84001 carbonates, in situ hydrogen isotope analyses of these carbonates are limited and were reported more than a decade ago. Well-documented coordinated in situ analyses of carbon, oxygen and hydrogen isotopes provide an internally consistent dataset that can be used to constrain the nature of the Noachian atmosphere/hydrosphere and may eventually shed light on the hypothesis of ancient watery Mars.

  1. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation

  2. Multi-resolution Land Characteristics Consortium ...

    EPA Pesticide Factsheets


  3. Proteomic analyses of host and pathogen responses during bovine mastitis.

    PubMed

    Boehmer, Jamie L

    2011-12-01

    The pursuit of biomarkers for use as clinical screening tools, measures for early detection, disease monitoring, and as a means for assessing therapeutic responses has steadily evolved in human and veterinary medicine over the past two decades. Concurrently, advances in mass spectrometry have markedly expanded proteomic capabilities for biomarker discovery. While initial mass spectrometric biomarker discovery endeavors focused primarily on the detection of modulated proteins in human tissues and fluids, recent efforts have shifted to include proteomic analyses of biological samples from food animal species. Mastitis continues to garner attention in veterinary research due mainly to affiliated financial losses and food safety concerns over antimicrobial use, but also because there are only a limited number of efficacious mastitis treatment options. Accordingly, comparative proteomic analyses of bovine milk have emerged in recent years. Efforts to prevent agricultural-related food-borne illness have likewise fueled an interest in the proteomic evaluation of several prominent strains of bacteria, including common mastitis pathogens. The interest in establishing biomarkers of the host and pathogen responses during bovine mastitis stems largely from the need to better characterize mechanisms of the disease, to identify reliable biomarkers for use as measures of early detection and drug efficacy, and to uncover potentially novel targets for the development of alternative therapeutics. The following review focuses primarily on comparative proteomic analyses conducted on healthy versus mastitic bovine milk. However, a comparison of the host defense proteome of human and bovine milk and the proteomic analysis of common veterinary pathogens are likewise introduced.

  4. ADDITIONAL STRESS AND FRACTURE MECHANICS ANALYSES OF PRESSURIZED WATER REACTOR PRESSURE VESSEL NOZZLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Matthew; Yin, Shengjun; Stevens, Gary

    2012-01-01

    In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axi-symmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack for pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 to the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluation of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP

  5. Currie detection limits in gamma-ray spectroscopy.

    PubMed

    De Geer, Lars-Erik

    2004-01-01

    Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A programme, PeakMaker, produces random peaks with given characteristics on the screen and calculations are done to facilitate a full use of Poisson statistics in spectrum analyses. SHORT TECHNICAL NOTE SUMMARY: The Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates. A PeakMaker program is described and the few-count problem is analyzed.

  6. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. In the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights on the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector). This
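
    The wavelet multiresolution step described above can be illustrated with a short sketch (assuming the PyWavelets package; the 'db4' wavelet, the decomposition level and the synthetic monthly series are arbitrary choices, not those of the study). It splits a series into additive components, one per time-scale band, whose links with large-scale predictors can then be examined scale by scale.

    import numpy as np
    import pywt

    def wavelet_components(series, wavelet="db4", level=5):
        # Split a series into additive multiresolution components, one per
        # time-scale band plus a smooth trend. Wavelet and level are assumptions.
        coeffs = pywt.wavedec(series, wavelet, level=level)
        components = []
        for i in range(len(coeffs)):
            keep = [np.zeros_like(c) for c in coeffs]
            keep[i] = coeffs[i]                           # isolate one scale band
            components.append(pywt.waverec(keep, wavelet)[: len(series)])
        return components                                 # they sum back to the series

    # Example: a synthetic monthly streamflow record with an annual cycle.
    rng = np.random.default_rng(0)
    months = np.arange(600)
    flow = 100 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 600)
    bands = wavelet_components(flow)
    print(len(bands), "components; max reconstruction error:",
          float(np.max(np.abs(sum(bands) - flow))))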

  7. Item Analyses of Memory Differences

    PubMed Central

    Salthouse, Timothy A.

    2017-01-01

    Objective Although performance on memory and other cognitive tests is usually assessed with a score aggregated across multiple items, potentially valuable information is also available at the level of individual items. Method The current study illustrates how analyses of variance with item as one of the factors, and memorability analyses in which item accuracy in one group is plotted as a function of item accuracy in another group, can provide a more detailed characterization of the nature of group differences in memory. Data are reported for two memory tasks, word recall and story memory, across age, ability, repetition, delay, and longitudinal contrasts. Results The item-level analyses revealed evidence for largely uniform differences across items in the age, ability, and longitudinal contrasts, but differential patterns across items in the repetition contrast, and unsystematic item relations in the delay contrast. Conclusion Analyses at the level of individual items have the potential to indicate the manner by which group differences in the aggregate test score are achieved. PMID:27618285

  8. Safety effects of reducing the speed limit from 90km/h to 70km/h.

    PubMed

    De Pauw, Ellen; Daniels, Stijn; Thierie, Melissa; Brijs, Tom

    2014-01-01

    Speed is one of the main risk factors in traffic safety, as it increases both the chances and the severity of a crash. In order to achieve improved traffic safety by influencing the speed of travel, road authorities may decide to lower the legally imposed speed limits. In 2001 the Flemish government decided to lower speed limits from 90km/h to 70km/h on a considerable number of highways. The present study examines the effectiveness of this measure using a before-and-after study with a comparison group to account for general trend effects in road safety. Sixty-one road sections with a total length of 116km were included. The speed limits for those locations were restricted in 2001 and 2002. The comparison group consisted of 19 road sections with a total length of 53km and an unchanged speed limit of 90km/h throughout the research period. Taking trend into account, the analyses showed a 5% decrease [0.88; 1.03] in the crash rates after the speed limit restriction. A greater effect was identified in the case of crashes involving serious injuries and fatalities, which showed a decrease of 33% [0.57; 0.79]. Separate analyses of crashes at intersections and at road sections showed a higher effectiveness at road sections. It can be concluded from this study that speed limit restrictions do have a favorable effect on traffic safety, especially on severe crashes. Future research should examine the cause for the difference in the effect between road sections and intersections that was identified, taking vehicle speeds into account. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  10. Bifurcation and extinction limit of stretched premixed flames with chain-branching intermediate kinetics and radiative loss

    NASA Astrophysics Data System (ADS)

    Zhang, Huangwei; Chen, Zheng

    2018-05-01

    Premixed counterflow flames with thermally sensitive intermediate kinetics and radiation heat loss are analysed within the framework of large activation energy. Unlike previous studies considering one-step global reaction, two-step chemistry consisting of a chain branching reaction and a recombination reaction is considered here. The correlation between the flame front location and stretch rate is derived. Based on this correlation, the extinction limit and bifurcation characteristics of the strained premixed flame are studied, and the effects of fuel and radical Lewis numbers as well as radiation heat loss are examined. Different flame regimes and their extinction characteristics can be predicted by the present theory. It is found that fuel Lewis number affects the flame bifurcation qualitatively and quantitatively, whereas radical Lewis number only has a quantitative influence. Stretch rates at the stretch and radiation extinction limits respectively decrease and increase with fuel Lewis number before the flammability limit is reached, while the radical Lewis number shows the opposite tendency. In addition, the relation between the standard flammability limit and the limit derived from the strained near stagnation flame is affected by the fuel Lewis number, but not by the radical Lewis number. Meanwhile, the flammability limit increases with decreased fuel Lewis number, but with increased radical Lewis number. Radical behaviours at flame front corresponding to flame bifurcation and extinction are also analysed in this work. It is shown that radical concentration at the flame front, under extinction stretch rate condition, increases with radical Lewis number but decreases with fuel Lewis number. It decreases with increased radiation loss.

  11. Comparison of an infrared anaesthetic agent analyser (Datex-Ohmeda) with refractometry for measurement of isoflurane, sevoflurane and desflurane concentrations.

    PubMed

    Rudolff, Andrea S; Moens, Yves P S; Driessen, Bernd; Ambrisko, Tamas D

    2014-07-01

    To assess agreement between infrared (IR) analysers and a refractometer for measurements of isoflurane, sevoflurane and desflurane concentrations and to demonstrate the effect of customized calibration of IR analysers. In vitro experiment. Six IR anaesthetic monitors (Datex-Ohmeda) and a single portable refractometer (Riken). Both devices were calibrated following the manufacturer's recommendations. Gas samples were collected at common gas outlets of anaesthesia machines. A range of agent concentrations was produced by stepwise changes in dial settings: isoflurane (0-5% in 0.5% increments), sevoflurane (0-8% in 1% increments), or desflurane (0-18% in 2% increments). Oxygen flow was 2 L minute⁻¹. The orders of testing IR analysers, agents and dial settings were randomized. Duplicate measurements were performed at each setting. The entire procedure was repeated 24 hours later. Bland-Altman analysis was performed. Measurements on day-1 were used to yield calibration equations (IR measurements as dependent and refractometry measurements as independent variables), which were used to modify the IR measurements on day-2. Bias ± limits of agreement for isoflurane, sevoflurane and desflurane were 0.2 ± 0.3, 0.1 ± 0.4 and 0.7 ± 0.9 volume%, respectively. There were significant linear relationships between differences and means for all agents. The IR analysers became less accurate at higher gas concentrations. After customized calibration, the bias became almost zero and the limits of agreement became narrower. If similar IR analysers are used in research studies, they need to be calibrated against a reference method using the agent in question at multiple calibration points overlapping the range of interest. © 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
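
    The bias and limits of agreement quoted above come from a Bland-Altman analysis, which can be reproduced in outline as follows (a minimal NumPy sketch with fabricated paired readings, not the study's data or code):

    import numpy as np

    def bland_altman(method_a, method_b):
        # Bias and 95% limits of agreement between two measurement methods.
        a = np.asarray(method_a, dtype=float)
        b = np.asarray(method_b, dtype=float)
        diffs = a - b
        bias = diffs.mean()
        half_width = 1.96 * diffs.std(ddof=1)      # half-width of the limits
        return bias, (bias - half_width, bias + half_width)

    # Hypothetical paired isoflurane readings (volume %), purely illustrative.
    ir_readings = [1.1, 2.2, 3.1, 4.3, 5.2]
    refractometer = [1.0, 2.0, 3.0, 4.0, 5.0]
    bias, limits = bland_altman(ir_readings, refractometer)
    print(f"bias = {bias:.2f} vol%, limits of agreement = {limits}")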

  12. How hunter perceptions of wildlife regulations, agency trust, and satisfaction affect attitudes about duck bag limits

    USGS Publications Warehouse

    Schroeder, Susan A.; Fulton, David C.; Lawrence, Jeffrey S.; Cordts, Steven D.

    2017-01-01

    This study explored how factors, including the function of bag limits, agency trust, satisfaction, hunting participation, and demographics, related to opinions about duck bag limits. The results are from a survey of 2014 Minnesota resident waterfowl hunters. Analyses identified four dimensions of attitudes about functions of bag limits, including that they: (a) are descriptive in defining the acceptable number of ducks that can be bagged, (b) are injunctive in establishing how many ducks should be allowed to be bagged, (c) ensure fair opportunities for all hunters to bag ducks, and (d) reflect biological limitations to protect waterfowl populations. Descriptive and fairness functions of bag limits were related to opinions about bag limits, as were factors related to agency trust, satisfaction, ducks bagged, experience with more restrictive bag limits, hunter age, and hunting group membership. Agencies may increase support by building trust and emphasizing the descriptive and fairness functions of regulations.

  13. Lindemann histograms as a new method to analyse nano-patterns and phases

    NASA Astrophysics Data System (ADS)

    Makey, Ghaith; Ilday, Serim; Tokel, Onur; Ibrahim, Muhamet; Yavuz, Ozgun; Pavlov, Ihor; Gulseren, Oguz; Ilday, Omer

    The detection, observation, and analysis of material phases and atomistic patterns are of great importance for understanding systems exhibiting both equilibrium and far-from-equilibrium dynamics. As such, there is intense research on phase transitions and pattern dynamics in soft matter, statistical and nonlinear physics, and polymer physics. In order to identify phases and nano-patterns, the pair correlation function is commonly used. However, this approach is limited in terms of recognizing competing patterns in dynamic systems, and lacks visualisation capabilities. To overcome these limitations, we introduce Lindemann histogram quantification as an alternative method to analyse solid, liquid, and gas phases, along with hexagonal, square, and amorphous nano-pattern symmetries. We show that the proposed approach, based on a Lindemann parameter calculated per particle, maps local number densities to material phases or particle patterns. We apply the Lindemann histogram method to experimental data from dynamical colloidal self-assembly and identify competing patterns.
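
    As a rough illustration, a per-particle Lindemann-type parameter can be computed as the RMS positional fluctuation over a time window, normalised by a typical neighbour distance, and then histogrammed. The snippet below is purely illustrative; the normalisation, the window and the synthetic trajectory are assumptions rather than the authors' exact definition:

    import numpy as np

    def lindemann_per_particle(traj, neighbor_dist):
        # traj has shape (n_frames, n_particles, 2); returns one value per particle:
        # RMS fluctuation about the time-averaged position, over neighbour distance.
        mean_pos = traj.mean(axis=0)                                  # (n_particles, 2)
        fluct = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))
        return fluct / neighbor_dist

    # Hypothetical trajectory: 200 frames, 500 particles jittering about a lattice.
    rng = np.random.default_rng(1)
    sites = rng.uniform(0, 50, size=(500, 2))
    traj = sites + rng.normal(0, 0.05, size=(200, 500, 2))
    delta = lindemann_per_particle(traj, neighbor_dist=1.0)
    hist, edges = np.histogram(delta, bins=50)                        # the Lindemann histogram
    print("median per-particle parameter:", float(np.median(delta)))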

  14. Performance limit of daytime radiative cooling in warm humid environment

    NASA Astrophysics Data System (ADS)

    Suichi, Takahiro; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji

    2018-05-01

    Daytime radiative cooling potentially offers efficient passive cooling, but the performance is naturally limited by the environment, such as the ambient temperature and humidity. Here, we investigate the performance limit of daytime radiative cooling under warm and humid conditions in Okayama, Japan. A cooling device, consisting of alternating layers of SiO2 and poly(methyl methacrylate) on an Al mirror, is fabricated and characterized to demonstrate a high reflectance for sunlight and a selective thermal radiation in the mid-infrared region. In the temperature measurement under sunlight irradiation, the device is 3.4 °C cooler than a bare Al mirror, but 2.8 °C warmer than the ambient of 35 °C. The corresponding numerical analyses reveal that the atmospheric window at λ = 16–25 μm is closed due to the high humidity, thereby limiting the net emission power of the device. Our study of the influence of humidity on cooling performance provides a general guideline for how one can achieve practical passive cooling in a warm humid environment.

  15. A decade of individual participant data meta-analyses: A review of current practice.

    PubMed

    Simmonds, Mark; Stewart, Gavin; Stewart, Lesley

    2015-11-01

    Individual participant data (IPD) systematic reviews and meta-analyses are often considered to be the gold standard for meta-analysis. In the ten years since the first review into the methodology and reporting practice of IPD reviews was published much has changed in the field. This paper investigates current reporting and statistical practice in IPD systematic reviews. A systematic review was performed to identify systematic reviews that collected and analysed IPD. Data were extracted from each included publication on a variety of issues related to the reporting of IPD review process, and the statistical methods used. There has been considerable growth in the use of "one-stage" methods to perform IPD meta-analyses. The majority of reviews consider at least one covariate other than the primary intervention, either using subgroup analysis or including covariates in one-stage regression models. Random-effects analyses, however, are not often used. Reporting of review methods was often limited, with few reviews presenting a risk-of-bias assessment. Details on issues specific to the use of IPD were little reported, including how IPD were obtained; how data was managed and checked for consistency and errors; and for how many studies and participants IPD were sought and obtained. While the last ten years have seen substantial changes in how IPD meta-analyses are performed there remains considerable scope for improving the quality of reporting for both the process of IPD systematic reviews, and the statistical methods employed in them. It is to be hoped that the publication of the PRISMA-IPD guidelines specific to IPD reviews will improve reporting in this area. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Analyses and forecasts with LAWS winds

    NASA Technical Reports Server (NTRS)

    Wang, Muyin; Paegle, Jan

    1994-01-01

    Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from European Center for Medium Range Weather Forecasts, US National Meteorological Center, and United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America, and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies of the moisture flux divergence computed from different gridded analyses still exist. The conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be due to the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.

  17. Effects of sampling close relatives on some elementary population genetics analyses.

    PubMed

    Wang, Jinliang

    2018-01-01

    Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (H e ), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., H e ) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017

  18. Hα line shape in front of the limiter in the HT-6M tokamak

    NASA Astrophysics Data System (ADS)

    Wan, Baonian; Li, Jiangang; Luo, Jiarong; Xie, Jikang; Wu, Zhenwei; Zhang, Xianmei; HT-6M Group

    1999-11-01

    The Hα line shape in front of the limiter in the HT-6M tokamak is analysed by multi-Gaussian fitting. The energy distribution of neutral hydrogen atoms reveals that Hα radiation is contributed by Franck-Condon atoms, atoms reflected at the limiter surface and charge exchange. Multi-Gaussian fitting of the Hα spectral profile indicates contributions of 60% from reflection particles and 40% from molecule dissociation to recycling. Ion temperatures in central regions are obtained from the spectral width of charge exchange components. Dissociation of hydrogen molecules and reflection of particles at the limiter surface are dominant in edge recycling. Reduction of particle reflection at the limiter surface is important for controlling edge recycling. The measured profiles of neutral hydrogen atom density are reproduced by a particle continuity equation and a simplified one dimensional Monte Carlo simulation code.
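
    Multi-Gaussian fitting of a line profile of this kind can be sketched with SciPy's curve_fit; the three-component split (narrow, reflected and charge-exchange) and all numerical values below are illustrative assumptions, not the measured HT-6M data:

    import numpy as np
    from scipy.optimize import curve_fit

    def multi_gauss(x, *params):
        # Sum of Gaussians; params = (amplitude, centre, sigma) repeated.
        y = np.zeros_like(x, dtype=float)
        for a, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
            y += a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
        return y

    # Synthetic H-alpha-like profile: narrow, medium and broad components.
    x = np.linspace(-3, 3, 300)                        # wavelength offset (arb. units)
    truth = multi_gauss(x, 1.0, 0.0, 0.2, 0.6, 0.0, 0.6, 0.2, 0.0, 1.5)
    y = truth + np.random.default_rng(2).normal(0, 0.01, x.size)

    p0 = [1, 0, 0.3, 0.5, 0, 0.7, 0.1, 0, 1.2]         # initial guesses
    popt, _ = curve_fit(multi_gauss, x, y, p0=p0)
    print("fitted sigmas (narrow/medium/broad):", np.round(popt[2::3], 3))

    The relative areas of the fitted components then give the fractional contributions of each recycling channel, analogous to the 60%/40% split reported in the abstract.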

  19. Using Nondestructive Portable X-ray Fluorescence Spectrometers on Stone, Ceramics, Metals, and Other Materials in Museums: Advantages and Limitations.

    PubMed

    Tykot, Robert H

    2016-01-01

    Elemental analysis is a fundamental method of analysis on archaeological materials to address their overall composition or identify the source of their geological components, yet having access to instrumentation, its often destructive nature, and the time and cost of analyses have limited the number and/or size of archaeological artifacts tested. The development of portable X-ray fluorescence (pXRF) instruments over the past decade, however, has allowed nondestructive analyses to be conducted in museums around the world, on virtually any size artifact, producing data for up to several hundred samples per day. Major issues have been raised, however, about the sensitivity, precision, and accuracy of these devices, and the limitation of performing surface analysis on potentially heterogeneous objects. The advantages and limitations of pXRF are discussed here regarding archaeological studies of obsidian, ceramics, metals, bone, and painted materials. © The Author(s) 2015.

  20. Thermodynamic analyses and the experimental validation of the Pulse Tube Expander system

    NASA Astrophysics Data System (ADS)

    Jia, Qiming; Gong, Linghui; Feng, Guochao; Zou, Longhui

    2018-04-01

    A Pulse Tube Expander (PTE) for small and medium capacity cryogenic refrigeration systems is described in this paper. An analysis of the Pulse Tube Expander is developed based on the thermodynamic analyses of the system. It is shown that the gas expansion is isentropic in the cold end of the pulse tube. The temperature variation at the outlet of Pulse Tube Expander is measured and the isentropic efficiency is calculated to be 0.455 at 2 Hz. The pressure oscillations in the pulse tube are obtained at different frequencies. The limitations and advantages of this system are also discussed.
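
    For context, the isentropic efficiency of a gas expander is commonly estimated from inlet and outlet temperatures and pressures under an ideal-gas, constant-cp assumption; whether the authors use exactly this definition is an assumption on our part. A minimal sketch with made-up numbers:

    def isentropic_efficiency(T_in, T_out, p_in, p_out, gamma=1.667):
        # Ratio of actual to ideal (isentropic) temperature drop across the expander.
        # Ideal-gas, constant-cp assumption; gamma ~ 5/3 for helium.
        T_out_ideal = T_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)
        return (T_in - T_out) / (T_in - T_out_ideal)

    # Hypothetical inlet/outlet conditions (values are placeholders, not the paper's data).
    print(round(isentropic_efficiency(T_in=300.0, T_out=255.0,
                                      p_in=1.6e6, p_out=0.8e6), 3))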

  1. The effect of different control point sampling sequences on convergence of VMAT inverse planning

    NASA Astrophysics Data System (ADS)

    Pardo Montero, Juan; Fenwick, John D.

    2011-04-01

    A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of
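
    The three families of control-point sampling sequences discussed above (doubling, Otto-like one-at-a-time, and equi-length multiresolution levels) can be generated with a few small helpers; the sketch below is illustrative only, and the specific counts are assumed rather than taken from the paper:

    def doubling_sequence(start, total):
        # Control points per multiresolution level, roughly doubling each time.
        seq, n = [], start
        while n < total:
            seq.append(n)
            n *= 2
        seq.append(total)
        return seq

    def equi_length_sequence(levels, total):
        # `levels` multiresolution levels, each adding the same number of points
        # (an E20/E30-style schedule).
        step = total / levels
        return [round(step * (i + 1)) for i in range(levels)]

    def one_at_a_time(start, total):
        # Otto-like progressive sampling: add one control point per step.
        return list(range(start, total + 1))

    # Illustrative schedules for a 180-control-point arc (counts assumed).
    print(doubling_sequence(11, 180))      # doubling levels
    print(equi_length_sequence(20, 180))   # E20-style equi-length levels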

  2. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a 10x computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
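
    The wavelet parametrization and data-driven sparsification idea can be illustrated with a toy two-dimensional example (using PyWavelets, a Haar basis and a synthetic point-source field, all of which are assumptions rather than the authors' setup): decompose the field, keep only the largest coefficients, and reconstruct.

    import numpy as np
    import pywt

    def sparsify_field(field, wavelet="haar", keep_fraction=0.05):
        # Represent a 2D emission-like field in a wavelet basis and keep only the
        # largest coefficients -- a toy stand-in for data-driven sparsification.
        coeffs = pywt.wavedec2(field, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
        arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
        coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices,
                                             output_format="wavedec2")
        return pywt.waverec2(coeffs_sparse, wavelet), int(np.count_nonzero(arr_sparse))

    # Hypothetical "night lights" proxy: a few bright point sources on a grid.
    rng = np.random.default_rng(3)
    field = np.zeros((64, 64))
    field[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = rng.uniform(1, 10, 30)
    recon, n_kept = sparsify_field(field)
    print(n_kept, "coefficients kept; max error:", float(np.abs(recon - field).max()))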

  4. Meta-Analyses of Predictors of Hope in Adolescents.

    PubMed

    Yarcheski, Adela; Mahon, Noreen E

    2016-03-01

    The purposes of this study were to identify predictors of hope in the literature reviewed, to use meta-analysis to determine the mean effect size (ES) across studies between each predictor and hope, and to examine four moderators on each predictor-hope relationship. Using preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for the literature reviewed, 77 published studies or doctoral dissertations completed between 1990 and 2012 met the inclusion criteria. Eleven predictors of hope were identified and each predictor in relation to hope was subjected to meta-analysis. Five predictors (positive affect, life satisfaction, optimism, self-esteem, and social support) of hope had large mean ESs, 1 predictor (depression) had a medium ES, 4 predictors (negative affect, stress, academic achievement, and violence) had small ESs, and 1 predictor (gender) had a trivial ES. Findings are interpreted for the 11 predictors in relation to hope. Limitations and conclusions are addressed; future studies are recommended. © The Author(s) 2014.

  5. An umbrella review of meta-analyses of interventions to improve maternal outcomes for teen mothers.

    PubMed

    SmithBattle, Lee; Loman, Deborah G; Chantamit-O-Pas, Chutima; Schneider, Joanne Kraenzle

    2017-08-01

    The purpose of this study was to perform an umbrella review of meta-analyses of intervention studies designed to improve outcomes of pregnant or parenting teenagers. An extensive search retrieved nine reports, which provided 21 meta-analyses. Data were extracted by two reviewers. Methodological quality was assessed using the AMSTAR Instrument. Most effect sizes were small, but high-quality studies showed significant outcomes for reduced low birth weight (RR = 0.60), repeat pregnancies/births (OR = 0.47-0.62), maternal education (OR = 1.21-1.83), and maternal employment (OR = 1.26). Several parenting outcomes (parent-child teaching interaction post-intervention [SMD = -0.91] and at follow-up [SMD = -1.07], and parent-child relationship post-intervention [SMD = -0.71] and at follow-up [SMD = -0.90]) were significant, but sample sizes were very small. Many reports did not include moderator analyses. Behavioral interventions offer limited resources and occur too late to mitigate the educational and social disparities that precede teen pregnancy. Future intervention research and policies that redress the social determinants of early childbearing are recommended. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  6. Global Change Impacts on Future Fire Regimes: Distinguishing Between Climate-limited vs Ignition-Limited Landscapes

    NASA Astrophysics Data System (ADS)

    Keeley, J. E.; Syphard, A. D.

    2016-12-01

    Global warming is expected to exacerbate fire impacts. Predicting how climates will impact future fire regimes requires an understanding of how temperature and precipitation interact to control fire activity. Inevitably this requires historical analyses that relate annual burning to climate variation. Within climatically homogeneous subregions, montane forested landscapes show strong relationships between annual fluctuations in temperature and precipitation and area burned; however, this is strongly season dependent; e.g., winter temperatures have very little or no effect but spring and summer temperatures are critical. Climate models are needed that predict future seasonal temperature changes if we are to forecast future fire regimes in these forests. Climate does not appear to be a major determinant of fire activity on all landscapes. Lower elevations and lower latitudes show little or no increase in fire activity with hotter and drier conditions. On these landscapes climate is not usually limiting to fires but these vegetation types are ignition-limited, and because they are closely juxtaposed with human habitations, fire regimes are more strongly controlled by other direct anthropogenic impacts. Predicting future fire regimes is not rocket science, it is far more complicated than that. Climate change is not relevant on some landscapes, but where climate is relevant the relationship will change due to direct climate effects on vegetation trajectories, as well as by feedback processes of fire effects on vegetation distribution, plus policy changes in how we manage ecosystems.

  7. Thermodynamic limit for coherence-limited solar power conversion

    NASA Astrophysics Data System (ADS)

    Mashaal, Heylal; Gordon, Jeffrey M.

    2014-09-01

    The spatial coherence of solar beam radiation is a key constraint in solar rectenna conversion. Here, we present a derivation of the thermodynamic limit for coherence-limited solar power conversion - an expansion of Landsberg's elegant basic bound, originally limited to incoherent converters at maximum flux concentration. First, we generalize Landsberg's work to arbitrary concentration and angular confinement. Then we derive how the values are further lowered for coherence-limited converters. The results do not depend on a particular conversion strategy. As such, they pertain to systems that span geometric to physical optics, as well as classical to quantum physics. Our findings indicate promising potential for solar rectenna conversion.

  8. EEO Implications of Job Analyses.

    ERIC Educational Resources Information Center

    Lacy, D. Patrick, Jr.

    1979-01-01

    Discusses job analyses as they relate to the requirements of Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, and the Rehabilitation Act of 1973. Argues that job analyses can establish the job-relatedness of entrance requirements and aid in defenses against charges of discrimination. Journal availability: see EA 511 615.

  9. Feasibility of potable water generators to meet vessel numeric ballast water discharge limits.

    PubMed

    Albert, Ryan J; Viveiros, Edward; Falatko, Debra S; Tamburri, Mario N

    2017-07-15

    Ballast water is taken on-board vessels into ballast water tanks to maintain vessel draft, buoyancy, and stability. Unmanaged ballast water contains aquatic organisms that, when transported and discharged to non-native waters, may establish as invasive species. Technologies capable of achieving regulatory limits designed to decrease the likelihood of invasion include onboard ballast water management systems. However, to date, the treatment development and manufacturing marketplace is limited to large vessels with substantial ballast requirements. For smaller vessels or vessels with reduced ballast requirements, we evaluated the feasibility of meeting the discharge limits by generating ballast water using onboard potable water generators. Case studies and parametric analyses demonstrated the architectural feasibility of installing potable water generators onboard actual vessels with minimal impacts for most vessel types evaluated. Furthermore, land-based testing of a potable water generator demonstrated capability to meet current numeric discharge limits for living organisms in all size classes. Published by Elsevier Ltd.

  10. Time-Frequency Analyses of Tide-Gauge Sensor Data

    PubMed Central

    Erol, Serdar

    2011-01-01

    The real world phenomena being observed by sensors are generally non-stationary in nature. The classical linear techniques for analysis and modeling natural time-series observations are inefficient and should be replaced by non-linear techniques whose theoretical aspects and performances are varied. In this manner, adopting the most appropriate technique and strategy is essential in evaluating sensors’ data. In this study, two different time-series analysis approaches, namely least squares spectral analysis (LSSA) and wavelet analysis (continuous wavelet transform, cross wavelet transform and wavelet coherence algorithms as extensions of wavelet analysis), are applied to sea-level observations recorded by tide-gauge sensors, and the advantages and drawbacks of these methods are reviewed. The analyses were carried out using sea-level observations recorded at the Antalya-II and Erdek tide-gauge stations of the Turkish National Sea-Level Monitoring System. In the analyses, the useful information hidden in the noisy signals was detected, and the common features between the two sea-level time series were clarified. The tide-gauge records have data gaps in time because of issues such as instrumental shortcomings and power outages. Concerning the difficulties of the time-frequency analysis of data with voids, the sea-level observations were preprocessed, and the missing parts were predicted using the neural network method prior to the analysis. In conclusion, the merits and limitations of the techniques in evaluating non-stationary observations by means of tide-gauge sensors records were documented and an analysis strategy for the sequential sensors observations was presented. PMID:22163829

  11. Time-frequency analyses of tide-gauge sensor data.

    PubMed

    Erol, Serdar

    2011-01-01

    The real world phenomena being observed by sensors are generally non-stationary in nature. The classical linear techniques for analysis and modeling natural time-series observations are inefficient and should be replaced by non-linear techniques whose theoretical aspects and performances are varied. In this manner, adopting the most appropriate technique and strategy is essential in evaluating sensors' data. In this study, two different time-series analysis approaches, namely least squares spectral analysis (LSSA) and wavelet analysis (continuous wavelet transform, cross wavelet transform and wavelet coherence algorithms as extensions of wavelet analysis), are applied to sea-level observations recorded by tide-gauge sensors, and the advantages and drawbacks of these methods are reviewed. The analyses were carried out using sea-level observations recorded at the Antalya-II and Erdek tide-gauge stations of the Turkish National Sea-Level Monitoring System. In the analyses, the useful information hidden in the noisy signals was detected, and the common features between the two sea-level time series were clarified. The tide-gauge records have data gaps in time because of issues such as instrumental shortcomings and power outages. Concerning the difficulties of the time-frequency analysis of data with voids, the sea-level observations were preprocessed, and the missing parts were predicted using the neural network method prior to the analysis. In conclusion, the merits and limitations of the techniques in evaluating non-stationary observations by means of tide-gauge sensors records were documented and an analysis strategy for the sequential sensors observations was presented.
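
    The continuous-wavelet and cross-wavelet steps used in the two records above can be sketched as follows (a minimal PyWavelets example with synthetic hourly series; the Morlet wavelet, the scales and the tidal period are illustrative assumptions, not the Antalya-II or Erdek data):

    import numpy as np
    import pywt

    # Two synthetic hourly sea-level records sharing a tidal-band oscillation.
    t = np.arange(0, 24 * 30, 1.0)                      # 30 days, hourly samples
    common = np.sin(2 * np.pi * t / 12.42)              # assumed M2-like period (hours)
    rng = np.random.default_rng(4)
    station_a = common + rng.normal(0, 0.3, t.size)     # placeholder "station A"
    station_b = 0.8 * common + rng.normal(0, 0.3, t.size)

    scales = np.arange(1, 64)
    coef_a, freqs = pywt.cwt(station_a, scales, "morl", sampling_period=1.0)
    coef_b, _ = pywt.cwt(station_b, scales, "morl", sampling_period=1.0)

    cross = np.abs(coef_a * np.conj(coef_b))            # cross-wavelet spectrum
    peak_period = 1.0 / freqs[cross.sum(axis=1).argmax()]
    print("peak common period (hours):", round(float(peak_period), 2))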

  12. Tracing common origins of Genomic Islands in prokaryotes based on genome signature analyses.

    PubMed

    van Passel, Mark Wj

    2011-09-01

    Horizontal gene transfer constitutes a powerful and innovative force in evolution, but often little is known about the actual origins of transferred genes. Sequence alignments are generally of limited use in tracking the original donor, since still only a small fraction of the total genetic diversity is thought to be uncovered. Alternatively, approaches based on similarities in the genome specific relative oligonucleotide frequencies do not require alignments. Even though the exact origins of horizontally transferred genes may still not be established using these compositional analyses, it does suggest that compositionally very similar regions are likely to have had a common origin. These analyses have shown that up to a third of large acquired gene clusters that reside in the same genome are compositionally very similar, indicative of a shared origin. This brings us closer to uncovering the original donors of horizontally transferred genes, and could help in elucidating possible regulatory interactions between previously unlinked sequences.
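
    A simple genome-signature comparison of the kind described can be sketched by computing tetranucleotide relative frequencies for two sequences and a distance between them; the snippet below is a toy illustration with random sequences, not a validated implementation of the published method:

    from itertools import product
    import numpy as np

    def tetra_signature(seq, k=4):
        # Relative k-mer (default tetranucleotide) frequency vector of a DNA
        # sequence -- a simple genome-signature proxy.
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        index = {kmer: i for i, kmer in enumerate(kmers)}
        counts = np.zeros(len(kmers))
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            j = index.get(seq[i:i + k])
            if j is not None:                           # skip k-mers containing N's
                counts[j] += 1
        return counts / max(counts.sum(), 1)

    def signature_distance(seq_a, seq_b):
        # Euclidean distance between signatures: smaller values suggest
        # compositional similarity (and possibly a shared origin).
        return float(np.linalg.norm(tetra_signature(seq_a) - tetra_signature(seq_b)))

    # Hypothetical genomic-island fragments (random toy sequences).
    rng = np.random.default_rng(5)
    island_1 = "".join(rng.choice(list("ACGT"), 5000))
    island_2 = "".join(rng.choice(list("ACGT"), 5000))
    print("signature distance:", round(signature_distance(island_1, island_2), 4))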

  13. High local unemployment rates limit work after lung transplantation.

    PubMed

    Nau, Michael; Shrider, Emily A; Tobias, Joseph D; Hayes, Don; Tumin, Dmitry

    2016-10-01

    Most lung transplant (LTx) recipients recover sufficient functional status to resume working, yet unemployment is common after LTx. Weak local labor markets may limit employment opportunities for LTx recipients. United Network for Organ Sharing data on first-time LTx recipients 18-60 years old who underwent transplant between 2010 and 2014 were linked to American Community Survey data on unemployment rates at the ZIP Code level. Multivariable competing-risks regression modeled the influence of dichotomous (≥8%) and continuous local unemployment rates on employment after LTx, accounting for the competing risk of mortality. For comparison, analyses were duplicated in a cohort of heart transplant (HTx) recipients who underwent transplant during the same period. The analysis included 3,897 LTx and 5,577 HTx recipients. Work after LTx was reported by 300 (16.3%) residents of low-unemployment areas and 244 (11.9%) residents of high-unemployment areas (p < 0.001). Multivariable analysis of 3,626 LTx recipients with complete covariate data found that high local unemployment rates limited employment after LTx (sub-hazard ratio = 0.605; 95% confidence interval = 0.477, 0.768; p < 0.001), conditional on not working before transplant. Employment after HTx was higher compared with employment after LTx, and not associated with local unemployment rates in multivariable analyses. LTx recipients of working age exhibit exceptionally low employment rates. High local unemployment rates exacerbate low work participation after LTx, and may discourage job search in this population. Copyright © 2016 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  14. Origins Space Telescope: Breaking the Confusion Limit

    NASA Astrophysics Data System (ADS)

    Wright, Edward L.; Origins Space Telescope Science and Technology Definition Team

    2018-01-01

    The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, one of the four science and technology definition studies of NASA Headquarters for the 2020 Astronomy and Astrophysics Decadal survey. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. OST will have a background-limited sensitivity for a background 27,000 times lower than the Herschel background caused by thermal emission from Herschel's warm telescope. For continuum observations the confusion limit in a diffraction-limited survey can be reached in very short integration times at longer far-infrared wavelengths. But the confusion limit can be pierced for both the nearest and the farthest objects to be observed by OST. For the outer Solar System, the targets' motion across the sky will provide a clear signature in surveys repeated after an interval of days to months. This will provide a size-frequency distribution of TNOs that is not biased toward high albedo objects. For the distant Universe, the first galaxies and the first metals will provide a third dimension of spectral information that can be measured with a long-slit, medium resolution spectrograph. This will allow 3D mapping to measure source densities as a function of redshift. The continuum shape associated with sources at different redshifts can be derived from correlation analyses of these 3D maps. Fairly large sky areas can be scanned by moving the spacecraft at a constant angular rate perpendicular to the orientation of the long slit of the spectrograph, avoiding the high overhead of step-and-stare surveying with a large space observatory. We welcome you to contact the Science and Technology Definition Team (STDT) with your science needs and ideas by emailing us at ost_info@lists.ipac.caltech.edu

  15. Nuclear DNA analyses in genetic studies of populations: practice, problems and prospects.

    PubMed

    Zhang, De-Xing; Hewitt, Godfrey M

    2003-03-01

    Population-genetic studies have been remarkably productive and successful in the last decade following the invention of PCR technology and the introduction of mitochondrial and microsatellite DNA markers. While mitochondrial DNA has proven powerful for genealogical and evolutionary studies of animal populations, and microsatellite sequences are the most revealing DNA markers available so far for inferring population structure and dynamics, they both have important and unavoidable limitations. To obtain a fuller picture of the history and evolutionary potential of populations, genealogical data from nuclear loci are essential, and the inclusion of other nuclear markers, i.e. single copy nuclear polymorphic (scnp) sequences, is clearly needed. Four major uncertainties have confronted nuclear DNA analyses of populations: the availability of scnp markers for carrying out such analyses; technical laboratory hurdles for resolving haplotypes; difficulty in data analysis because of recombination, low divergence levels and intraspecific multifurcation evolution; and the utility of scnp markers for addressing population-genetic questions. In this review, we discuss the availability of highly polymorphic single copy DNA in the nuclear genome, describe patterns and rate of evolution of nuclear sequences, summarize past empirical and theoretical efforts to recover and analyse data from scnp markers, and examine the difficulties, challenges and opportunities faced in such studies. We show that although challenges still exist, the above-mentioned obstacles are now being removed. Recent advances in technology and increases in statistical power provide the prospect of nuclear DNA analyses becoming routine practice, allowing allele-discriminating characterization of scnp loci and microsatellite loci. This certainly will increase our ability to address more complex questions, and thereby the sophistication of genetic analyses of populations.

  16. Wavelets and molecular structure

    NASA Astrophysics Data System (ADS)

    Carson, Mike

    1996-08-01

    The wavelet method offers possibilities for display, editing, and topological comparison of proteins at a user-specified level of detail. Wavelets are a mathematical tool that first found application in signal processing. The multiresolution analysis of a signal via wavelets provides a hierarchical series of `best' lower-resolution approximations. B-spline ribbons model the protein fold, with one control point per residue. Wavelet analysis sets limits on the information required to define the winding of the backbone through space, suggesting that a recognizable fold can be generated from a number of points equal to one-quarter or less of the number of residues. Wavelets applied to surfaces and volumes show promise in structure-based drug design.
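
    As an illustration of this kind of multiresolution reduction, the sketch below uses the PyWavelets package on a synthetic one-coordinate "backbone" track with one value per residue; the signal, the wavelet choice, and the decomposition level are assumptions for illustration, not the paper's actual procedure.

```python
# A sketch of wavelet multiresolution reduction of a per-residue coordinate
# track; the signal, wavelet ("db2") and decomposition level are illustrative.
import numpy as np
import pywt

n_residues = 128
t = np.linspace(0, 4 * np.pi, n_residues)
backbone_x = np.cos(t) + 0.05 * np.random.randn(n_residues)  # one coordinate per residue

coeffs = pywt.wavedec(backbone_x, "db2", level=2)   # [cA2, cD2, cD1]
print(len(coeffs[0]), "coarse points for", n_residues, "residues")

# Zero the detail coefficients and reconstruct a lower-resolution backbone track
# from roughly one quarter as many points as there are residues.
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
approx = pywt.waverec(coeffs, "db2")
```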

  17. Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy

    NASA Astrophysics Data System (ADS)

    Batanova, V. G.; Sobolev, A. V.; Magnin, V.

    2018-01-01

    Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases proportionally to the square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of corresponding elements. To analyse trace elements accurately requires careful estimation of background, and consideration of sample damage under the beam and secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample
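
    The stated scaling, that the detection limit falls with the square root of counting time and probe current, can be illustrated with a short calculation; the reference values below are invented for the example and are not measurements from the paper.

```python
# Detection limit scaling DL ~ 1/sqrt(t * I): a worked illustration with
# made-up reference numbers, not data from the study.
def scaled_detection_limit(dl_ref, t_ref, i_ref, t_new, i_new):
    """Scale a reference detection limit to a new counting time and probe current."""
    return dl_ref * ((t_ref * i_ref) / (t_new * i_new)) ** 0.5

# A 40 ppm limit at 10 s / 100 nA improves to roughly 4.4 ppm at 90 s / 900 nA.
print(scaled_detection_limit(40.0, 10.0, 100.0, 90.0, 900.0))
```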

  18. Electron/proton spectrometer certification documentation analyses

    NASA Technical Reports Server (NTRS)

    Gleeson, P.

    1972-01-01

    A compilation of analyses generated during the development of the electron-proton spectrometer for the Skylab program is presented. The data documents the analyses required by the electron-proton spectrometer verification plan. The verification plan was generated to satisfy the ancillary hardware requirements of the Apollo Applications program. The certification of the spectrometer requires that various tests, inspections, and analyses be documented, approved, and accepted by reliability and quality control personnel of the spectrometer development program.

  19. Spark and HPC for High Energy Physics Data Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming; therefore, intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to cause changes to the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system, and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
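
    A minimal sketch of the kind of workflow the abstract describes, reading HDF5 files inside Spark partitions with h5py; the file names, dataset layout, and selection cut are hypothetical and are not taken from the CMS use case.

```python
# Sketch: distribute HDF5 files across Spark partitions and compute a per-event
# quantity (missing transverse energy) interactively. File paths and dataset
# names below are hypothetical placeholders.
from pyspark.sql import SparkSession
import h5py
import numpy as np

spark = SparkSession.builder.appName("hep-hdf5-sketch").getOrCreate()
sc = spark.sparkContext

files = ["events_000.h5", "events_001.h5"]  # hypothetical input files

def missing_et(paths):
    # Each partition opens its own HDF5 files and yields per-event quantities.
    for path in paths:
        with h5py.File(path, "r") as f:
            px = f["met/px"][:]          # hypothetical dataset layout
            py = f["met/py"][:]
            for value in np.hypot(px, py):
                yield float(value)

met = sc.parallelize(files, numSlices=len(files)).mapPartitions(missing_et)
print(met.filter(lambda x: x > 200.0).count())   # events above a MET cut

spark.stop()
```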

  20. Integration of kinetic isotope effect analyses to elucidate ribonuclease mechanism.

    PubMed

    Harris, Michael E; Piccirilli, Joseph A; York, Darrin M

    2015-11-01

    The well-studied mechanism of ribonuclease A is believed to involve concerted general acid-base catalysis by two histidine residues, His12 and His119. The basic features of this mechanism are often cited to explain rate enhancement by both protein and RNA enzymes that catalyze RNA 2'-O-transphosphorylation. Recent kinetic isotope effect analyses and computational studies are providing a more chemically detailed description of the mechanism of RNase A and the rate-limiting transition state. Overall, the results support an asynchronous mechanism for both solution and ribonuclease catalyzed reactions in which breakdown of a transient dianionic phosphorane intermediate by 5'O-P bond cleavage is rate limiting. Relative to non-enzymatic reactions catalyzed by specific base, a smaller KIE on the 5'O leaving group and a less negative βLG are observed for RNase A catalysis. Quantum mechanical calculations consistent with these data support a model in which electrostatic and H-bonding interactions with the non-bridging oxygens and proton transfer from His119 render departure of the 5'O less advanced and stabilize charge buildup in the transition state. Both experiment and computation indicate advanced 2'O-P bond formation in the rate-limiting transition state. However, this feature makes it difficult to resolve the chemical steps involved in 2'O activation. Thus, modeling the transition state for RNase A catalysis underscores those elements of its chemical mechanism that are well resolved, as well as highlighting those where ambiguity remains. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Published by Elsevier B.V.

  1. Grey literature in meta-analyses.

    PubMed

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  2. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    PubMed

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will be supported by MS Excel 2016 soon, would eliminate some limitations of using statistical formulas within MS Excel.

  3. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses?

    PubMed

    McAuley, L; Pham, B; Tugwell, P; Moher, D

    2000-10-07

    The inclusion of only a subset of all available evidence in a meta-analysis may introduce biases and threaten its validity; this is particularly likely if the subset of included studies differ from those not included, which may be the case for published and grey literature (unpublished studies, with limited distribution). We set out to examine whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials. From a random sample of 135 meta-analyses, we identified and retrieved 33 publications that included both grey and published primary studies. The 33 publications contributed 41 separate meta-analyses from several disease areas. General characteristics of the meta-analyses and associated studies and outcome data at the trial level were collected. We explored the effects of the inclusion of grey literature on the quantitative results using logistic-regression analyses. 33% of the meta-analyses were found to include some form of grey literature. The grey literature, when included, accounts for between 4.5% and 75% of the studies in a meta-analysis. On average, published work, compared with grey literature, yielded significantly larger estimates of the intervention effect by 15% (ratio of odds ratios=1.15 [95% CI 1.04-1.28]). Excluding abstracts from the analysis further compounded the exaggeration (1.33 [1.10-1.60]). The exclusion of grey literature from meta-analyses can lead to exaggerated estimates of intervention effectiveness. In general, meta-analysts should attempt to identify, retrieve, and include all reports, grey and published, that meet predefined inclusion criteria.

  4. How to limit false positives in environmental DNA and metabarcoding?

    PubMed

    Ficetola, Gentile Francesco; Taberlet, Pierre; Coissac, Eric

    2016-05-01

    Environmental DNA (eDNA) and metabarcoding are boosting our ability to acquire data on species distribution in a variety of ecosystems. Nevertheless, as with most sampling approaches, eDNA is not perfect. It can fail to detect species that are actually present, and even false positives are possible: a species may be apparently detected in areas where it is actually absent. Controlling false positives remains a main challenge for eDNA analyses: in this issue of Molecular Ecology Resources, Lahoz-Monfort et al. test the performance of multiple statistical modelling approaches to estimate the rate of detection and false positives from eDNA data. Here, we discuss the importance of controlling for false detection from the early steps of eDNA analyses (laboratory, bioinformatics), to improve the quality of results and allow an efficient use of the site occupancy-detection modelling (SODM) framework for limiting false presences in eDNA analysis. © 2016 John Wiley & Sons Ltd.

  5. Study of Residual Gas Analyser (RGA) Response towards Known Leaks

    NASA Astrophysics Data System (ADS)

    Pathan, Firozkhan S.; Khan, Ziauddin; Semwal, Pratibha; George, Siju; Raval, Dilip C.; Thankey, Prashant L.; Manthena, Himabindu; Yuvakiran, Paravastu; Dhanani, Kalpesh R.

    2012-11-01

    Helium leak testing is the most versatile form of weld qualification test for any vacuum application. Almost every ultra-high vacuum (UHV) system uses this technique to ensure leak tightness of weld joints as well as demountable joints. When a UHV system is operating with many other integrated components, identifying leaks that develop in situ becomes a prime aspect of maintaining the health of the system and continuing experiments. Since online use of a leak detector (LD) has many practical limitations, a residual gas analyser (RGA) can be used as a potential instrument for online leak detection. For this purpose, a correlation between the leak detector and the RGA for a given leak rate is experimentally established. This paper describes the experimental setup and the relationship between the leak detector and the RGA.

  6. Impact of workstations on criticality analyses at ABB combustion engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivations for this change were the improved economics of the workstations and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo-9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.

  7. Angiotensin-converting Enzyme Inhibitor and Statin Medication Use and Incident Mobility Limitation in Community Older Adults. The Health, Aging and Body Composition Study

    PubMed Central

    Gray, Shelly L.; Boudreau, Robert M.; Newman, Anne B.; Studenski, Stephanie A.; Shorr, Ronald I; Bauer, Douglas C.; Simonsick, Eleanor M.; Hanlon, Joseph T

    2012-01-01

    Objective Angiotensin-converting enzyme (ACE) inhibitors and statin medications have been proposed as potential agents to prevent or delay physical disability; yet limited research has evaluated whether such use in older community dwelling adults is associated with a lower risk of incident mobility limitation. Design Longitudinal cohort study Setting Health, Aging and Body Composition (Health ABC) Participants 3055 participants who were well functioning at baseline (e.g., no mobility limitations). Measurements Summated standardized daily doses (low, medium and high) and duration of ACE inhibitor and statin use was computed. Mobility limitation (two consecutive self-reports of having any difficulty walking 1/4 mile or climbing 10 steps without resting) was assessed every 6 months after baseline. Multivariable Cox proportional hazard analyses were conducted adjusting for demographics, health status, and health behaviors. Results At baseline, ACE inhibitors and statins were used by 15.2% and 12.9%, respectively and both increased to over 25% by year 6. Over 6.5 years of follow-up, 49.8% had developed mobility limitation. In separate multivariable models, neither ACE inhibitor (multivariate hazard ratio [HR] 0.95; 95% confidence interval [CI] 0.82–1.09) nor statin use (multivariate HR 1.02; 95% CI 0.87–1.17) was associated with a lower risk for mobility limitation. Similar findings were seen in analyses examining dose- and duration-response relationships and sensitivity analyses restricted to those with hypertension. Conclusions These findings indicate that ACE inhibitors and statins widely prescribed to treat hypertension and hypercholesterolemia, respectively do not lower risk of mobility limitation, an important life quality indicator. PMID:22092102
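
    For readers unfamiliar with the modelling step, the sketch below shows a multivariable Cox proportional hazards fit in Python using the lifelines package; the tiny data frame and variable names are placeholders, not Health ABC data, and the real analysis adjusted for many more covariates.

```python
# A minimal Cox proportional hazards sketch with lifelines; toy data only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_limitation": [2.1, 6.5, 4.0, 6.5, 3.2, 5.8, 1.5, 6.0],
    "limited":             [1,   0,   1,   0,   1,   1,   1,   0],   # 1 = mobility limitation
    "ace_inhibitor":       [0,   1,   0,   1,   1,   0,   0,   1],
    "statin":              [0,   0,   1,   1,   0,   1,   0,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_limitation", event_col="limited")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```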

  8. Integrated analyses for genetic markers of polycystic ovary syndrome with 9 case-control studies of gene expression profiles.

    PubMed

    Lu, Chenqi; Liu, Xiaoqin; Wang, Lin; Jiang, Ning; Yu, Jun; Zhao, Xiaobo; Hu, Hairong; Zheng, Saihua; Li, Xuelian; Wang, Guiying

    2017-01-10

    Due to genetic heterogeneity and variable diagnostic criteria, genetic studies of polycystic ovary syndrome are particularly challenging. Furthermore, lack of sufficiently large cohorts limits the identification of susceptibility genes contributing to polycystic ovary syndrome. Here, we carried out a systematic search of studies deposited in the Gene Expression Omnibus database through August 31, 2016. The present analyses included studies with: 1) patients with polycystic ovary syndrome and normal controls, 2) gene expression profiling of messenger RNA, and 3) sufficient data for our analysis. Ultimately, a total of 9 studies with 13 datasets met the inclusion criteria and were performed for the subsequent integrated analyses. Through comprehensive analyses, there were 13 genetic factors overlapped in all datasets and identified as significant specific genes for polycystic ovary syndrome. After quality control assessment, there were six datasets remained. Further gene ontology enrichment and pathway analyses suggested that differentially expressed genes mainly enriched in oocyte pathways. These findings provide potential molecular markers for diagnosis and prognosis of polycystic ovary syndrome, and need in-depth studies on the exact function and mechanism in polycystic ovary syndrome.

  9. Crush Analyses of Multi-Level Equipment

    DOT National Transportation Integrated Search

    2006-11-06

    Non-linear large deformation crush analyses were conducted on a multi-level cab car typical of those in operation by the Southern California Regional Rail Authority (SCRRA) in California. The motivation for these analyses was a collision, which occur...

  10. Cycle O(CY1991) NLS trade studies and analyses report. Book 2, part 2: Propulsion

    NASA Technical Reports Server (NTRS)

    Cronin, R.; Werner, M.; Bonson, S.; Spring, R.; Houston, R.

    1992-01-01

    This report documents the propulsion system tasks performed in support of the National Launch System (NLS) Cycle O preliminary design activities. The report includes trades and analyses covering the following subjects: (1) Maximum Tank Stretch Study; (2) No LOX Bleed Performance Analysis; (3) LOX Bleed Trade Study; (4) LO2 Tank Pressure Limits; (5) LOX Tank Pressurization System Using Helium; (6) Space Transportation Main Engine (STME) Heat Exchanger Performance; (7) LH2 Passive Recirculation Performance Analysis; (8) LH2 Bleed/Recirculation Study; (9) LH2 Tank Pressure Limits; and (10) LH2 Pressurization System. For each trade study an executive summary and a detailed trade study are provided. For the convenience of the reader, a separate section containing a compilation of only the executive summaries is also provided.

  11. Chunk Limits and Length Limits in Immediate Recall: A Reconciliation

    PubMed Central

    Chen, Zhijian; Cowan, Nelson

    2008-01-01

    Whereas some research on immediate recall of verbal lists has suggested that it is limited by the number of chunks that can be recalled (e.g., Tulving & Patkau, 1962; Cowan, Chen, & Rouder, 2004), other research has suggested that it is limited by the length of the material to be recalled (e.g., Baddeley, Thomson, & Buchanan, 1975). We investigated this question by teaching new paired associations between words to create two-word chunks. The results suggest that both chunk capacity limits and length limits come into play. For the free recall of 12-word lists, 6 pre-learned pairs could be recalled about as well as 6 pre-exposed singletons, suggesting a chunk limit. However, for the serially-ordered recall of 8-word lists, 4 pre-learned pairs could be recalled about as well as 8 pre-exposed singletons, suggesting a length limit. Other conditions yielded intermediate results suggesting that sometimes both limits may operate together. PMID:16393043

  12. Study and Analyses on the Structural Performance of a Balance

    NASA Technical Reports Server (NTRS)

    Karkehabadi, R.; Rhew, R. D.; Hope, D. J.

    2004-01-01

    Strain-gauge balances for use in wind tunnels have been designed at Langley Research Center (LaRC) since its inception. Currently Langley has more than 300 balances available for its researchers. A force balance is inherently a critically stressed component due to the requirements of measurement sensitivity. The strain-gauge balances have been used in Langley's wind tunnels for a wide variety of aerodynamic tests, and the designs encompass a large array of sizes, loads, and environmental effects. There are six degrees of freedom that a balance has to measure. The balance's task to measure these six degrees of freedom has introduced challenging work in transducer development technology areas. As the emphasis increases on improving aerodynamic performance of all types of aircraft and spacecraft, the demand for improved balances is at the forefront. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses and research done at LaRC that influence structural integrity of the balances. The analyses are helpful in understanding the overall behavior of existing balances and can be used in the design of new balances to enhance performance. Initially, a maximum load combination was used for a linear structural analysis. When nonlinear effects were encountered, the analysis was extended to include nonlinearities using MSC.Nastran. Because most of the balances are designed using Pro/Mechanica, it is desirable and efficient to use Pro/Mechanica for stress analysis. However, Pro/Mechanica is limited to linear analysis. Both Pro/Mechanica and MSC.Nastran are used for analyses in the present work. The structural integrity of balances and the possibility of modifying existing balances to enhance structural integrity are investigated.

  13. Validating Experimental and Theoretical Langmuir Probe Analyses

    NASA Astrophysics Data System (ADS)

    Pilling, Lawrence Stuart; Carnegie, Dale

    2004-11-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a DC discharge plasma over a wide variety of conditions. This discharge contains a dual temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital motion limited (OML) is approximately the same as the radial motion gradients. An analysis of the gradients from the radial motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature. Only the position of the space charge potential is necessary to determine the applicable theory.

  14. Validating experimental and theoretical Langmuir probe analyses

    NASA Astrophysics Data System (ADS)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.

  15. Exploring the limits of EDS microanalysis: rare earth element analyses

    NASA Astrophysics Data System (ADS)

    Ritchie, N. W. M.; Newbury, D. E.; Lowers, H.; Mengason, M.

    2018-01-01

    It is a great time to be a microanalyst. After a few decades of incremental progress in energy-dispersive X-ray spectrometry (EDS), the last decade has seen the accuracy and precision surge forward. Today, the question is not whether EDS is generally useful but to identify the types of problems for which wavelength-dispersive X-ray spectrometry remains the better choice. The full extent of EDS’s capabilities has surprised many. Low Z, low energy, and trace element detection have been demonstrated even in the presence of extreme peak interferences. In this paper, we will summarise the state-of-the-art and investigate a challenging problem domain, the analysis of minerals bearing multiple rare-earth elements.

  16. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.
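
    The cost-benefit logic behind such prioritization can be sketched in a few lines: discounted expected flood losses avoided by a protection measure are compared with its cost. The numbers below are illustrative placeholders, not results from the Mexico study.

```python
# A toy benefit-cost calculation for a flood-protection investment.
def npv(cash_flows, rate):
    """Net present value of a list of yearly cash flows at a fixed discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

horizon = 50                      # years of protection considered
rate = 0.05                       # assumed discount rate
upfront_cost = 120e6              # hypothetical dike upgrade cost, year 0
annual_avoided_loss = 9e6         # hypothetical expected annual damage reduction

benefits = npv([annual_avoided_loss] * horizon, rate)
print(f"benefit-cost ratio = {benefits / upfront_cost:.2f}")  # > 1 favours the investment
```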

  17. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico.

    PubMed

    Haer, Toon; Botzen, W J Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M; Ward, Philip J

    2018-06-13

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Author(s).

  18. Reference limits for urinary fractional excretion of electrolytes in adult non-racing Greyhound dogs.

    PubMed

    Bennett, S L; Abraham, L A; Anderson, G A; Holloway, S A; Parry, B W

    2006-11-01

    To determine reference limits for urinary fractional excretion of electrolytes in Greyhound dogs. Urinary fractional excretion was calculated using a spot clearance method preceded by a 16 to 20 hour fast in 48 Greyhound dogs. Raw data, analysed using a bootstrap estimate, were used to calculate the reference limits. The observed range for urinary fractional excretion in Greyhound dogs was 0.0 to 0.77% for sodium, 0.9 to 14.7% for potassium, 0 to 0.66% for chloride, 0.03 to 0.22% for calcium and 0.4 to 20.1% for phosphate. Expressed as percentages, the suggested reference limits for fractional excretion in Greyhound dogs are as follows: sodium ≤ 0.72, potassium ≤ 12.2, chloride ≤ 0.55, calcium ≤ 0.13 and phosphate ≤ 16.5. Veterinary practitioners may use these reference limits for urinary electrolyte fractional excretion when investigating renal tubular disease in Greyhound dogs.
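
    The abstract does not spell out the calculation, but the spot (fractional clearance) method it refers to is conventionally computed from paired urine and plasma concentrations of the electrolyte and creatinine:

```latex
% Standard spot formula for urinary fractional excretion of electrolyte x
\mathrm{FE}_{x}\,(\%) \;=\; \frac{U_{x}\times P_{\mathrm{Cr}}}{P_{x}\times U_{\mathrm{Cr}}} \times 100
```

    where U and P denote urine and plasma concentrations. With hypothetical values of, say, U_Na = 40 mmol/L, P_Na = 145 mmol/L, U_Cr = 30 mmol/L and P_Cr = 0.08 mmol/L, this gives FE_Na of about 0.07%, within the suggested Greyhound limit for sodium.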

  19. Comparative effectiveness and cost-effectiveness analyses frequently agree on value.

    PubMed

    Glick, Henry A; McElligott, Sean; Pauly, Mark V; Willke, Richard J; Bergquist, Henry; Doshi, Jalpa; Fleisher, Lee A; Kinosian, Bruce; Perfetto, Eleanor; Polsky, Daniel E; Schwartz, J Sanford

    2015-05-01

    The Patient-Centered Outcomes Research Institute, known as PCORI, was established by Congress as part of the Affordable Care Act (ACA) to promote evidence-based treatment. Provisions of the ACA prohibit the use of a cost-effectiveness analysis threshold and quality-adjusted life-years (QALYs) in PCORI comparative effectiveness studies, which has been understood as a prohibition on support for PCORI's conducting conventional cost-effectiveness analyses. This constraint complicates evidence-based choices where incremental improvements in outcomes are achieved at increased costs of care. How frequently this limitation inhibits efficient cost containment, also a goal of the ACA, depends on how often more effective treatment is not cost-effective relative to less effective treatment. We examined the largest database of studies of comparisons of effectiveness and cost-effectiveness to see how often there is disagreement between the more effective treatment and the cost-effective treatment, for various thresholds that may define good value. We found that under the benchmark assumption, disagreement between the two types of analyses occurs in 19 percent of cases. Disagreement is more likely to occur if a treatment intervention is musculoskeletal and less likely to occur if it is surgical or involves secondary prevention, or if the study was funded by a pharmaceutical company. Project HOPE—The People-to-People Health Foundation, Inc.
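
    For reference (not spelled out in the article), a conventional cost-effectiveness analysis of the kind the ACA provision constrains compares the incremental cost-effectiveness ratio of the more effective treatment against a willingness-to-pay threshold λ, with effects E typically measured in QALYs:

```latex
% Incremental cost-effectiveness ratio and the threshold decision rule (for \Delta E > 0)
\mathrm{ICER} \;=\; \frac{C_{1}-C_{0}}{E_{1}-E_{0}},
\qquad
\text{adopt treatment 1 if } \mathrm{ICER} < \lambda
\;\Longleftrightarrow\;
\lambda\,\Delta E - \Delta C > 0 .
```

    Disagreement between effectiveness and cost-effectiveness, as counted in the study, corresponds to cases where ΔE > 0 but the ICER exceeds the chosen threshold.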

  20. Mortality Among Workers Exposed to Toluene Diisocyanate in the US Polyurethane Foam Industry: Update and Exposure-Response Analyses

    PubMed Central

    Pinkerton, Lynne E.; Yiin, James H.; Daniels, Robert D.; Fent, Kenneth W.

    2017-01-01

    Background Mortality among 4,545 toluene diisocyanate (TDI)-exposed workers was updated through 2011. The primary outcome of interest was lung cancer. Methods Life table analyses, including internal analyses by exposure duration and cumulative TDI exposure, were conducted. Results Compared with the US population, all-cause and all-cancer mortality was increased. Lung cancer mortality was increased but was not associated with exposure duration or cumulative TDI exposure. In post hoc analyses, lung cancer mortality was associated with employment duration in finishing jobs, but not in finishing jobs involving cutting polyurethane foam. Conclusions Dermal exposure, in contrast to inhalational exposure, to TDI is expected to be greater in finishing jobs and may play a role in the observed increase in lung cancer mortality. Limitations include the lack of smoking data, uncertainty in the exposure estimates, and exposure estimates that reflected inhalational exposure only. PMID:27346061

  1. Exercise Ventilatory Limitation: The Role Of Expiratory Flow Limitation

    PubMed Central

    Babb, Tony G.

    2012-01-01

    Ventilatory limitation to exercise remains an important unresolved clinical issue; as a result, many individuals misinterpret the effects of expiratory flow limitation as an all-or-nothing phenomenon. Expiratory flow limitation is not all-or-none; approaching maximal expiratory flow can have important effects not only on ventilatory capacity but also on breathing mechanics, ventilatory control, and possibly exertional dyspnea and exercise intolerance. PMID:23038244

  2. Performance evaluation of the automated nucleated red blood cell enumeration on Sysmex XN analyser.

    PubMed

    Tantanate, C; Klinbua, C

    2015-06-01

    The presence of nucleated red blood cells (NRBCs) in peripheral blood is associated with pathological conditions and leads to overestimation of the white blood cell count in automated haematology analysers (HA). The authors evaluated NRBC enumeration by a new HA, the Sysmex XN (XN), to assess its precision and comparability to the manual count (MC) across a range of NRBC values. Specimens that were initially NRBC positive were included. For precision assessment, 8 levels of NRBCs were repeatedly analysed. For the comparison study, 234 specimens were analysed by both XN and MC. In the precision study, the coefficient of variation ranged from 14% to 45.6% for MC and from 1.2% to 4.4% for XN. In the comparison study between XN and MC, NRBCs ranged from 0% to 612.5%. Regression analysis demonstrated an r(2) of 0.98. A mean bias of 14.1%, with 95% limits of agreement between -48.76% and 76.95%, was found. The NRBC counts from the XN were more in accordance with MC when NRBCs were lower than 200%, with a concordance rate of 94.2%. Automated NRBC enumeration by the XN was precise and could replace the traditional MC, especially for specimens with NRBCs lower than 200%. © 2014 John Wiley & Sons Ltd.
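
    The bias and 95% limits of agreement quoted above come from a Bland-Altman style comparison; a minimal sketch of that calculation is shown below with made-up paired counts, not the study's specimens.

```python
# Bland-Altman bias and 95% limits of agreement for paired measurements.
import numpy as np

manual = np.array([2.0, 15.0, 40.0, 120.0, 180.0, 300.0])   # NRBC/100 WBC, manual count
xn     = np.array([2.4, 16.5, 43.0, 131.0, 196.0, 330.0])   # NRBC/100 WBC, analyser

diff = xn - manual
bias = diff.mean()
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f}, 95% limits of agreement = ({lower:.2f}, {upper:.2f})")
```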

  3. In-situ generation of carrier gases for scientific analyses on Mars

    NASA Technical Reports Server (NTRS)

    Finn, J. E.; Sridhar, K. R.

    1997-01-01

    The search for useful raw materials on planetary surfaces will involve various scientific analyses of soil and rock samples. The devices performing these measurements often require inert carrier gases for moving analytes and purging instrumentation. At present, the carrier or sweep gas must be carried from Earth in a compressed gas cylinder, and so the supply of this depletable resource sets a hard limit on the (flexible) life span of the experiment. If a suitable carrier gas could be produced in-situ, then the scientific return of exploration missions could be extended and enhanced greatly. Many more samples could be analyzed, long-ranging rovers could have independent gas supplies, and designs could have added flexibility with respect to gas consumption.

  4. What Is the Criterion of Interest in Identifying Limited-English Speaking Students: Language Dominance or Proficiency?

    ERIC Educational Resources Information Center

    Estes, Gary D.; Estes, Carole

    The issue of using language proficiency or language dominance to assess programs for high school students with limited English speaking backgrounds is addressed. The development and initial analyses of the Competency Based Oral Language Assessment (COLA) are discussed. Three components of oral language are rated separately: semantics; syntax and…

  5. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  6. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  7. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  8. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  9. Character Recognition Method by Time-Frequency Analyses Using Writing Pressure

    NASA Astrophysics Data System (ADS)

    Watanabe, Tatsuhito; Katsura, Seiichiro

    With the development of information and communication technology, personal verification is becoming increasingly important. In the future ubiquitous society, terminals handling personal information will require personal verification technology. The signature is one personal verification method; however, a signature contains only a limited number of characters, so it can be forged relatively easily, and personal identification from handwriting alone is difficult. This paper proposes a “haptic pen” that extracts the writing pressure and presents a character recognition method based on time-frequency analyses. Although the shapes of characters written by different writers are similar, differences appear in the time-frequency domain. As a result, the proposed character recognition can be used for more exact personal identification. The experimental results showed the viability of the proposed method.
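
    A minimal sketch of a time-frequency analysis of a writing-pressure signal using scipy's spectrogram; the sampling rate and the synthetic signal are assumptions, and the feature extraction used in the paper may differ.

```python
# Time-frequency analysis of a (synthetic) writing-pressure signal.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                    # Hz, assumed sampling rate of the pen
t = np.arange(0, 2.0, 1 / fs)
pressure = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)

f, seg_times, Sxx = spectrogram(pressure, fs=fs, nperseg=256, noverlap=192)
# Sxx[i, j] is the power at frequency f[i] and time seg_times[j]; writer-specific
# structure would appear as differences in this map even for similar-looking characters.
print(Sxx.shape)
```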

  10. Reappraisal of previously reported meta-analyses on antibiotic prophylaxis for low-risk laparoscopic cholecystectomy: an overview of systematic reviews

    PubMed Central

    Matsui, Yoichi; Satoi, Sohei; Hirooka, Satoshi; Kosaka, Hisashi; Kawaura, Takayuki; Kitawaki, Tomoki

    2018-01-01

    Introduction Many researchers have addressed overdosage and inappropriate use of antibiotics. Many meta-analyses have investigated antibiotic prophylaxis for low-risk laparoscopic cholecystectomy with the aim of reducing unnecessary antibiotic use. Most of these meta-analyses have concluded that prophylactic antibiotics are not required for low-risk laparoscopic cholecystectomies. This study aimed to assess the validity of this conclusion by systematically reviewing these meta-analyses. Methods A systematic review was undertaken. Searches were limited to meta-analyses and systematic reviews. PubMed and Cochrane Library electronic databases were searched from inception until March 2016 using the following keyword combinations: ‘antibiotic prophylaxis’, ‘laparoscopic cholecystectomy’ and ‘systematic review or meta-analysis’. Two independent reviewers selected meta-analyses or systematic reviews evaluating prophylactic antibiotics for laparoscopic cholecystectomy. All of the randomised controlled trials (RCTs) analysed in these meta-analyses were also reviewed. Results Seven meta-analyses regarding prophylactic antibiotics for low-risk laparoscopic cholecystectomy that had examined a total of 28 RCTs were included. Review of these meta-analyses revealed 48 miscounts of the number of outcomes. Six RCTs were inappropriate for the meta-analyses; one targeted patients with acute cholecystitis, another measured inappropriate outcomes, the original source of a third was not found and the study protocols of the remaining three were not appropriate for the meta-analyses. After correcting the above miscounts and excluding the six inappropriate RCTs, pooled risk ratios (RRs) were recalculated. These showed that, contrary to what had previously been concluded, antibiotics significantly reduced the risk of postoperative infections. The rates of surgical site, distant and overall infections were all significantly reduced by antibiotic administration (RR (95% CI); 0

  11. Archival analyses of eyewitness identification test outcomes: what can they tell us about eyewitness memory?

    PubMed

    Horry, Ruth; Halford, Paul; Brewer, Neil; Milne, Rebecca; Bull, Ray

    2014-02-01

    Several archival studies of eyewitness identification have been conducted, but the results have been inconsistent and contradictory. We identify some avoidable pitfalls that have been present in previous analyses and present new data that address these pitfalls. We explored associations among various estimator variables and lineup outcomes for 833 "real life" lineups, including 588 lineups in which corroborating evidence of the suspect's guilt existed. Suspect identifications were associated with exposure duration, viewing distance, and the age of the witness. Nonidentifications were associated with the number of perpetrators. We also consider some of the inherent, unavoidable limitations with archival studies and consider what such studies can really tell researchers. We conclude that differences in sampling prohibit sensible comparisons between the results of laboratory and archival studies, and that the informational value of archival studies is actually rather limited.

  12. System-Wide Adaptations of Desulfovibrio alaskensis G20 to Phosphate-Limited Conditions

    DOE PAGES

    Bosak, Tanja; Schubotz, Florence; de Santiago-Torio, Ana; ...

    2016-12-28

    The prevalence of lipids devoid of phosphorus suggests that the availability of phosphorus limits microbial growth and activity in many anoxic, stratified environments. To better understand the response of anaerobic bacteria to phosphate limitation and starvation, this study combines microscopic and lipid analyses with the measurements of fitness of pooled barcoded transposon mutants of the model sulfate reducing bacterium Desulfovibrio alaskensis G20. Phosphate-limited G20 has lower growth rates and replaces more than 90% of its membrane phospholipids by a mixture of monoglycosyl diacylglycerol (MGDG), glycuronic acid diacylglycerol (GADG) and ornithine lipids, lacks polyphosphate granules, and synthesizes other cellular inclusions. Analyses of pooled and individual mutants reveal the importance of the high-affinity phosphate transport system (the Pst system), PhoR, and glycolipid and ornithine lipid synthases during phosphate limitation. The phosphate-dependent synthesis of MGDG in G20 and the widespread occurrence of the MGDG/GADG synthase among sulfate-reducing δ-Proteobacteria implicate these microbes in the production of abundant MGDG in anaerobic environments where the concentrations of phosphate are lower than 10 μM. Numerous predicted changes in the composition of the cell envelope and systems involved in transport, maintenance of cytoplasmic redox potential, central metabolism and regulatory pathways also suggest an impact of phosphate limitation on the susceptibility of sulfate reducing bacteria to other anthropogenic or environmental stresses.

  13. Organized Sport Participation and Physical Activity Levels among Adolescents with Functional Limitations

    PubMed Central

    2017-01-01

    Sufficient and regular physical activity is considered a protective factor, reducing the onset of secondary disability conditions in adolescents with chronic diseases and functional limitations. The aim of this study was to explore whether participation in organized sport is associated with higher levels of physical activity in adolescents with functional limitations, based on a nationally representative sample. Data from two data collection rounds (2002 and 2010) of the Health Behaviour in School-aged Children (HBSC) study in Finland were pooled for adolescents aged 13 to 15 years with functional limitations (n = 1041). Differences in self-reported physical activity over the past week and participation in organized sport activity were analysed for each function. Overall, four in ten (n = 413) participated in organized sport and were significantly (p < 0.001) more physically active (mean = 4.92 days, SD = 1.81) than their non-participating (mean = 3.29, SD = 1.86) peers with functional limitations. Despite low population prevalence, adolescents with epilepsy or visual impairments were the least active if they were not participating in organized sport, yet were the most active if they were involved in organized sport. Participating in organized sport appears to be an important factor promoting resources for maintaining recommended levels of physical activity in Finnish adolescents with functional limitations. PMID:29910441

  14. System-Wide Adaptations of Desulfovibrio alaskensis G20 to Phosphate-Limited Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosak, Tanja; Schubotz, Florence; de Santiago-Torio, Ana

    The prevalence of lipids devoid of phosphorus suggests that the availability of phosphorus limits microbial growth and activity in many anoxic, stratified environments. To better understand the response of anaerobic bacteria to phosphate limitation and starvation, this study combines microscopic and lipid analyses with the measurements of fitness of pooled barcoded transposon mutants of the model sulfate reducing bacterium Desulfovibrio alaskensis G20. Phosphate-limited G20 has lower growth rates and replaces more than 90% of its membrane phospholipids by a mixture of monoglycosyl diacylglycerol (MGDG), glycuronic acid diacylglycerol (GADG) and ornithine lipids, lacks polyphosphate granules, and synthesizes other cellular inclusions. Analyses of pooled and individual mutants reveal the importance of the high-affinity phosphate transport system (the Pst system), PhoR, and glycolipid and ornithine lipid synthases during phosphate limitation. The phosphate-dependent synthesis of MGDG in G20 and the widespread occurrence of the MGDG/GADG synthase among sulfate-reducing δ-Proteobacteria implicate these microbes in the production of abundant MGDG in anaerobic environments where the concentrations of phosphate are lower than 10 μM. Numerous predicted changes in the composition of the cell envelope and systems involved in transport, maintenance of cytoplasmic redox potential, central metabolism and regulatory pathways also suggest an impact of phosphate limitation on the susceptibility of sulfate reducing bacteria to other anthropogenic or environmental stresses.

  15. Adenylylation of mycobacterial Glnk (PII) protein is induced by nitrogen limitation

    PubMed Central

    Williams, Kerstin J.; Bennett, Mark H.; Barton, Geraint R.; Jenkins, Victoria A.; Robertson, Brian D.

    2013-01-01

    Summary PII proteins are pivotal regulators of nitrogen metabolism in most prokaryotes, controlling the activities of many targets, including nitrogen assimilation enzymes, two component regulatory systems and ammonium transport proteins. Escherichia coli contains two PII-like proteins, PII (product of glnB) and GlnK, both of which are uridylylated under nitrogen limitation at a conserved Tyrosine-51 residue by GlnD (a uridylyl transferase). PII-uridylylation in E. coli controls glutamine synthetase (GS) adenylylation by GlnE and mediates the NtrB/C transcriptomic response. Mycobacteria contain only one PII protein (GlnK) which in environmental Actinomycetales is adenylylated by GlnD under nitrogen limitation. However in mycobacteria, neither the type of GlnK (PII) covalent modification nor its precise role under nitrogen limitation is known. In this study, we used LC-Tandem MS to analyse the modification state of mycobacterial GlnK (PII), and demonstrate that during nitrogen limitation GlnK from both non-pathogenic Mycobacterium smegmatis and pathogenic Mycobacterium tuberculosis is adenylylated at the Tyrosine-51 residue; we also show that GlnD is the adenylyl transferase enzyme responsible. Further analysis shows that in contrast to E. coli, GlnK (PII) adenylylation in M. tuberculosis does not regulate GS adenylylation, nor does it mediate the transcriptomic response to nitrogen limitation. PMID:23352854

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or timing of cash flows are uncertain and are not fixed under § 436.14, Federal agencies may examine the impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order...

  17. Resolving anatomical and functional structure in human brain organization: identifying mesoscale organization in weighted network representations.

    PubMed

    Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M

    2014-10-01

    Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection that enables us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
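
    A small sketch of sweeping a resolution parameter in modularity-based community detection, in the spirit of the multi-resolution diagnostics described above; it assumes a recent networkx (2.7 or later, where greedy_modularity_communities and modularity accept a resolution argument) and uses a toy graph rather than brain connectivity data.

```python
# Resolution sweep for modularity-based community detection on a toy graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()

for gamma in (0.5, 1.0, 2.0):                        # resolution sweep
    comms = greedy_modularity_communities(G, resolution=gamma)
    q = modularity(G, comms, resolution=gamma)
    print(f"resolution={gamma}: {len(comms)} communities, Q={q:.3f}")
```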

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize flexibility-rigidity index to access the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
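
    A minimal sketch of computing persistent homology on a point cloud with the GUDHI library, sweeping the maximum edge length as a crude stand-in for the resolution and filtration choices discussed above; the point cloud and parameters are illustrative assumptions, not the datasets used in the work.

```python
# Rips-complex persistence at several filtration scales on a toy point cloud.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))                   # toy 3D point cloud

for max_edge in (0.5, 1.0, 2.0):                     # crude "resolution" sweep
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    st = rips.create_simplex_tree(max_dimension=2)
    diag = st.persistence()                          # list of (dimension, (birth, death))
    print(max_edge, len(diag), "persistence pairs")
```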

  19. Wavelet processing techniques for digital mammography

    NASA Astrophysics Data System (ADS)

    Laine, Andrew F.; Song, Shuwu

    1992-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse-to-fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
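
    A minimal sketch added for illustration (not the paper's code; assumes the PyWavelets package): multiscale enhancement in the spirit described above, reweighting detail coefficients with scale-dependent gains before reconstruction.

        import numpy as np
        import pywt

        def enhance(image, wavelet="db4", levels=3, gains=(1.0, 2.0, 4.0)):
            """Amplify wavelet detail coefficients; gains[0] applies to the coarsest detail level."""
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            out = [coeffs[0]]  # approximation (low-frequency) band left untouched
            for g, (cH, cV, cD) in zip(gains, coeffs[1:]):
                out.append((g * cH, g * cV, g * cD))
            return pywt.waverec2(out, wavelet)

        # Toy usage on a random stand-in image.
        img = np.random.default_rng(0).random((256, 256))
        enhanced = enhance(img)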

  20. Combustion Stability Analyses of Coaxial Element Injectors with Liquid Oxygen/Liquid Methane Propellants

    NASA Technical Reports Server (NTRS)

    Hulka, J. R.

    2010-01-01

    Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for in-space vehicles. This propellant combination has not been previously used in a flight-qualified engine system, so limited test data and analysis results are available at this stage of early development. NASA has funded several hardware-oriented activities with oxygen and methane propellants over the past several years with the Propulsion and Cryogenic Advanced Development (PCAD) project, under the Exploration Technology Development Program. As part of this effort, the NASA Marshall Space Flight Center has conducted combustion stability analyses of several of the configurations. This paper presents test data and analyses of combustion stability from the recent PCAD-funded test programs at the NASA MSFC. These test programs used swirl coaxial element injectors with liquid oxygen and liquid methane propellants. Oxygen was injected conventionally in the center of the coaxial element, and swirl was provided by tangential entry slots. Injectors with 28-element and 40-element patterns were tested with several configurations of combustion chambers, including ablative and calorimeter spool sections, and several configurations of fuel injection design. Low frequency combustion instability (chug) occurred with both injectors, and high-frequency combustion instability occurred at the first tangential (1T) transverse mode with the 40-element injector. In most tests, a transition between high-amplitude chug with gaseous methane flow and low-amplitude chug with liquid methane flow was readily observed. Chug analyses of both conditions were conducted using techniques from Wenzel and Szuch and from the Rocket Combustor Interactive Design and Analysis (ROCCID) code. The 1T mode instability occurred in several tests and was apparent by high-frequency pressure measurements as well as dramatic increases in calorimeter-measured heat flux

  1. Revise and resubmit: How real-time parsing limitations influence grammar acquisition

    PubMed Central

    Pozzan, Lucia; Trueswell, John C.

    2015-01-01

    We present the results from a three-day artificial language learning study on adults. The study examined whether sentence-parsing limitations, in particular, difficulties revising initial syntactic/semantic commitments during comprehension, shape learners’ ability to acquire a language. Findings show that both comprehension and production of morphology pertaining to sentence argument structure are delayed when this morphology consistently appears at the end, rather than at the beginning, of sentences in otherwise identical grammatical systems. This suggests that real-time processing constraints impact acquisition; morphological cues that tend to guide linguistic analyses are easier to learn than cues that revise these analyses. Parallel performance in production and comprehension indicates that parsing constraints affect grammatical acquisition, not just real-time commitments. Properties of the linguistic system (e.g., ordering of cues within a sentence) interact with the properties of the cognitive system (cognitive control and conflict-resolution abilities) and together affect language acquisition. PMID:26026607

  2. Rhythmic Interlimb Coordination Impairments and the Risk for Developing Mobility Limitations.

    PubMed

    James, Eric G; Leveille, Suzanne G; Hausdorff, Jeffrey M; Travison, Thomas; Kennedy, David N; Tucker, Katherine L; Al Snih, Soham; Markides, Kyriakos S; Bean, Jonathan F

    2017-08-01

    The identification of novel rehabilitative impairments that are risk factors for mobility limitations may improve their prevention and treatment among older adults. We tested the hypothesis that impaired rhythmic interlimb ankle and shoulder coordination are risk factors for subsequent mobility limitations among older adults. We conducted a 1-year prospective cohort study of community-dwelling older adults (N = 99) aged 67 years and older who did not have mobility limitations (Short Physical Performance Battery score > 9) at baseline. Participants performed antiphase coordination of the right and left ankles or shoulders while paced by an auditory metronome. Using multivariable logistic regression, we determined odds ratios (ORs) for mobility limitations at 1-year follow-up as a function of coordination variability and asymmetry. After adjusting for age, sex, body mass index, Mini-Mental State Examination score, number of chronic conditions, and baseline Short Physical Performance Battery score, ORs were significant for developing mobility limitations based on a 1 SD difference in the variability of ankle (OR = 1.88; 95% confidence interval [CI]: 1.16-3.05) and shoulder (OR = 1.96; 95% CI: 1.17-3.29) coordination. ORs were significant for asymmetry of shoulder (OR = 2.11; 95% CI: 1.25-3.57), but not ankle (OR = 0.95; 95% CI: 0.59-1.55) coordination. Similar results were found in unadjusted analyses. The results support our hypothesis that impaired interlimb ankle and shoulder coordination are risk factors for the development of mobility limitations. Future work is needed to further examine the peripheral and central mechanisms underlying this relationship and to test whether enhancing coordination alters mobility limitations. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
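
    For illustration only (not the study's analysis code; assumes the statsmodels and pandas packages, with a fabricated toy dataset and a deliberately abbreviated covariate set): the kind of logistic model behind the reported odds ratios, with the exposure scaled to 1 SD so the exponentiated coefficient reads as an OR per standard deviation.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 99
        coord_var = rng.normal(size=n)        # coordination variability (arbitrary units)
        age = rng.normal(75.0, 5.0, size=n)
        # Simulated outcome: 1 = mobility limitation at 1-year follow-up.
        p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.6 * coord_var)))
        limitation = rng.binomial(1, p)

        X = sm.add_constant(pd.DataFrame({"coord_var_sd": coord_var / coord_var.std(), "age": age}))
        fit = sm.Logit(limitation, X).fit(disp=0)
        or_per_sd = np.exp(fit.params["coord_var_sd"])
        lo, hi = np.exp(fit.conf_int().loc["coord_var_sd"])
        print(f"OR per 1 SD = {or_per_sd:.2f} (95% CI {lo:.2f}-{hi:.2f})")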

  3. The Malpelo Plate Hypothesis and implications for nonclosure of the Cocos-Nazca-Pacific plate motion circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Tuo; Gordon, Richard G.; Mishra, Jay K.; Wang, Chengzu

    2017-08-01

    Using global multiresolution topography, we estimate new transform-fault azimuths along the Cocos-Nazca plate boundary and show that the direction of relative plate motion is 3.3° ± 1.8° (95% confidence limits) clockwise of prior estimates. The new direction of Cocos-Nazca plate motion is, moreover, 4.9° ± 2.7° (95% confidence limits) clockwise of the azimuth of the Panama transform fault. We infer that the plate east of the Panama transform fault is not the Nazca plate but instead is a microplate that we term the Malpelo plate. With the improved transform-fault data, the nonclosure of the Nazca-Cocos-Pacific plate motion circuit is reduced from 15.0 ± 3.8 mm a⁻¹ to 11.6 ± 3.8 mm a⁻¹ (95% confidence limits). The nonclosure seems too large to be due entirely to horizontal thermal contraction of oceanic lithosphere and suggests that one or more additional plate boundaries remain to be discovered.
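
    For readers unfamiliar with plate-circuit nonclosure, the closure condition being tested can be stated compactly (notation added here, not taken from the paper; C = Cocos, N = Nazca, P = Pacific):

        \[
          \boldsymbol{\omega}_{CN} + \boldsymbol{\omega}_{NP} + \boldsymbol{\omega}_{PC} = \Delta\boldsymbol{\omega},
          \qquad
          v_{\mathrm{nonclosure}} = \lvert \Delta\boldsymbol{\omega} \times \mathbf{r} \rvert ,
        \]

    where exact closure would give \(\Delta\boldsymbol{\omega} = \mathbf{0}\) and the residual is expressed as a linear velocity at a position vector \(\mathbf{r}\) on the shared boundary.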

  4. Statistical properties and pre-hit dynamics of price limit hits in the Chinese stock markets.

    PubMed

    Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing

    2015-01-01

    Price limit trading rules are adopted in some stock markets (especially emerging markets) trying to cool off traders' short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behaviors of price limit hits are not well studied partially due to the fact that main stock markets such as the US markets and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before a stock price hits its limits. We compare the properties of up-limit hits and down-limit hits. We also divide the whole period into three bullish periods and three bearish periods to unveil possible differences during bullish and bearish market states. To uncover the impacts of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We find that price continuation is much more likely to occur than price reversal on the next trading day after a limit-hitting day, especially for down-limit hits, which has potential practical value for market practitioners.

  5. Native and nonnative fish populations of the Colorado River are food limited--evidence from new food web analyses

    USGS Publications Warehouse

    Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O.; Baxter, Colden V.; Rosi-Marshall, Emma J.

    2013-01-01

    Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.

  6. Parsimony and Model-Based Analyses of Indels in Avian Nuclear Genes Reveal Congruent and Incongruent Phylogenetic Signals

    PubMed Central

    Yuri, Tamaki; Kimball, Rebecca T.; Harshman, John; Bowie, Rauri C. K.; Braun, Michael J.; Chojnowski, Jena L.; Han, Kin-Lan; Hackett, Shannon J.; Huddleston, Christopher J.; Moore, William S.; Reddy, Sushma; Sheldon, Frederick H.; Steadman, David W.; Witt, Christopher C.; Braun, Edward L.

    2013-01-01

    Insertion/deletion (indel) mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP) and a simple maximum likelihood (ML) framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or reduce conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions. PMID:24832669

  7. Spectroscopic Analyses of the Biofuels-Critical Phytochemical Coniferyl Alcohol and Its Enzyme-Catalyzed Oxidation Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achyuthan, Komandoor; Adams, Paul; Simmons, Blake

    2011-07-13

    Lignin composition (monolignol types of coniferyl, sinapyl or p-coumaryl alcohol) is causally related to biomass recalcitrance. We describe multiwavelength (220, 228, 240, 250, 260, 290, 295, 300, 310 or 320 nm) absorption spectroscopy of coniferyl alcohol and its laccase- or peroxidase-catalyzed products during real time kinetic, pseudo-kinetic and endpoint analyses, in optical turn on or turn off modes, under acidic or basic conditions. Reactions in microwell plates and 100 μL volumes demonstrated assay miniaturization and high throughput screening capabilities. Bathochromic and hypsochromic shifts along with hyperchromicity or hypochromicity accompanied enzymatic oxidations by laccase or peroxidase. The limits of detection and quantitation of coniferyl alcohol averaged 2.4 and 7.1 μM respectively, with linear trend lines over 3 to 4 orders of magnitude. Coniferyl alcohol oxidation was evident within 10 minutes or with 0.01 μg/mL laccase and 2 minutes or 0.001 μg/mL peroxidase. Detection limit improved to 1.0 μM coniferyl alcohol with Km of 978.7 ± 150.7 μM when examined at 260 nm following 30 minutes oxidation with 1.0 μg/mL laccase. Our assays utilized the intrinsic spectroscopic properties of coniferyl alcohol or its oxidation products for enabling detection, without requiring chemical synthesis or modification of the substrate or product(s). These studies facilitate lignin compositional analyses and augment pretreatment strategies for reducing biomass recalcitrance.
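
    As an editorial illustration (not the authors' analysis; assumes SciPy and simulated data), the quoted Km is the kind of parameter obtained by fitting the Michaelis-Menten rate law v = Vmax·S/(Km + S) to initial-rate measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(S, Vmax, Km):
            return Vmax * S / (Km + S)

        # Simulated substrate concentrations (uM) and noisy initial rates.
        S = np.array([10, 25, 50, 100, 250, 500, 1000, 2000], dtype=float)
        rng = np.random.default_rng(0)
        v = michaelis_menten(S, Vmax=1.0, Km=980.0) * (1.0 + 0.05 * rng.normal(size=S.size))

        popt, pcov = curve_fit(michaelis_menten, S, v, p0=[1.0, 500.0])
        perr = np.sqrt(np.diag(pcov))
        print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, Km = {popt[1]:.0f} +/- {perr[1]:.0f} uM")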

  8. Efforts to Increase Social Contact in Persons with Profound Intellectual and Multiple Disabilities: Analysing Individual Support Plans in the Netherlands

    ERIC Educational Resources Information Center

    Kamstra, Aafke; van der Putten, Annette A. J.; Vlaskamp, Carla

    2017-01-01

    Most people with profound intellectual and multiple disabilities (PIMD) have limited social contact and it is unclear what is done to maintain or increase these contacts. Individual support planning (ISP) can be used in the systematic enhancement of social contacts. This study analyses the content of ISPs with respect to the social contacts of…

  9. Interval Graph Limits

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2015-01-01

    We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
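
    For context, the standard W-random graph construction whose coordinate distribution the authors vary can be written as follows (standard graphon notation, added here rather than taken from the paper):

        \[
          x_1, \dots, x_n \overset{\mathrm{iid}}{\sim} \mu ,
          \qquad
          \Pr\bigl[\{i,j\} \in E \mid x_i, x_j\bigr] = W(x_i, x_j) \quad (i < j,\ \text{independently}),
        \]

    with \(\mu\) uniform on \((0,1)\) in the usual graph-limit setting and replaced by other distributions in this work.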

  10. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding

    PubMed Central

    Bottle, Alex; Jarman, Brian; Aylin, Paul

    2011-01-01

    Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587
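
    A minimal sketch of the underlying arithmetic, added for illustration (the risk model itself is not reproduced here): an HSMR is observed deaths divided by the deaths expected from a case-mix-adjusted risk model, scaled to 100, so any coding change that shifts the expected count moves the ratio.

        import numpy as np

        rng = np.random.default_rng(0)
        predicted_risk = rng.beta(2, 18, size=10_000)  # per-admission death risk from a case-mix model
        observed_deaths = 460
        expected_deaths = predicted_risk.sum()
        hsmr = 100.0 * observed_deaths / expected_deaths
        print(f"HSMR = {hsmr:.1f}")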

  11. The utility of atmospheric analyses for the mitigation of artifacts in InSAR

    USGS Publications Warehouse

    Foster, James; Kealy, John; Cherubini, Tiziana; Businger, S.; Lu, Zhong; Murphy, Michael

    2013-01-01

    The numerical weather models (NWMs) developed by the meteorological community are able to provide accurate analyses of the current state of the atmosphere in addition to the predictions of the future state. To date, most attempts to apply the NWMs to estimate the refractivity of the atmosphere at the time of satellite synthetic aperture radar (SAR) data acquisitions have relied on predictive models. We test the hypothesis that performing a final assimilative routine, ingesting all available meteorological observations for the times of SAR acquisitions, and generating customized analyses of the atmosphere at those times will better mitigate atmospheric artifacts in differential interferograms. We find that, for our study area around Mount St. Helens (Amboy, Washington, USA), this approach is unable to model the refractive changes and provides no mean benefit for interferogram analysis. The performance is improved slightly by ingesting atmospheric delay estimates derived from the limited local GPS network; however, the addition of water vapor products from the GOES satellites reduces the quality of the corrections. We interpret our results to indicate that, even with this advanced approach, NWMs are not a reliable mitigation technique for regions such as Mount St. Helens with highly variable moisture fields and complex topography and atmospheric dynamics. It is possible, however, that the addition of more spatially dense meteorological data to constrain the analyses might significantly improve the performance of weather modeling of atmospheric artifacts in satellite radar interferograms.

  12. Design and burn-up analyses of new type holder for silicon neutron transmutation doping.

    PubMed

    Komeda, Masao; Arai, Masaji; Tamai, Kazuo; Kawasaki, Kozo

    2016-07-01

    We have developed a new silicon irradiation holder with a neutron filter to increase the irradiation efficiency. The neutron filter is made of an alloy of aluminum and B4C particles. We fabricated a new holder based on the results of design analyses. The filter has limited use in applications requiring prolonged irradiation, owing to the gradual depletion of (10)B in the B4C particles. We investigated the influence of (10)B reduction on the doping distribution in a silicon ingot by using the Monte Carlo code MVP. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Experiences of Structured Elicitation for Model-Based Cost-Effectiveness Analyses.

    PubMed

    Soares, Marta O; Sharples, Linda; Morton, Alec; Claxton, Karl; Bojke, Laura

    2018-06-01

    Empirical evidence supporting the cost-effectiveness estimates of particular health care technologies may be limited, or it may even be missing entirely. In these situations, additional information, often in the form of expert judgments, is needed to reach a decision. There are formal methods to quantify experts' beliefs, termed structured expert elicitation (SEE), but only limited research is available in support of methodological choices. Perhaps as a consequence, the use of SEE in the context of cost-effectiveness modelling is limited. This article reviews applications of SEE in cost-effectiveness modelling with the aim of summarizing the basis for methodological choices made in each application and recording the difficulties and challenges reported by the authors in the design, conduct, and analyses. The methods used in each application were extracted along with the criteria used to support methodological and practical choices and any issues or challenges discussed in the text. Issues and challenges were extracted using an open field, and then categorised and grouped for reporting. The review demonstrates considerable heterogeneity in methods used, and authors acknowledge great methodological uncertainty in justifying their choices. Specificities of the context area emerging as potentially important in determining further methodological research in elicitation are between-expert variation and its interpretation, the fact that substantive experts in the area may not be trained in quantitative subjects, that judgments are often needed on various parameter types, the need for some form of assessment of validity, and the need for more integration with behavioural research to devise relevant debiasing strategies. This review of experiences of SEE highlights a number of specificities/constraints that can shape the development of guidance and target future research efforts in this area. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes

  14. Insights into the phylogeny of Northern Hemisphere Armillaria: Neighbor-net and Bayesian analyses of translation elongation factor 1-α gene sequences

    Treesearch

    Ned B. Klopfenstein; Jane E. Stewart; Yuko Ota; John W. Hanna; Bryce A. Richardson; Amy L. Ross-Davis; Ruben D. Elias-Roman; Kari Korhonen; Nenad Keca; Eugenia Iturritxa; Dionicio Alvarado-Rosales; Halvor Solheim; Nicholas J. Brazee; Piotr Lakomy; Michelle R. Cleary; Eri Hasegawa; Taisei Kikuchi; Fortunato Garza-Ocanas; Panaghiotis Tsopelas; Daniel Rigling; Simone Prospero; Tetyana Tsykun; Jean A. Berube; Franck O. P. Stefani; Saeideh Jafarpour; Vladimir Antonin; Michal Tomsovsky; Geral I. McDonald; Stephen Woodward; Mee-Sook Kim

    2017-01-01

    Armillaria possesses several intriguing characteristics that have inspired wide interest in understanding phylogenetic relationships within and among species of this genus. Nuclear ribosomal DNA sequence–based analyses of Armillaria provide only limited information for phylogenetic studies among widely divergent taxa. More recent studies have shown that translation...

  15. Meta-Analyses and Orthodontic Evidence-Based Clinical Practice in the 21st Century

    PubMed Central

    Papadopoulos, Moschos A.

    2010-01-01

    Introduction: The aim of this systematic review was to assess the orthodontic-related issues which currently provide the best evidence as documented by meta-analyses, by critically evaluating and discussing the methodology used in these studies. Material and Methods: Several electronic databases were searched and handsearching was also performed in order to identify the corresponding meta-analyses investigating orthodontic-related subjects. In total, 197 studies were retrieved initially. After applying specific inclusion and exclusion criteria, 27 articles were identified as meta-analyses treating orthodontic-related subjects. Results: Many of these 27 papers presented sufficient quality and followed appropriate meta-analytic approaches to quantitatively synthesize data and presented adequately supported evidence. However, the methodology used in some of them presented weaknesses, limitations or deficiencies. Consequently, the topics in orthodontics which currently provide the best evidence include some issues related to Class II or Class III treatment, treatment of transverse problems, external apical root resorption, dental anomalies, such as congenital missing teeth and tooth transposition, frequency of severe occlusal problems, nickel hypersensitivity, obstructive sleep apnea syndrome, and computer-assisted learning in orthodontic education. Conclusions: Only a few orthodontic-related issues have been so far investigated by means of MAs. In addition, for some of these issues investigated in the corresponding MAs no definite conclusions could be drawn, due to significant methodological deficiencies of these studies. According to this investigation, it can be concluded that at the beginning of the 21st century there is evidence for only a few orthodontic-related issues as documented by meta-analyses, and more well-conducted, high-quality research studies are needed to produce strong evidence in order to support evidence-based clinical practice in orthodontics. PMID

  16. Daily home gardening improved survival for older people with mobility limitations: an 11-year follow-up study in Taiwan.

    PubMed

    Lêng, Chhian Hūi; Wang, Jung-Der

    2016-01-01

    To test the hypothesis that gardening is beneficial for survival after taking time-dependent comorbidities, mobility, and depression into account in a longitudinal middle-aged (50-64 years) and older (≥65 years) cohort in Taiwan. The cohort contained 5,058 nationally sampled adults ≥50 years old from the Taiwan Longitudinal Study on Aging (1996-2007). Gardening was defined as growing flowers, gardening, or cultivating potted plants for pleasure with five different frequencies. We calculated hazard ratios for the mortality risks of gardening and adjusted the analysis for socioeconomic status, health behaviors and conditions, depression, mobility limitations, and comorbidities. Survival models also examined time-dependent effects and risks in each stratum contingent upon baseline mobility and depression. Sensitivity analyses used imputation methods for missing values. Daily home gardening was associated with a high survival rate (hazard ratio: 0.82; 95% confidence interval: 0.71-0.94). The benefits were robust for those with mobility limitations, but without depression at baseline (hazard ratio: 0.64, 95% confidence interval: 0.48-0.87) when adjusted for time-dependent comorbidities, mobility limitations, and depression. Chronic or relapsed depression weakened the protection of gardening. For those without mobility limitations and not depressed at baseline, gardening had no effect. Sensitivity analyses using different imputation methods yielded similar results and corroborated the hypothesis. Daily gardening for pleasure was associated with reduced mortality for Taiwanese >50 years old with mobility limitations but without depression.

  17. Central nervous system medication use and incident mobility limitation in community elders: the Health, Aging, and Body Composition study.

    PubMed

    Boudreau, Robert M; Hanlon, Joseph T; Roumani, Yazan F; Studenski, Stephanie A; Ruby, Christine M; Wright, Rollin M; Hilmer, Sarah N; Shorr, Ronald I; Bauer, Douglas C; Simonsick, Eleanor M; Newman, Anne B

    2009-10-01

    To evaluate whether CNS medication use in older adults was associated with a higher risk of future incident mobility limitation. This 5-year longitudinal cohort study included 3055 participants from the Health, Aging, and Body Composition (Health ABC) study who were well-functioning at baseline. CNS medication use (benzodiazepine and opioid receptor agonists, antipsychotics, and antidepressants) was determined yearly (except year 4) during in-home or in-clinic interviews. Summated standardized daily doses (low, medium, and high) and duration of CNS drug use were computed. Incident mobility limitation was operationalized as two consecutive self-reports of having any difficulty walking 1/4 mile or climbing 10 steps without resting every 6 months after baseline. Multivariable Cox proportional hazard analyses were conducted adjusting for demographics, health behaviors, health status, and common indications for CNS medications. Each year at least 13.9% of participants used a CNS medication. By year 6, overall 49% had developed incident mobility limitation. In multivariable models, CNS medication users compared to never users showed a higher risk for incident mobility limitation (adjusted hazard ratio (Adj. HR) 1.28; 95% confidence interval (CI) 1.12-1.47). Similar findings of increased risk were seen in analyses examining dose- and duration-response relationships. CNS medication use is independently associated with an increased risk of future incident mobility limitation in community-dwelling elderly. Further studies are needed to determine the impact of reducing CNS medication exposure on mobility problems. 2009 John Wiley & Sons, Ltd.
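
    For illustration only (not the study's code; assumes the lifelines package and a fabricated toy dataset): a sketch of the kind of Cox proportional hazards model behind the adjusted hazard ratios above.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 3055
        df = pd.DataFrame({
            "cns_user": rng.binomial(1, 0.14, size=n),   # any CNS medication use
            "age": rng.normal(74.0, 3.0, size=n),
        })
        # Simulated follow-up: exposure raises the hazard of mobility limitation; censor at 6 years.
        baseline = rng.exponential(6.0, size=n)
        df["years_to_event"] = np.minimum(baseline / np.exp(0.25 * df["cns_user"]), 6.0)
        df["limitation"] = (df["years_to_event"] < 6.0).astype(int)

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years_to_event", event_col="limitation")
        print(cph.hazard_ratios_)  # adjusted HRs for cns_user and age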

  18. How sex- and age-disaggregated data and gender and generational analyses can improve humanitarian response.

    PubMed

    Mazurana, Dyan; Benelli, Prisca; Walker, Peter

    2013-07-01

    Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crisis-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  20. Criteria for the assessment of analyser practicability

    PubMed Central

    Biosca, C.; Galimany, R.

    1993-01-01

    This article lists the theoretical criteria that need to be considered to assess the practicability of an automatic analyser. Two essential sets of criteria should be taken into account when selecting an automatic analyser: ‘reliability’ and ‘practicability’. Practicability covers the features that provide information about the suitability of an analyser for specific working conditions. These practicability criteria are classified in this article and include the environment; work organization; versatility and flexibility; safety controls; staff training; maintenance and operational costs. PMID:18924972