Sample records for level set method

  1. A simple mass-conserved level set method for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the present method is as simple to apply as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while conserving mass. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
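
    As a concrete illustration of the compensation idea, the following is a minimal structured-grid sketch, assuming a simple uniform-shift source term sized with a smoothed delta function; the paper derives its source term analytically, and the grid, velocity field, and tolerances here are illustrative choices.

      import numpy as np

      def heaviside(phi, eps):
          # Smoothed Heaviside: ~1 inside the region phi < 0, ~0 outside.
          return 0.5 * (1.0 - np.tanh(phi / eps))

      def delta(phi, eps):
          # Smoothed delta function concentrated on the interface phi = 0.
          return 0.5 / eps * (1.0 - np.tanh(phi / eps) ** 2)

      def advect(phi, u, v, dx, dt):
          # First-order upwind discretization of phi_t + u phi_x + v phi_y = 0.
          phi_x = np.where(u > 0, (phi - np.roll(phi, 1, 0)) / dx,
                                  (np.roll(phi, -1, 0) - phi) / dx)
          phi_y = np.where(v > 0, (phi - np.roll(phi, 1, 1)) / dx,
                                  (np.roll(phi, -1, 1) - phi) / dx)
          return phi - dt * (u * phi_x + v * phi_y)

      n = 128
      dx, eps = 1.0 / n, 1.5 / n
      x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
      phi = np.sqrt((x - 0.5) ** 2 + (y - 0.75) ** 2) - 0.15  # circle of radius 0.15
      u, v = -(y - 0.5), (x - 0.5)                            # rigid rotation field
      m0 = heaviside(phi, eps).sum() * dx * dx                # initial enclosed area
      dt = 0.25 * dx
      for _ in range(200):
          phi = advect(phi, u, v, dx, dt)
          m = heaviside(phi, eps).sum() * dx * dx
          # Source step: a uniform shift of phi whose first-order area change,
          # -sum(delta) * shift, cancels the mass gained or lost this step.
          phi += (m - m0) / max(delta(phi, eps).sum() * dx * dx, 1e-12)
      m_end = heaviside(phi, eps).sum() * dx * dx
      print("relative area error: %.2e" % (abs(m_end - m0) / m0))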

  2. Development of a coupled level set and immersed boundary method for predicting dam break flows

    NASA Astrophysics Data System (ADS)

    Yu, C. H.; Sheu, Tony W. H.

    2017-12-01

    Dam-break flow over an immersed stationary object is investigated using a coupled level set (LS)/immersed boundary (IB) method developed on Cartesian grids. This approach adopts an improved interface-preserving level set method, which comprises three solution steps, and a differential-based interpolation immersed boundary method to treat fluid-fluid and solid-fluid interfaces, respectively. In the first step of this level set method, the level set function ϕ is advected by a pure advection equation. The intermediate step is performed to obtain a new level set value through a new smoothed Heaviside function. In the final solution step, a mass correction term is added to the re-initialization equation to ensure that the new level set is a distance function and to conserve the mass bounded by the interface. For accurate calculation of the level set value, the four-point upwinding combined compact difference (UCCD) scheme with a three-point boundary combined compact difference scheme is applied to approximate the first-order derivative term in the level set equation. For the immersed boundary method, application of an artificial momentum forcing term at points in cells containing both fluid and solid allows imposition of a velocity condition to account for the presence of the solid object. The incompressible Navier-Stokes solutions are calculated using the projection method. Numerical results show that the coupled LS/IB method can not only predict the interface accurately but also preserve mass conservation excellently for the dam-break flow.

  3. An arbitrary-order Runge–Kutta discontinuous Galerkin approach to reinitialization for banded conservative level sets

    DOE PAGES

    Jibben, Zechariah Joel; Herrmann, Marcus

    2017-08-24

    Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.

  4. High-Order Discontinuous Galerkin Level Set Method for Interface Tracking and Re-Distancing on Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Greene, Patrick; Nourgaliev, Robert; Schofield, Sam

    2015-11-01

    A new sharp high-order interface tracking method for multi-material flow problems on unstructured meshes is presented. The method combines the marker-tracking algorithm with a discontinuous Galerkin (DG) level set method to implicitly track interfaces. DG projection is used to provide a mapping from the Lagrangian marker field to the Eulerian level set field. For the level set re-distancing, we developed a novel marching method that takes advantage of the unique features of the DG representation of the level set. The method efficiently marches outward from the zero level set with values in the new cells being computed solely from cell neighbors. Results are presented for a number of different interface geometries including ones with sharp corners and multiple hierarchical level sets. The method can robustly handle the level set discontinuities without explicit utilization of solution limiters. Results show that the expected high order (3rd and higher) of convergence for the DG representation of the level set is obtained for smooth solutions on unstructured meshes. High-order re-distancing on irregular meshes is a must for applications where the interfacial curvature is important for the underlying physics, such as surface tension, wetting and detonation shock dynamics. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Information management release number LLNL-ABS-675636.

  5. Evaluation and comparison of predictive individual-level general surrogates.

    PubMed

    Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth

    2018-07-01

    An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
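
    The role of cross-validation here can be illustrated with a toy leave-one-trial-out loop, assuming synthetic data and a plain linear surrogate-to-outcome model rather than the authors' estimator: fit on all trials but one, then score the absolute prediction error on the held-out trial.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic multi-trial data: surrogate s and clinical outcome y per subject.
      trials = []
      for t in range(5):
          s = rng.normal(size=200)
          y = 1.5 * s + rng.normal(scale=0.5 + 0.1 * t, size=200)  # trial-specific noise
          trials.append((s, y))

      errors = []
      for held_out in range(len(trials)):
          train = [trials[t] for t in range(len(trials)) if t != held_out]
          s_tr = np.concatenate([s for s, _ in train])
          y_tr = np.concatenate([y for _, y in train])
          slope, intercept = np.polyfit(s_tr, y_tr, 1)   # fit on the training trials
          s_te, y_te = trials[held_out]
          errors.append(np.mean(np.abs(y_te - (slope * s_te + intercept))))

      # Cross-validated estimate of the absolute prediction error in a new trial.
      print("expected absolute prediction error: %.3f" % np.mean(errors))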

  6. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region-growing methods, which require the initial contour to be located near the final object boundary, suffer from leakage into the tissues neighboring the pancreas. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which accurately extracts the pancreas. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and fewer false segmentations in pancreas extraction. PMID:24066016

  7. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates when performing a convergence check.
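
    The recursion levels can be pictured with the following control-flow sketch, assuming hypothetical level values and a simple quadrant subdivision (the patent specifies the structure, not this code): recursion is dispatched to a thread pool above the intermediate level and runs serially below it.

      from concurrent.futures import ThreadPoolExecutor

      BOTTOM_LEVEL = 4        # where the recursive division stops
      INTERMEDIATE_LEVEL = 2  # where parallel dispatch switches to serial

      def subdivide(region):
          # Split an image section, given as ((row0, row1), (col0, col1)), into quarters.
          (r0, r1), (c0, c1) = region
          rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
          return [((r0, rm), (c0, cm)), ((r0, rm), (cm, c1)),
                  ((rm, r1), (c0, cm)), ((rm, r1), (cm, c1))]

      def segment(region, level, pool=None):
          if level >= BOTTOM_LEVEL:
              return [region]                        # leaf: segment this section directly
          quads = subdivide(region)
          if level < INTERMEDIATE_LEVEL and pool is not None:
              futures = [pool.submit(segment, q, level + 1, pool) for q in quads]
              parts = [f.result() for f in futures]  # parallel recursion
          else:
              parts = [segment(q, level + 1) for q in quads]  # serial recursion
          return [leaf for p in parts for leaf in p] # merge the child results

      with ThreadPoolExecutor(max_workers=8) as pool:
          leaves = segment(((0, 256), (0, 256)), level=0, pool=pool)
      print(len(leaves), "sections reached the bottom level")  # 4**4 = 256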

  8. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE PAGES

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    2017-03-02

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.
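
    As a structured-grid analogue of the normal-direction evolution (the paper works on tetrahedral AMR meshes with Runge-Kutta time integration; the grid, speed, and initial front below are illustrative), a first-order Godunov upwinding of phi_t + V |grad phi| = 0 looks like this:

      import numpy as np

      def grad_mag_upwind(phi, dx, V):
          # Godunov approximation of |grad phi| for a front moving with speed V.
          dxm = (phi - np.roll(phi, 1, 0)) / dx   # backward differences
          dxp = (np.roll(phi, -1, 0) - phi) / dx  # forward differences
          dym = (phi - np.roll(phi, 1, 1)) / dx
          dyp = (np.roll(phi, -1, 1) - phi) / dx
          if V > 0:   # expanding front: entropy-satisfying one-sided choices
              gx = np.maximum(np.maximum(dxm, 0) ** 2, np.minimum(dxp, 0) ** 2)
              gy = np.maximum(np.maximum(dym, 0) ** 2, np.minimum(dyp, 0) ** 2)
          else:
              gx = np.maximum(np.minimum(dxm, 0) ** 2, np.maximum(dxp, 0) ** 2)
              gy = np.maximum(np.minimum(dym, 0) ** 2, np.maximum(dyp, 0) ** 2)
          return np.sqrt(gx + gy)

      n, V = 100, 1.0
      dx = 1.0 / n
      x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
      phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.2  # circle of radius 0.2
      dt = 0.5 * dx / abs(V)                                # CFL-limited step
      steps = 40
      for _ in range(steps):
          phi -= dt * V * grad_mag_upwind(phi, dx, V)       # forward-Euler step
      band = np.abs(phi) < dx                               # cells near the front
      r = np.sqrt((x[band] - 0.5) ** 2 + (y[band] - 0.5) ** 2).mean()
      print("front radius ~ %.3f (expected %.3f)" % (r, 0.2 + V * steps * dt))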

  9. Robust and fast-converging level set method for side-scan sonar image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Li, Qingwu; Huo, Guanying

    2017-11-01

    A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using the adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated by the presegmentation. The proposed method is successfully applied to both synthetic images with speckle noise and real SSS images. Experimental results show that the proposed method needs far fewer iterations and is therefore much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method usually obtains more accurate segmentation results than the other methods.

  10. A review of level-set methods and some recent applications

    NASA Astrophysics Data System (ADS)

    Gibou, Frederic; Fedkiw, Ronald; Osher, Stanley

    2018-01-01

    We review some of the recent advances in level-set methods and their applications. In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.
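
    For reference, the standard formulation that this literature builds on can be stated compactly in LaTeX: the interface is the zero contour of phi, moved either by an external velocity or with a prescribed normal speed, and periodically reinitialized toward a signed distance function.

      \begin{align}
        \Gamma(t) &= \{\mathbf{x} : \phi(\mathbf{x},t) = 0\}
          && \text{(interface as the zero level set)} \\
        \phi_t + \mathbf{u}\cdot\nabla\phi &= 0
          && \text{(advection by a velocity field $\mathbf{u}$)} \\
        \phi_t + V_n\,|\nabla\phi| &= 0
          && \text{(motion with normal speed $V_n$)} \\
        \phi_\tau + \operatorname{sgn}(\phi_0)\left(|\nabla\phi| - 1\right) &= 0
          && \text{(reinitialization to a signed distance function)}
      \end{align}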

  11. A level set method for multiple sclerosis lesion segmentation.

    PubMed

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment the MS lesions, the normal tissue region (including GM and WM), the CSF, and the background in FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy.

  12. Efficient level set methods for constructing wavefronts in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Tien

    2007-10-01

    Wavefront construction in geometrical optics has long faced the twin difficulties of dealing with multi-valued forms and resolution of wavefront surfaces. A recent change in viewpoint, however, has demonstrated that working in phase space on bicharacteristic strips using Eulerian methods can bypass both difficulties. The level set method for interface dynamics makes a suitable choice for the Eulerian method. Unfortunately, in three-dimensional space, the setting of interest for most practical applications, the advantages of this method are largely offset by a new problem: the high dimension of phase space. In this work, we present new types of level set algorithms that remove this obstacle and demonstrate their abilities to accurately construct wavefronts under high resolution. These results propel the level set method forward significantly as a competitive approach in geometrical optics under realistic conditions.

  13. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to that of the semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  14. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.

  15. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    NASA Astrophysics Data System (ADS)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. Our approach introduces a new segmentation method that combines the advantages of the fuzzy c-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.

  16. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  17. Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Islam, A.; Wheeler, M.

    2016-12-01

    Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. In shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and have the greatest potential to cause large-scale, long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using conventional level set methods is challenging. In PaLS, the level set function is approximated by a weighted sum of basis functions, and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated by recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of the source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted to any reactive transport model by switching the pre- and post-processing routines.
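
    A minimal sketch of the PaLS representation follows, assuming Gaussian bumps and hand-picked weights and centers purely for illustration (the paper's basis functions and optimizer may differ): the source zone is the region where a weighted sum of radial basis functions exceeds a level, so inversion tunes a handful of parameters instead of evolving a level set PDE.

      import numpy as np

      def pals_level_set(x, y, weights, centers, width):
          # phi(x) = sum_i w_i * exp(-|x - c_i|^2 / width^2), minus a level offset.
          phi = np.zeros_like(x)
          for w, (cx, cy) in zip(weights, centers):
              phi += w * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / width ** 2)
          return phi - 0.5          # the 0.5 level defines the shape boundary

      x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
      weights = [1.0, 0.8, -0.6]                  # negative weights carve holes
      centers = [(0.4, 0.4), (0.6, 0.5), (0.5, 0.45)]
      phi = pals_level_set(x, y, weights, centers, width=0.15)
      source_zone = phi > 0.0                     # indicator of the leak zone
      print("source-zone area fraction: %.3f" % source_zone.mean())
      # In an inversion, a gradient-based optimizer would adjust the weights
      # and centers to match observed concentrations, rather than evolving phi.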

  18. A discontinuous Galerkin conservative level set scheme for interface capturing in multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mfc86@cornell.edu; Desjardins, Olivier

    2013-09-15

    The accurate conservative level set (ACLS) method of Desjardins et al. [O. Desjardins, V. Moureau, H. Pitsch, An accurate conservative level set/ghost fluid method for simulating turbulent atomization, J. Comput. Phys. 227 (18) (2008) 8395–8416] is extended by using a discontinuous Galerkin (DG) discretization. DG allows for the scheme to have an arbitrarily high order of accuracy with the smallest possible computational stencil resulting in an accurate method with good parallel scaling. This work includes a DG implementation of the level set transport equation, which moves the level set with the flow field velocity, and a DG implementation of the reinitialization equation, which is used to maintain the shape of the level set profile to promote good mass conservation. A near second order converging interface curvature is obtained by following a height function methodology (common amongst volume of fluid schemes) in the context of the conservative level set. Various numerical experiments are conducted to test the properties of the method and show excellent results, even on coarse meshes. The tests include Zalesak's disk, two-dimensional deformation of a circle, time evolution of a standing wave, and a study of the Kelvin–Helmholtz instability. Finally, this novel methodology is employed to simulate the break-up of a turbulent liquid jet.
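
    The reinitialization equation referred to here is, in the conservative level set family, a compression-diffusion balance (the Olsson-Kreiss form); the paper solves it with a DG discretization, whereas the following one-dimensional finite-difference sketch, with an illustrative grid and profile, only shows the mechanism:

      import numpy as np

      n = 200
      dx = 1.0 / n
      eps = dx                     # target interface half-thickness
      x = np.linspace(0.0, 1.0, n)
      psi0 = 1.0 / (1.0 + np.exp(-(x - 0.5) / (4 * eps)))  # over-diffused profile

      def reinit_step(psi, dx, eps, dtau):
          # Compression psi*(1-psi) steepens the profile along the (fixed,
          # rightward) normal; diffusion eps*psi_xx balances it at width eps.
          f = psi * (1.0 - psi)
          out = psi.copy()
          out[1:-1] += dtau * (eps * (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx ** 2
                               - (f[2:] - f[:-2]) / (2 * dx))
          return out

      psi = psi0.copy()
      dtau = 0.25 * dx ** 2 / eps  # stable pseudo-time step
      for _ in range(400):
          psi = reinit_step(psi, dx, eps, dtau)
      # The integral of psi (the "mass") is conserved while the profile sharpens.
      print("mass before/after: %.4f / %.4f" % (psi0.sum() * dx, psi.sum() * dx))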

  1. Multi person detection and tracking based on hierarchical level-set method

    NASA Astrophysics Data System (ADS)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture, and shape information. These features are enrolled in a covariance matrix as the region descriptor. The present method is fully automated, without the need to manually specify the initial level-set contour; it is based on combined person detection and background subtraction methods. The edge-based information is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries, and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments performed on challenging video sequences show the effectiveness of the proposed method.

  2. Using the level set method in slab detachment modeling

    NASA Astrophysics Data System (ADS)

    Hillebrand, B.; Geenen, T.; Spakman, W.; van den Berg, A. P.

    2012-04-01

    Slab detachment plays an important role in the dynamics of several regions of the world, such as the Mediterranean-Carpathian region and the Anatolia-Aegean region. It is therefore important to gain better insight into the various aspects of this process through further modeling of this phenomenon. In this study we model slab detachment using a visco-plastic composite rheology consisting of diffusion, dislocation and Peierls creep. To gain more control over this visco-plastic composite rheology, as well as for some deterministic advantages, the models presented in this study make use of the level set method (Osher and Sethian, J. Comput. Phys., 1988). The level set method is a computational method for tracking interfaces. It works by creating a signed distance function that is zero at the interface of interest and is advected by the flow field. This not only allows one to track the interface but also to determine on which side of the interface a given point is located, since the level set function is defined in the entire domain and not just on the interface. The level set method is used in a wide variety of scientific fields, including geophysics. In this study we use the level set method to keep track of the interface between the slab and the mantle. This allows us to determine more precisely the moment and depth of slab detachment. It also allows us to clearly distinguish the mantle from the slab and therefore have more control over their different rheologies. We focus on the role of Peierls creep in the slab detachment process and on the use of the level set method in modeling this process.

  3. Left ventricle segmentation via two-layer level sets with circular shape constraint.

    PubMed

    Yang, Cong; Wu, Weiguo; Su, Yuanqi; Zhang, Shaoxiang

    2017-05-01

    This paper proposes a circular shape constraint and a novel two-layer level set method for the segmentation of the left ventricle (LV) from short-axis magnetic resonance images without training any shape models. Since the shape of the LV throughout the apex-base axis is close to a ring, we propose a circle fitting term in the level set framework to detect the endocardium. The circle fitting term penalizes deviation of the evolving contour from its fitting circle, and thereby deals well with issues in LV segmentation, especially the presence of the outflow tract in basal slices and the intensity overlap between TPM and the myocardium. To extract the whole myocardium, the circle fitting term is incorporated into the two-layer level set method. The endocardium and epicardium are represented by two specified level contours of the level set function, which are evolved by an edge-based and a region-based active contour model, respectively. The proposed method has been quantitatively validated on the public data set from the MICCAI 2009 challenge on LV segmentation. Experimental results and comparisons with the state-of-the-art demonstrate the accuracy and robustness of our method.

  4. Modeling Primary Breakup: A Three-Dimensional Eulerian Level Set/Vortex Sheet Method for Two-Phase Interface Dynamics

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    This paper is divided into four parts. First, the level set/vortex sheet method for three-dimensional two-phase interface dynamics is presented. Second, the LSS model for the primary breakup of turbulent liquid jets and sheets is outlined and all terms requiring subgrid modeling are identified. Then, preliminary three-dimensional results of the level set/vortex sheet method are presented and discussed. Finally, conclusions are drawn and an outlook to future work is given.

  5. A level-set method for two-phase flows with moving contact line and insoluble surfactant

    NASA Astrophysics Data System (ADS)

    Xu, Jian-Jun; Ren, Weiqing

    2014-04-01

    A level-set method for two-phase flows with moving contact line and insoluble surfactant is presented. The mathematical model consists of the Navier-Stokes equation for the flow field, a convection-diffusion equation for the surfactant concentration, together with the Navier boundary condition and a condition for the dynamic contact angle derived by Ren et al. (2010) [37]. The numerical method is based on the level-set continuum surface force method for two-phase flows with surfactant developed by Xu et al. (2012) [54] with some cautious treatment for the boundary conditions. The numerical method consists of three components: a flow solver for the velocity field, a solver for the surfactant concentration, and a solver for the level-set function. In the flow solver, the surface force is dealt with using the continuum surface force model. The unbalanced Young stress at the moving contact line is incorporated into the Navier boundary condition. A convergence study of the numerical method and a parametric study are presented. The influence of surfactant on the dynamics of the moving contact line is illustrated using examples. The capability of the level-set method to handle complex geometries is demonstrated by simulating a pendant drop detaching from a wall under gravity.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Hongzhuan; Lu, Zhiming; Vesselinov, Velimir Valentinov

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides cover the motivation, the level set method (LSM) and its algorithms, several examples, and future work.

  7. Level set methods for detonation shock dynamics using high-order finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Grogan, F. C.; Kolev, T. V.

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.

  8. Stabilised finite-element methods for solving the level set equation with mass conservation

    NASA Astrophysics Data System (ADS)

    Kabirou Touré, Mamadou; Fahsi, Adil; Soulaïmani, Azzeddine

    2016-01-01

    Finite-element methods are studied for solving moving interface flow problems using the level set approach and a stabilised variational formulation proposed in Touré and Soulaïmani (2012, 2016), coupled with a level set correction method. The level set correction is intended to enhance satisfaction of the mass conservation property. The stabilised variational formulation constrains the level set function to remain close to the signed distance function, while the mass conservation correction step enforces the mass balance. The eXtended finite-element method (XFEM) is used to take into account the discontinuities of the properties within an element. XFEM is applied to solve the Navier-Stokes equations for two-phase flows. The methods are numerically evaluated on several test cases, such as time-reversed vortex flow, rigid-body rotation of Zalesak's disc, sloshing flow in a tank, a dam-break over a bed, and a rising bubble subjected to buoyancy. The numerical results show the importance of satisfying global mass conservation in order to accurately capture the interface position.

  9. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids, obtained by judiciously choosing interpolation polynomials in regions of different grid levels, and (2) enhanced reinitialization via an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective use of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. To further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is markedly reduced for the test problems.

  10. Level-Set Topology Optimization with Aeroelastic Constraints

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated on three problems with mass, linear buckling, and flutter objectives and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  11. Lung segmentation from HRCT using united geometric active contours

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding; however, its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region statistics-based model synchronously: 1) the boundary term ensures the integrity of lung tissue; 2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient in lung segmentation than methods based on boundary or region information alone. Results are shown by 3D lung surface reconstruction, which indicates that the method will play an important role in the design of computer-aided diagnostic (CAD) systems.

  12. Online monitoring of oil film using electrical capacitance tomography and level set method.

    PubMed

    Xue, Q; Sun, B Y; Cui, Z Q; Ma, M; Wang, H X

    2015-08-01

    In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way to monitor the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually, since the oil-air interface is not obvious in the reconstructed images. Moreover, artifacts in the reconstructions seriously degrade the effectiveness of image segmentation techniques such as the level set method, which is furthermore unsuitable for online monitoring due to its low computational speed. To address these problems, a modified level set method is developed: a distance-regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution variation, the oil-air interface detected in one image is used as the initial contour for the subsequent frame, so that the propagation from the initial contour to the boundary is greatly accelerated, making real-time tracking possible. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experimental results indicate that the modified level set method is capable of accurately visualizing the oil-air interface online.

  13. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate the interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  14. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    This paper discusses an implicit finite difference scheme for solving the heat equation and a simple level set method for capturing the interface between the solid and liquid phases, which together are used to solve Stefan problems.
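
    A one-dimensional sketch in this spirit, assuming explicit time stepping and illustrative parameters (the paper uses an implicit heat solver): the liquid occupies phi < 0, the front sits at phi = 0, and the Stefan condition V = -dT/dx at the front advects the level set.

      import numpy as np

      n, L = 400, 2.0
      dx = L / n
      dt = 1.0e-5                  # explicit stability requires dt < dx**2 / 2
      x = np.linspace(0.0, L, n)
      s0 = 0.2                     # initial front position
      phi = x - s0                 # level set: liquid where phi < 0, front at phi = 0
      T = np.where(phi < 0, 1.0 - x / s0, 0.0)  # hot wall at x = 0, melt temp 0

      for _ in range(20000):
          lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx ** 2
          T = np.where(phi < 0, T + dt * lap, 0.0)  # diffuse heat in the liquid only
          T[0] = 1.0                                # Dirichlet hot-wall condition
          i = np.searchsorted(phi, 0.0)             # first grid point in the solid
          V = (T[i - 1] - T[i]) / dx                # Stefan condition: V = -dT/dx
          phi -= dt * V                             # advect phi (here |phi_x| = 1)

      # The front grows roughly like sqrt(t), as in the classical Stefan solution.
      print("front moved from %.2f to %.2f" % (s0, x[np.searchsorted(phi, 0.0)]))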

  15. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. To overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes the capture settings into account. The method computes the colorimetric values of a measured image in five main steps. These include converting the RGB values to their equivalents under the training settings, through factors based on an imaging-system model, so as to build a bridge between different settings, and applying scaling factors in the preparatory steps of the transformation mapping, to avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.
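
    The polynomial-mapping core of such a characterization can be sketched as follows, with synthetic training data and a single exposure-scaling factor standing in for the paper's imaging-system-model factors: RGB captured at a different setting is first rescaled to its training-setting equivalent and then mapped to CIE XYZ by a least-squares polynomial.

      import numpy as np

      def poly_features(rgb):
          # Second-order polynomial expansion of linear RGB values.
          r, g, b = rgb.T
          one = np.ones_like(r)
          return np.stack([one, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

      rng = np.random.default_rng(1)
      rgb_train = rng.uniform(0.05, 1.0, size=(100, 3))  # training patches
      M_true = rng.uniform(0.0, 1.0, size=(10, 3))       # synthetic ground truth map
      xyz_train = poly_features(rgb_train) @ M_true
      M, *_ = np.linalg.lstsq(poly_features(rgb_train), xyz_train, rcond=None)

      # A capture at half the training exposure is scaled back to its
      # training-setting equivalent before applying the polynomial mapping.
      exposure_factor = 0.5
      rgb_capture = rgb_train[:5] * exposure_factor      # same scene, new setting
      rgb_equiv = rgb_capture / exposure_factor          # equivalent training RGB
      xyz_pred = poly_features(rgb_equiv) @ M
      print(np.allclose(xyz_pred, xyz_train[:5]))        # True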

  16. Graphical Methods for Reducing, Visualizing and Analyzing Large Data Sets Using Hierarchical Terminologies

    PubMed Central

    Jing, Xia; Cimino, James J.

    2011-01-01

    Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds), as well as the contributions of child-level nodes to parent-level nodes. Results: We found that our methods can reduce large data sets to a manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach for reducing the size and complexity of large data sets and simplifying their visualization. The approach can be applied to any data set that is coded with a hierarchical terminology. PMID:22195119
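
    The roll-up-and-threshold idea can be sketched on a toy terminology (the tree and counts below are hypothetical): usage counts recorded at leaf terms are aggregated bottom-up through the hierarchy, and nodes whose aggregated count falls below a chosen threshold are pruned from the displayed graph.

      children = {
          "Disease": ["Infection", "Neoplasm"],
          "Infection": ["Viral", "Bacterial"],
          "Neoplasm": [], "Viral": [], "Bacterial": [],
      }
      counts = {"Viral": 40, "Bacterial": 5, "Neoplasm": 12}  # leaf query counts

      def rollup(node):
          # Aggregate counts bottom-up: a parent absorbs its children's totals.
          counts[node] = counts.get(node, 0) + sum(rollup(c) for c in children[node])
          return counts[node]

      def prune(node, threshold):
          # Keep a node (and its surviving children) only if its aggregated
          # count clears the threshold.
          if counts[node] < threshold:
              return None
          kept = [prune(c, threshold) for c in children[node]]
          return (node, [k for k in kept if k is not None])

      rollup("Disease")
      print(prune("Disease", threshold=10))
      # -> ('Disease', [('Infection', [('Viral', [])]), ('Neoplasm', [])])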

  17. Simulation of Heterogeneous Atom Probe Tip Shapes Evolution during Field Evaporation Using a Level Set Method and Different Evaporation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Li, Dongsheng; Xu, Wei

    2015-04-01

    In atom probe tomography (APT), accurate reconstruction of the spatial positions of field evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material in addition to the assumption of a steady state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layered-structured APT tips. The simulation results of the shape evolution by the level set model qualitatively agree with the finite element method and the literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of a heterogeneous tip, and the interface curvature can potentially lead to artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides a smoother interface representation with the aid of the intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations, and the effect of the evaporation laws on the tip shape evolution is also presented.

  18. Demons versus Level-Set motion registration for coronary 18F-sodium fluoride PET.

    PubMed

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R; Fletcher, Alison; Motwani, Manish; Thomson, Louise E; Germano, Guido; Dey, Damini; Berman, Daniel S; Newby, David E; Slomka, Piotr J

    2016-02-27

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has recently been shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac-gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG-gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the coronary arteries extracted from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise, owing to the use of counts from the full PET acquisition, and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields than the level-set method, with motion that is physiologically plausible; the level-set technique will therefore likely require additional post-processing steps. On the other hand, the observed TBR increases were highest for the level-set technique. Further investigation of the optimal registration technique for this novel coronary PET imaging approach is warranted.

  1. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer-required lead times and stochastic processing times, with the aim of improving service level and tardiness. These methods are developed as decision support for situations where capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to a constant provided capacity in single- and multi-machine settings. It is shown that the tested capacity setting methods can increase the service level and decrease average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer-required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649

  2. Improvements to Level Set, Immersed Boundary methods for Interface Tracking

    NASA Astrophysics Data System (ADS)

    Vogl, Chris; Leveque, Randy

    2014-11-01

    It is not uncommon to find oneself solving a moving boundary problem under flow in the context of some application. Of particular interest is the case where the moving boundary exerts a curvature-dependent force on the liquid. Such a force arises when the boundary is resistant to bending or has surface tension. Numerically speaking, stable computation of the curvature can be difficult, as the curvature is often described in terms of high-order derivatives of either marker particle positions or a level set function. To address this issue, the level set method is modified to track not only the position of the boundary, but the curvature as well. The definition of the signed-distance function used to modify the level set method is also used to develop an interpolation-free, closest-point method. These improvements are used to simulate a bending-resistant, inextensible boundary under shear flow, highlighting area and volume conservation as well as stable curvature calculation. Funded by an NSF MSPRF grant.

  3. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast marching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  4. Advanced two-layer level set with a soft distance constraint for dual surfaces segmentation in medical images

    NASA Astrophysics Data System (ADS)

    Ji, Yuanbo; van der Geest, Rob J.; Nazarian, Saman; Lelieveldt, Boudewijn P. F.; Tao, Qian

    2018-03-01

    Anatomical objects in medical images very often have dual contours or surfaces that are highly correlated. Manually segmenting both of them by following local image details is tedious and subjective. In this study, we proposed a two-layer region-based level set method with a soft distance constraint, which not only regularizes the level set evolution at two levels, but also imposes prior information on wall thickness in an effective manner. By updating the level set function and the distance constraint functions alternately, the method simultaneously optimizes both contours while regularizing their distance. The method was applied to segment the inner and outer walls of both the left atrium (LA) and the left ventricle (LV) from MR images, using a rough initialization from inside the blood pool. Compared to manual annotation from experienced observers, the proposed method achieved an average perpendicular distance (APD) of less than 1 mm for the LA segmentation, and less than 1.5 mm for the LV segmentation, at both the inner and outer contours. The method can be used as a practical tool for fast and accurate dual wall annotation given proper initialization.

  5. A level set-based topology optimization method for simultaneous design of elastic structure and coupled acoustic cavity using a two-phase material model

    NASA Astrophysics Data System (ADS)

    Noguchi, Yuki; Yamamoto, Takashi; Yamada, Takayuki; Izui, Kazuhiro; Nishiwaki, Shinji

    2017-09-01

    This paper proposes a level set-based topology optimization method for the simultaneous design of acoustic and structural material distributions. In this study, we develop a two-phase material model, a mixture of an elastic material and an acoustic medium, to represent an elastic structure and an acoustic cavity by controlling a volume fraction parameter. In the proposed model, boundary conditions at the two-phase material boundaries are satisfied naturally, avoiding the need to express these boundaries explicitly. We formulate a topology optimization problem to minimize the sound pressure level using this two-phase material model and a level set-based method that obtains topologies free from grayscales. The topological derivative of the objective functional is approximately derived using a variational approach and the adjoint variable method, and is utilized to update the level set function via a time-evolutionary reaction-diffusion equation. Several numerical examples present optimal acoustic and structural topologies that minimize the sound pressure generated from a vibrating elastic structure.
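
    A schematic sketch of one explicit step of the time-evolutionary reaction-diffusion update described above, assuming a precomputed topological-derivative field; the time step, regularization parameter, and clipping convention are illustrative, and the actual formulation in the paper includes normalization and constraint terms not shown here.

    ```python
    import numpy as np

    def laplacian(phi, dx=1.0):
        """Five-point Laplacian with edge-replicated (Neumann) boundaries."""
        p = np.pad(phi, 1, mode='edge')
        return (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                - 4.0 * p[1:-1, 1:-1]) / dx**2

    def update_level_set(phi, td, dt=0.1, tau=1e-2, dx=1.0):
        """One explicit step of phi_t = -td + tau * Laplacian(phi), where td
        is the topological-derivative field driving the evolution; phi is
        clipped to stay bounded, as in piecewise-constant formulations."""
        phi = phi + dt * (-td + tau * laplacian(phi, dx))
        return np.clip(phi, -1.0, 1.0)
    ```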

  6. A level set method for cupping artifact correction in cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shipeng; Li, Haibo; Ge, Qi

    2015-08-15

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.

  7. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines a closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. Reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.

  8. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
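
    For contrast with the WENO5/Runge-Kutta scheme above, the sketch below implements the simple first-order Godunov upwind discretization of the front-propagation equation φ_t + R|∇φ| = 0 that is representative of common low-order implementations; the spread rate and grid parameters are illustrative.

    ```python
    import numpy as np

    def advance_front(phi, rate, dt, dx):
        """One first-order Godunov upwind step of phi_t + R|grad(phi)| = 0
        for an outward-propagating front (R >= 0); 'rate' may vary in space."""
        p = np.pad(phi, 1, mode='edge')
        dmx = (p[1:-1, 1:-1] - p[1:-1, :-2]) / dx   # backward difference in x
        dpx = (p[1:-1, 2:] - p[1:-1, 1:-1]) / dx    # forward difference in x
        dmy = (p[1:-1, 1:-1] - p[:-2, 1:-1]) / dx   # backward difference in y
        dpy = (p[2:, 1:-1] - p[1:-1, 1:-1]) / dx    # forward difference in y
        grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                       np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
        return phi - dt * rate * grad
    ```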

  9. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    PubMed

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used for the segmentation of left ventricular (LV) boundaries on echocardiography images, because the boundaries are indistinct and the signal-to-noise ratio of echocardiography images is low. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  10. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    NASA Astrophysics Data System (ADS)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV (Chan-Vese) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to accelerate the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
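
    The global CV model that the authors modify is available in standard libraries; below is a minimal scikit-image sketch on a grayscale microscopy image (the file name is hypothetical), which could serve as a starting point before adding the local terms and the shape-based stopping factor described above.

    ```python
    import numpy as np
    from skimage import io, color, exposure
    from skimage.segmentation import morphological_chan_vese

    # Assumes an RGB microscopy image; the file name is hypothetical.
    img = color.rgb2gray(io.imread('bone_marrow_sample.png'))
    img = exposure.rescale_intensity(img)   # linear contrast stretching

    # Global Chan-Vese: 200 iterations from a checkerboard initialization.
    mask = morphological_chan_vese(img, 200,
                                   init_level_set='checkerboard', smoothing=2)
    ```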

  11. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay and infinite support, while the level-set method, which is a front tracking technique, generates a sharp function with finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence, the fire front acquires a random character, too; hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role that it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method. Moreover, from the proposed formulation there follows a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  12. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

    Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently from their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two-way mixing between dense SPH volumes and grid-based liquid representations.

  13. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  14. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  15. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict features of pixels, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method, as follows. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, and then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set methods.

  16. An efficient mass-preserving interface-correction level set/ghost fluid method for droplet suspensions under depletion forces

    NASA Astrophysics Data System (ADS)

    Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca

    2018-01-01

    Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method, in which global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation; this is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach, enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.
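
    A simplified global variant of the mass-conservation bookkeeping: measure the volume enclosed by the zero level set with a smoothed Heaviside function and shift φ by a constant until a target volume is recovered. The ICLS method above instead applies a local correction advection near the interface; this NumPy sketch only illustrates the volume measurement and correction idea.

    ```python
    import numpy as np

    def smoothed_heaviside(phi, eps):
        """C^1 smoothed Heaviside commonly used in level set methods."""
        h = 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
        return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, h))

    def correct_volume(phi, target_volume, dx, eps, tol=1e-10, max_iter=50):
        """Bisection on a constant shift c so that the volume of {phi+c > 0},
        measured with the smoothed Heaviside, matches the target volume."""
        cell = dx**2
        lo, hi = -2 * eps, 2 * eps       # assumes the target is reachable here
        for _ in range(max_iter):
            c = 0.5 * (lo + hi)
            vol = smoothed_heaviside(phi + c, eps).sum() * cell
            if abs(vol - target_volume) < tol:
                break
            # increasing c enlarges the phi > 0 region, hence the volume
            lo, hi = (lo, c) if vol > target_volume else (c, hi)
        return phi + c
    ```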

  17. Hippocampus segmentation using locally weighted prior based level set

    NASA Astrophysics Data System (ADS)

    Achuthan, Anusha; Rajeswari, Mandava

    2015-12-01

    Segmentation of the hippocampus in the brain is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures, such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this main challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. Therefore, prior information such as shape and spatial information needs to be assimilated into existing segmentation methods to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been integrated in a global manner, which does not reflect the real scenario of clinical delineation. Therefore, this paper presents a level set model with locally integrated prior information. This work utilizes a mean shape model to provide automatic initialization for the level set evolution, and this model has been integrated as prior information into the level set model. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution. The edge weighting map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as geodesic active contour, yields an improvement of 9% in average Dice coefficient.

  18. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
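
    The pre-level-set steps above (thresholding the DL-CNN likelihood map and hole-filling to form the initial contour) reduce to a few lines; a SciPy sketch with an assumed likelihood array in [0, 1] and an illustrative threshold. Keeping only the largest connected component is an illustrative extra step, not stated in the abstract.

    ```python
    import numpy as np
    from scipy import ndimage

    def initial_contour(likelihood, thresh=0.5):
        """Threshold a likelihood map and fill holes to obtain a solid
        binary mask used to initialize the level set segmentation."""
        mask = ndimage.binary_fill_holes(likelihood > thresh)
        labels, n = ndimage.label(mask)
        if n > 1:  # keep the largest component as the bladder candidate
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            mask = labels == (np.argmax(sizes) + 1)
        return mask
    ```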

  19. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. This local objective function is then integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions, which represent a partition of the image domain, and a bias field that accounts for the intensity inhomogeneity of the image. Image segmentation and bias field estimation are therefore simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.

  20. Modeling wildland fire propagation with level set methods

    Treesearch

    V. Mallet; D.E Keyes; F.E. Fendell

    2009-01-01

    Level set methods are versatile and extensible techniques for general front tracking problems, including the practically important problem of predicting the advance of a fire front across expanses of surface vegetation. Given a rule, empirical or otherwise, to specify the rate of advance of an infinitesimal segment of fire front arc normal to itself (i.e., given the...

  1. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    PubMed

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  2. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum-gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from the manual segmentation results of physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those measured manually by physicians. Our preliminary results show that the DT level set is effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
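
    The propagating-shell idea, computing a histogram-derived threshold only within a thick band around the current front, can be sketched as follows; Otsu's method stands in for the optimal threshold described above, and the band half-width is illustrative.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def shell_threshold(image, phi, w=3.0):
        """Otsu threshold restricted to the shell |phi| < w around the zero
        level set; this threshold drives the front toward the boundary.
        Assumes the shell is non-empty and contains both object and
        background intensities."""
        shell = np.abs(phi) < w
        return threshold_otsu(image[shell])
    ```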

  3. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the energy functional construction. Unlike region-based level set methods that use force competition, moment competition is adopted to drive the contour evolution. A so-called three-point labeling scheme is proposed, in which three independent points (the weakly supervised information) are manually labeled on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and the weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  4. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    PubMed Central

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the different features available in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve, obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
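
    The spectral indices that feed the difference image are one-line formulas; a NumPy sketch for Landsat-style bands (band variable names are illustrative), whose pre-fire/post-fire difference would contribute to the difference image used to initialize the C-V model.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red + 1e-9)

    def nbr(nir, swir2):
        """Normalized Burn Ratio; the difference
        dNBR = nbr(prefire) - nbr(postfire) highlights burn scars."""
        return (nir - swir2) / (nir + swir2 + 1e-9)
    ```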

  5. From papers to practices: district level priority setting processes and criteria for family planning, maternal, newborn and child health interventions in Tanzania.

    PubMed

    Chitama, Dereck; Baltussen, Rob; Ketting, Evert; Kamazima, Switbert; Nswilla, Anna; Mujinja, Phares G M

    2011-10-21

    Successful priority setting is increasingly known to be an important aspect of achieving better family planning, maternal, newborn and child health (FMNCH) outcomes in developing countries. However, far too little attention has been paid to capturing and analysing the priority setting processes and criteria for FMNCH at district level. This paper seeks to capture and analyse the priority setting processes and criteria for FMNCH at district level in Tanzania. Specifically, we assess FMNCH actors' engagement and understanding, the criteria used in decision making and the way criteria are identified, and the information or evidence and tools used to prioritize FMNCH interventions at district level in Tanzania. We conducted an exploratory study mixing both qualitative and quantitative methods to capture and analyse the priority setting for FMNCH at district level, and to identify the criteria for priority setting. We purposively sampled the participants included in the study. We collected the data using the nominal group technique (NGT), in-depth interviews (IDIs) with key informants, and documentary review. We analysed the collected data using content analysis for the qualitative data and correlation analysis for the quantitative data. We found a number of shortfalls in the districts' priority setting processes and criteria which may lead to inefficient and unfair priority setting decisions in FMNCH. In addition, participants identified the priority setting criteria and established the perceived relative importance of the identified criteria. However, differences exist in the relative importance attached to the criteria by different stakeholders in the districts. In Tanzania, FMNCH content in both general development policies and sector policies is well articulated. However, the current priority setting processes for FMNCH at district level are wanting in several aspects, rendering them inefficient and unfair (or unsuccessful). To improve the district-level priority setting process for FMNCH interventions, we recommend a fundamental revision of the current process. The improvement strategy should utilize rigorous research methods combining normative and empirical approaches to analyse and correct past problems, while using good practices to improve the current priority setting process for FMNCH interventions. The suggested improvements might give room for an efficient and fair (or successful) priority setting process for FMNCH interventions.

  6. Mixed method evaluation of a community-based physical activity program using the RE-AIM framework: practical application in a real-world setting.

    PubMed

    Koorts, Harriet; Gillison, Fiona

    2015-11-06

    Communities are a pivotal setting in which to promote increases in child and adolescent physical activity behaviours. Interventions implemented in these settings require effective evaluation to facilitate translation of findings to wider settings. The aims of this paper are to i) present findings from a RE-AIM evaluation of a community-based physical activity program, and ii) review the methodological challenges faced when applying RE-AIM in practice. A single mixed-methods case study was conducted based on a concurrent triangulation design. Five sources of data were collected via interviews, questionnaires, archival records, documentation and field notes. Evidence was triangulated within RE-AIM to assess individual- and organisational-level program outcomes. Inconsistent availability of data and a lack of robust reporting challenged assessment of all five dimensions. Reach, Implementation and setting-level Adoption were less successful; Effectiveness and Maintenance at an individual and organisational level were moderately successful. Only community-level Adoption was highly successful, reflecting the key program goal of providing community-wide participation in sport and physical activity. This research highlighted important methodological constraints associated with the use of RE-AIM in practice settings. Future evaluators wishing to use RE-AIM may benefit from a mixed-method triangulation approach to offset challenges with data availability and reliability.

  7. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods: we observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
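
    For contrast, the standard iterative reinitialization that the single-step scheme replaces integrates φ_τ = sgn(φ0)(1 − |∇φ|) to pseudo-steady state; below is a first-order NumPy sketch with Godunov upwinding (the iteration count, time step, and sign smoothing are illustrative choices).

    ```python
    import numpy as np

    def reinitialize(phi, dx=1.0, iters=50):
        """Iterative reinitialization toward a signed distance function by
        integrating phi_tau = sign(phi0) * (1 - |grad phi|) with upwinding."""
        sgn = phi / np.sqrt(phi**2 + dx**2)   # smoothed sign of the initial field
        dt = 0.5 * dx                         # CFL-limited pseudo time step
        for _ in range(iters):
            p = np.pad(phi, 1, mode='edge')
            dmx = (p[1:-1, 1:-1] - p[1:-1, :-2]) / dx
            dpx = (p[1:-1, 2:] - p[1:-1, 1:-1]) / dx
            dmy = (p[1:-1, 1:-1] - p[:-2, 1:-1]) / dx
            dpy = (p[2:, 1:-1] - p[1:-1, 1:-1]) / dx
            # Godunov upwind gradient, switching on the sign of phi0
            gp = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                         np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
            gm = np.sqrt(np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2 +
                         np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
            grad = np.where(sgn > 0, gp, gm)
            phi = phi + dt * sgn * (1.0 - grad)
        return phi
    ```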

  8. Methods to Determine Recommended Feeder-Wide Advanced Inverter Settings for Improving Distribution System Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rylander, Matthew; Reno, Matthew J.; Quiroz, Jimmy E.

    This paper describes methods that a distribution engineer could use to determine advanced inverter settings to improve distribution system performance. These settings are for fixed power factor, volt-var, and volt-watt functionality. Depending on the level of detail that is desired, different methods are proposed to determine single settings applicable for all advanced inverters on a feeder or unique settings for each individual inverter. Seven distinctly different utility distribution feeders are analyzed to simulate the potential benefit in terms of hosting capacity, system losses, and reactive power attained with each method to determine the advanced inverter settings.

  9. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay, and it is not zero in an infinite domain, while the level-set method, which is a front tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front gets a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also the fire overcoming a fire-break zone, a case not resolved by models based on the level-set method. Moreover, from the proposed formulation there follows a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it needs to be subjected to future validation.

  10. A level-set procedure for the design of electromagnetic metamaterials.

    PubMed

    Zhou, Shiwei; Li, Wei; Sun, Guangyong; Li, Qing

    2010-03-29

    Achieving negative permittivity and negative permeability signifies a key topic of research in the design of metamaterials. This paper introduces a level-set based topology optimization method, in which the interface between the vacuum and metal phases is implicitly expressed by the zero-level contour of a higher-dimensional level-set function. Following a sensitivity analysis, the optimization maximizes the objective based on the normal direction of the level-set function and the induced current flow, thereby generating the desirable patterns of current flow on the metal surface. As a benchmark example, the U-shaped structure and its variations are obtained from the level-set topology optimization. Numerical examples demonstrate that both negative permittivity and negative permeability can be attained.

  11. The Thick Level-Set model for dynamic fragmentation

    DOE PAGES

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    2017-01-04

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. Additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  12. Segmentation of heterogeneous blob objects through voting and level set formulation

    PubMed Central

    Chang, Hang; Yang, Qing; Parvin, Bahram

    2009-01-01

    Blob-like structures occur often in nature, where they aid in cueing and the pre-attentive process. These structures often overlap, form perceptual boundaries, and are heterogeneous in shape, size, and intensity. In this paper, voting, Voronoi tessellation, and level set methods are combined to delineate blob-like structures. Voting and subsequent Voronoi tessellation provide the initial condition and the boundary constraints for each blob, while curve evolution through level set formulation provides refined segmentation of each blob within the Voronoi region. The paper concludes with the application of the proposed method to a dataset produced from cell based fluorescence assays and stellar data. PMID:19774202

  13. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods have been used to compute the first- and second-order spatial derivatives of computed tomographic colonography images, which is the starting point of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect initial polyp candidates, and experiments showed that it benefits the CAD scheme in both detection sensitivity and specificity compared to our previous work.

  14. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visual servoing of an automated stage. We propose a new energy function for the level set method. This function constrains the amount of light intensity inside the detected object contour, to control the number of detected objects. The algorithm is implemented in the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be tracked continuously even when other paramecia appear in the visual field and contact the tracked paramecium.

  15. Integrating Compact Constraint and Distance Regularization with Level Set for Hepatocellular Carcinoma (HCC) Segmentation on Computed Tomography (CT) Images

    NASA Astrophysics Data System (ADS)

    Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping

    2017-01-01

    This paper presents a variational level set approach to segment lesions with compact shapes in medical images. In this study, we address the segmentation of hepatocellular carcinomas, which usually exhibit various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, is applied in this method to describe the compactness of shapes. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions in Computed Tomography images, with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
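
    In two dimensions, the compactness promoted by the isoperimetric constraint is the dimensionless ratio 4πA/P², which equals 1 for a disk and decreases for less compact shapes; a scikit-image sketch for a binary segmentation mask.

    ```python
    import numpy as np
    from skimage.measure import perimeter

    def compactness(mask):
        """Isoperimetric ratio 4*pi*Area/Perimeter^2 of a binary mask:
        1.0 for a perfect disk, smaller for less compact shapes."""
        area = mask.sum()
        per = perimeter(mask)
        return 4.0 * np.pi * area / (per**2 + 1e-9)
    ```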

  16. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.

  17. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges.

    PubMed

    Li, Xu; Li, Chunming; Fedorov, Andriy; Kapur, Tina; Yang, Xiaoping

    2016-06-01

    In this paper, the authors propose a novel efficient method to segment ultrasound images of the prostate with weak boundaries. Segmentation of the prostate from ultrasound images with weak boundaries arises widely in clinical applications. One of the most typical examples is the diagnosis and treatment of prostate cancer. Accurate segmentation of the prostate boundaries from ultrasound images plays an important role in many prostate-related applications, such as the accurate placement of biopsy needles, the assignment of appropriate therapy in cancer treatment, and the measurement of prostate volume. Ultrasound images of the prostate are usually corrupted with intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation of the prostate an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term in the modified level set energy functional. The active band term deals with intensity inhomogeneities, and the edge descriptor term captures weak boundaries or rules out unwanted boundaries. The level set function of the proposed model is updated in a band region around the zero level set, which the authors call an active band. The active band restricts the method to local image information in a banded region around the prostate contour. Compared to traditional level set methods, the average intensities inside/outside the zero level set are computed only in this banded region. Thus, only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches within the band region around the prostate boundary. The authors incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in the ultrasound images. The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation results on real 3D TRUS prostate images show that the model can obtain a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show that the method can obtain a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was performed to evaluate the robustness of the proposed model. Prostate segmentation from ultrasound images with weak boundaries and unwanted edges is a difficult task; a novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is efficient and accurate.

  18. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several image segmentation methods for the lungs based on performance evaluation parameters (Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation, applied to lung images. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were then compared using the performance evaluation parameters, computed in MATLAB. A segmentation method is considered good if it yields the smallest MSE and the highest PSNR. The results show that connected threshold performs best on four of the five sample images, while threshold level set segmentation performs best on the remaining one. It can therefore be concluded that the connected threshold method is better than the other two methods for these cases.
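
    The two evaluation parameters reduce to a few lines; a NumPy sketch assuming 8-bit images (peak value 255), where one array is the segmentation output and the other the reference image.

    ```python
    import numpy as np

    def mse(a, b):
        """Mean squared error between two images of the same shape."""
        return np.mean((a.astype(float) - b.astype(float))**2)

    def psnr(a, b, peak=255.0):
        """Peak signal-to-noise ratio in dB; higher means closer agreement."""
        return 10.0 * np.log10(peak**2 / mse(a, b))
    ```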

  19. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

    This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The presented model is based on the symmetry of the human brain and the level set method. Firstly, the midsagittal plane of an MRI volume is located, slices potentially containing tumor are identified according to their symmetry, and an initial boundary of the tumor is determined, in the slice where the tumor appears largest, by watershed and morphological algorithms. Secondly, the level set method is applied to the initial boundary to drive the curve to evolve and stop at the appropriate tumor boundary. Lastly, the tumor boundary is projected slice by slice onto adjacent slices as initial boundaries through the volume to segment the whole tumor. The experimental results are compared with expert hand tracing and show relatively good agreement.
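
    The symmetry screening step can be sketched as comparing each slice with its left-right mirror about the midsagittal plane; slices whose asymmetry score is unusually high are flagged as potentially containing tumor. This assumes the midsagittal plane has already been aligned with an image axis, and the scoring and threshold below are illustrative.

    ```python
    import numpy as np

    def asymmetry_scores(volume):
        """Per-slice asymmetry: mean squared difference between each axial
        slice and its left-right mirror. High scores flag candidate slices."""
        return np.array([np.mean((s.astype(float)
                                  - np.fliplr(s).astype(float))**2)
                         for s in volume])

    def candidate_slices(volume, factor=2.0):
        """Flag slices whose asymmetry exceeds factor * median score."""
        scores = asymmetry_scores(volume)
        return np.where(scores > factor * np.median(scores))[0]
    ```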

  1. Improved score statistics for meta-analysis in single-variant and gene-level association studies.

    PubMed

    Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo

    2018-06-01

    Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they suffer substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
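    For reference, a small sketch of the standard score-statistic meta-analysis for a single variant, assuming each study k contributes a score U_k and its variance V_k; summing scores approximates the joint-analysis score on combined individual-level data. The paper's improvement lies in how the variance is approximated under unbalanced case-control ratios, which is not reproduced here.

        import numpy as np
        from scipy import stats

        def meta_score_test(U, V):
            """Combined z statistic and two-sided p-value for one variant."""
            U, V = np.asarray(U, float), np.asarray(V, float)
            z = U.sum() / np.sqrt(V.sum())   # joint score approximated by sums
            return z, 2.0 * stats.norm.sf(abs(z))

        # Three hypothetical studies:
        z, p = meta_score_test(U=[4.1, -1.2, 2.5], V=[9.0, 4.0, 6.5])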

  2. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    NASA Astrophysics Data System (ADS)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. The numerical solution is performed on a fixed mesh using the finite volume method. The surface tension effect is incorporated using the Continuum Surface Force model. The present model is validated against available results for stratified flow and for a rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
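    A minimal sketch of how two level set functions can partition the domain into three immiscible fluids, as used above; the sign conventions and the property values are assumptions for illustration.

        import numpy as np

        def fluid_labels(phi1, phi2):
            """Label each grid point 0, 1, or 2 from the two level sets.

            Assumed convention: fluid 0 where phi1 < 0; elsewhere fluid 1
            where phi2 < 0, and fluid 2 where phi2 >= 0.
            """
            labels = np.full(phi1.shape, 2, dtype=np.int8)
            labels[(phi1 >= 0) & (phi2 < 0)] = 1
            labels[phi1 < 0] = 0
            return labels

        def cell_density(labels, rho=(1000.0, 800.0, 1.2)):
            """Assemble a per-cell property field for the one-fluid formulation."""
            return np.choose(labels, rho)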

  3. Cross-cultural dataset for the evolution of religion and morality project.

    PubMed

    Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph

    2016-11-08

    A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.

  4. Cross-cultural dataset for the evolution of religion and morality project

    PubMed Central

    Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D.; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K.; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph

    2016-01-01

    A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set. PMID:27824332

  5. Computer-aided detection of bladder masses in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Samala, Ravi K.

    2017-03-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced and the non-contrast-enhanced regions of the bladder individually. In this study, we investigated methods for detection of bladder masses within the entire bladder. The bladder was segmented using our method that combines a deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed, and gray-level thresholds were applied to the contrast and non-contrast regions separately to extract the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates in a prescreening step. The candidates were mapped back to the 3D CT volume and segmented using our auto-initialized cascaded level set (AI-CALS) segmentation method. Twenty-seven morphological features were extracted for each candidate. A data set of 57 patients with 71 biopsy-proven bladder lesions was used, which was split into independent training and test sets: 42 training cases with 52 lesions, and 15 test cases with 19 lesions. Using the training set, feature selection was performed and a linear discriminant analysis (LDA) classifier was designed to merge the selected features for classification of bladder lesions versus false positives. The trained classifier was evaluated with the test set. FROC analysis showed that the system achieved a sensitivity of 86.5% at 3.3 FPs/case for the training set, and 84.2% at 3.7 FPs/case for the test set.
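    A hedged sketch of the final classification stage, using scikit-learn in place of whatever tooling the authors used: stepwise feature selection is approximated here by univariate selection, and a linear discriminant classifier merges the selected morphological features; the feature count k and the random data are placeholders.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        # X: (n_candidates, 27) morphological features; y: 1 = lesion, 0 = FP.
        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(200, 27)), rng.integers(0, 2, 200)

        clf = make_pipeline(
            SelectKBest(f_classif, k=8),           # k chosen for illustration
            LinearDiscriminantAnalysis(),
        )
        clf.fit(X, y)
        lesion_scores = clf.predict_proba(X)[:, 1]  # per-candidate likelihood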

  6. Combining multiple tools outperforms individual methods in gene set enrichment analyses.

    PubMed

    Alhamdoosh, Monther; Ng, Milica; Wilson, Nicholas J; Sheridan, Julie M; Huynh, Huy; Wilson, Michael J; Ritchie, Matthew E

    2017-02-01

    Gene set enrichment (GSE) analysis allows researchers to efficiently extract biological insight from long lists of differentially expressed genes by interrogating them at a systems level. In recent years, there has been a proliferation of GSE analysis methods, and hence it has become increasingly difficult for researchers to select an optimal GSE tool based on their particular dataset. Moreover, the majority of GSE analysis methods do not allow researchers to simultaneously compare gene set level results between multiple experimental conditions. The ensemble of gene set enrichment analyses (EGSEA) is a method developed for RNA-sequencing data that combines results from twelve algorithms and calculates collective gene set scores to improve the biological relevance of the highest ranked gene sets. EGSEA's gene set database contains around 25 000 gene sets from sixteen collections. It has multiple visualization capabilities that allow researchers to view gene sets at various levels of granularity. EGSEA has been tested on simulated data and on a number of human and mouse datasets and, based on biologists' feedback, consistently outperforms the individual tools that have been combined. Our evaluation demonstrates the superiority of the ensemble approach for GSE analysis, and its utility to effectively and efficiently extrapolate biological functions and potential involvement in disease processes from lists of differentially regulated genes. EGSEA is available as an R package at http://www.bioconductor.org/packages/EGSEA/ . The gene set collections are available in the R package EGSEAdata from http://www.bioconductor.org/packages/EGSEAdata/ . Contact: monther.alhamdoosh@csl.com.au, mritchie@wehi.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  7. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  8. Segmentation of mouse dynamic PET images using a multiphase level set method

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2010-11-01

    Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity difference among organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of regions with high contrast. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.

  9. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  10. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    PubMed

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to take full advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enables the detection of objects with and without edges, and the 2D histogram information makes the method effective in noisy environments. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  11. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on the hazy transition zone from adipose to glandular tissue. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) designed to capture the classification behavior of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions of dense and fatty tissue on these digital mammograms, an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and incorporated into a level set framework as an additional term controlling the evolution, together with the energy surface designed to reflect the experts' knowledge as well as the regional statistics inside and outside the evolving contour. A subset of 100 digital mammograms, not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary between the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimating mammographic breast density levels.

  12. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention in this line of work, yet it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that uses an integrative approach to deal with this problem. It can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM), and the controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities, and the results confirm its effectiveness for medical image segmentation.

  13. Evaluation of Model Specification, Variable Selection, and Adjustment Methods in Relation to Propensity Scores and Prognostic Scores in Multilevel Data

    ERIC Educational Resources Information Center

    Yu, Bing; Hong, Guanglei

    2012-01-01

    This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…

  14. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field, and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  15. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux embedded within the level set evolution equation which maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models, and there is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.

  16. Skull defect reconstruction based on a new hybrid level set.

    PubMed

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presents a new hybrid level set model that takes into account both global region information, for optimization, and local edge information, for accuracy, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed and the skull prosthesis was produced by rapid prototyping technology. The resulting skull defect prosthesis matched the skull defect well, with excellent individual adaptation.

  17. Data graphing methods, articles of manufacture, and computing devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.

    Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set; displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels; selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution; modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions; and, after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.

  18. Segmentation and texture analysis of structural biomarkers using neighborhood-clustering-based level set in MRI of the schizophrenic brain.

    PubMed

    Latha, Manohar; Kavitha, Ganesan

    2018-02-03

    Schizophrenia (SZ) is a psychiatric disorder that especially affects individuals during their adolescence. There is a need to study the subanatomical regions of the SZ brain on magnetic resonance images (MRI) based on morphometry. In this work, an attempt was made to analyze alterations in structure and texture patterns in images of the SZ brain using the level-set method and Laws texture features. T1-weighted MRI of the brain from the Center of Biomedical Research Excellence (COBRE) database were considered for analysis. Segmentation was carried out using the level-set method. Geometric and Laws texture features were extracted from the segmented brain stem, corpus callosum, cerebellum, and ventricle regions to analyze pattern changes in SZ. The level-set method segmented multiple brain regions with higher similarity and correlation values compared with an optimized method. The geometric features obtained from the corpus callosum and ventricle regions showed significant variation (p < 0.00001) between normal and SZ brains. Laws texture features identified a heterogeneous appearance in the brain stem, corpus callosum, and ventricular regions, and features from the brain stem were correlated with the Positive and Negative Syndrome Scale (PANSS) score (p < 0.005). A framework of geometric and Laws texture features obtained from brain subregions can be used as a supplement for the diagnosis of psychiatric disorders.

  19. A study on the application of topic models to motif finding algorithms.

    PubMed

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as a collection of text documents, and the k-mers contained in them as words, in order to build a correlated topic model (CTM) and iteratively reduce its perplexity. Second, we used the perplexity measurement of CTMs to improve our previous algorithm, based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients at both the nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at the site level and in sensitivity scores, where it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at the nucleotide and site level, and in overall performance at the site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
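    A small sketch of the document analogy described above: each sequence becomes a document whose "words" are its overlapping k-mers, and a topic model is fit to the collection. scikit-learn's LatentDirichletAllocation is used as a stand-in, since a correlated topic model (CTM) is not available in the standard Python libraries; the sequences and parameters are toy values.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        def kmer_document(seq, k=6):
            """A DNA sequence as a space-separated list of overlapping k-mers."""
            return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

        sequences = ["ACGTACGTGGCA", "TTACGTACGTAA", "GGCATTACGTAC"]
        docs = [kmer_document(s) for s in sequences]

        vec = CountVectorizer()                    # k-mers become the vocabulary
        X = vec.fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
        vocab = vec.get_feature_names_out()
        # Top-weighted "words" (k-mers) per topic are candidate motif fragments:
        top = [[vocab[i] for i in comp.argsort()[-3:]] for comp in lda.components_]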

  20. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, when compared with the ground-truth segmentation performed by a radiologist.

  1. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    PubMed

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation for image-guided surgery based on a comparative study of three different segmentation methods. This segmentation is faster than manual segmentation of images, with the advantage that it allows the use of the same patient as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D filtering, whose purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and an anisotropic diffusion filter was finally used. 2) 3D segmentation, a module that compares three different methods: a region growing algorithm, hand-assisted cubic splines, and the level set method. It then proposes a level set approach based on the front propagation method that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization. The new contribution of this work consists of the visualization of the segmented model and its use in pre-surgical planning.

  2. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method

    PubMed Central

    Chen, Meizhu; Li, Jichun; Zhang, Encai

    2017-01-01

    As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed to segment the fundus image automatically in this paper. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on the DRIVE dataset, and an average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on the STARE dataset. The experimental results show that our method is effective and is also robust for some kinds of pathological images compared with traditional level set methods. PMID:28840122
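    A minimal sketch of the signed pressure force ingredient of the SBGFRLS model that the hybrid builds on, assuming global means c1/c2 and Gaussian smoothing of φ as the regularizer; the local-intensity (LBF) part of the proposed hybrid model is omitted, and the constants are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sbgfrls_step(image, phi, alpha=20.0, dt=0.05, sigma=1.0):
            """One explicit SBGFRLS-style update: spf(I) * alpha * |grad(phi)|."""
            inside, outside = phi > 0, phi <= 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[outside].mean() if outside.any() else 0.0
            num = image - 0.5 * (c1 + c2)
            spf = num / (np.abs(num).max() + 1e-12)   # signed pressure in [-1, 1]
            gy, gx = np.gradient(phi)
            phi = phi + dt * alpha * spf * np.hypot(gx, gy)
            return gaussian_filter(phi, sigma)        # smoothing replaces re-init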

  3. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    NASA Astrophysics Data System (ADS)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but requires more computation time; and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (for example, wave propagation to the coast). We compare the results of the two approaches with the two methods. To be able to use both the Monte-Carlo simulation and the design contours method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme values approach when extreme sea levels occur while either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte-Carlo simulation method.

  4. Priority of road maintenance management based on halda reading range on NAASRA method

    NASA Astrophysics Data System (ADS)

    Surbakti, M.; Doan, A.

    2018-02-01

    Road pavement constantly experiences stress and strain due to the traffic load passing over it, which can damage the pavement. Early detection and repair of such damage can prevent more severe deterioration that may develop into pavement failure. A road condition survey is one of the earliest attempts to detect initial pavement damage. Driving comfort is the most important criterion for drivers when assessing road conditions, and it is governed by the level of road surface roughness. One of the methods developed to determine road roughness is measurement by the NAASRA method, in which the roughness of the road is an accumulation of its average unevenness, generally with the halda (distance instrument) set to 100 m readings. With this 100 m setting, however, the final roughness value in some places is too large or too small, which affects the resulting road maintenance priorities. This motivates the present study, which compares halda settings of 50 m and 200 m with the general 100 m setting above. The study uses the International Roughness Index (IRI) to characterize road condition with respect to driving discomfort; IRI scores were obtained from direct field surveys using a NAASRA roughometer. The final results show a significant difference between readings with the halda set to 100 m and readings with it set to 50 m and 200 m. This may lead to differences in handling priorities, which may impact the sustainability of road network maintenance management (Sustainable Road Management).
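    A small sketch of the aggregation issue studied here: the same per-metre roughness trace averaged into 50 m, 100 m, and 200 m sections yields different section values and hence different maintenance priorities. The trace and the flagging threshold are hypothetical.

        import numpy as np

        def section_means(trace, section_m):
            """Average a per-metre roughness trace into fixed-length sections."""
            n = (len(trace) // section_m) * section_m
            return np.asarray(trace[:n], float).reshape(-1, section_m).mean(axis=1)

        rng = np.random.default_rng(1)
        iri = 4.0 + rng.normal(0.0, 2.0, 1000)   # hypothetical 1 km IRI trace, m/km

        for L in (50, 100, 200):
            s = section_means(iri, L)
            # 6 m/km is an illustrative maintenance threshold, not from the paper.
            print(f"{L:>3} m sections: {(s > 6.0).sum()} of {len(s)} flagged")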

  5. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.

    PubMed

    Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo

    2017-01-01

    We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  6. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images are calculated, and the tumor objects are identified among them using these local statistics. Calculating the parameters of a level set method is a challenging task; in the proposed method, the calculation of the different parameters for different types of images is automatic. A basic thresholding value is updated and adjusted automatically for different MR images, and this thresholding value is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlight the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  7. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
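    The localized clustering criterion described above can be written compactly; the following is a reconstruction from the description (notation ours), not a formula copied from the paper. With kernel K centred at y, bias field b, cluster centres c_i, and membership functions M_i defined by the level set,

        E \;=\; \int_{\Omega} \sum_{i=1}^{N} \left( \int_{\Omega}
            K(y - x)\,\bigl| I(x) - b(y)\,c_i \bigr|^{2}\, M_i(x)\, dx \right) dy ,

    so that within each neighborhood the bias enters multiplicatively through b(y) c_i, and integrating over all neighborhood centres y yields the data term of the level set functional.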

  8. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are usually among the state of the art for segmenting medical images; however, traditional implementations are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images: cortical thickness is computed using sphere fitting, and cortical porosity using mathematical morphology operations. Qualitative comparison between the segmentations of the proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.

  9. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term, and one penalty term. A global threshold representing the lower gray boundary of the target object, obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term drives the contours to evolve towards boundaries with high gradients, and the penalty term avoids reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and a localized hybrid level-set method but is also able to extract whole cerebral vessel trees, including the thin vessels.

  10. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  11. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
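    A small sketch of the t-level construction that the decomposition theorems rest on: a fuzzy soft set maps each parameter to a fuzzy subset of objects, and its t-level soft set keeps, per parameter, the objects with membership at least t. The data and threshold are illustrative.

        def t_level_soft_set(fuzzy_soft_set, t):
            """t-level soft set: parameter -> objects with membership >= t."""
            return {param: {obj for obj, mu in fs.items() if mu >= t}
                    for param, fs in fuzzy_soft_set.items()}

        # Hypothetical rating of cell phones over two parameters:
        F = {"camera":  {"phone1": 0.9, "phone2": 0.4, "phone3": 0.7},
             "battery": {"phone1": 0.3, "phone2": 0.8, "phone3": 0.6}}

        print(t_level_soft_set(F, 0.7))
        # e.g. {'camera': {'phone1', 'phone3'}, 'battery': {'phone2'}}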

  12. a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering, and area sorting of regions; 2) the initial value of the SDF (Signed Distance Function) and of the level set is given by Otsu segmentation, based on the difference in SAR reflection between land and sea, and lies close to the coastline; 3) the computational complexity of the continuous transition between the different scales is successfully reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed to extract the coastline, while removing non-coastline bodies and improving the identification precision of the main coastline, thereby automating the process of coastline segmentation.
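    A minimal sketch of the initialization described in point 2: an Otsu threshold separates land from sea, and the signed distance to the resulting mask boundary initializes φ. The preprocessing of real SAR imagery is omitted, and the assumption that land is the brighter class is illustrative.

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from skimage.filters import threshold_otsu

        def otsu_signed_distance(image):
            """Level set initialized as signed distance to the Otsu land mask."""
            land = image > threshold_otsu(image)    # assumes land is brighter
            d_out = distance_transform_edt(~land)   # distance outside the mask
            d_in = distance_transform_edt(land)     # distance inside the mask
            return d_out - d_in                     # phi < 0 inside land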

  13. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, as compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, due to which it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, as compared to the phase field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.

  14. Extending fields in a level set method by solving a biharmonic equation

    NASA Astrophysics Data System (ADS)

    Moroney, Timothy J.; Lusmore, Dylan R.; McCue, Scott W.; McElwain, D. L. Sean

    2017-08-01

    We present an approach for computing extensions of velocities or other fields in level set methods by solving a biharmonic equation. The approach differs from other commonly used approaches to velocity extension because it deals with the interface fully implicitly through the level set function. No explicit properties of the interface, such as its location or the velocity on the interface, are required in computing the extension. These features lead to a particularly simple implementation using either a sparse direct solver or a matrix-free conjugate gradient solver. Furthermore, we propose a fast Poisson preconditioner that can be used to accelerate the convergence of the latter. We demonstrate the biharmonic extension on a number of test problems that serve to illustrate its effectiveness at producing smooth and accurate extensions near interfaces. A further feature of the method is the natural way in which it deals with symmetry and periodicity, ensuring through its construction that the extension field also respects these symmetries.
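    A hedged sketch of the idea on an n-by-n grid: build a discrete Laplacian L, take LᵀL as a discrete biharmonic operator, pin the cells where the field is known (those at or near the interface) by replacing their rows with the identity, and solve the sparse system. The grid setup and the known-cell mask are illustrative; the paper's boundary treatment and fast Poisson preconditioner are not reproduced.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def biharmonic_extend(values, known, n):
            """Extend `values`, given on `known` cells of an n x n grid, by
            solving a discrete biharmonic equation on the remaining cells."""
            I = sp.identity(n, format="csr")
            D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
            L = sp.kron(I, D) + sp.kron(D, I)        # 5-point Laplacian
            A = (L.T @ L).tolil()                    # discrete biharmonic
            b = np.zeros(n * n)
            for i in np.flatnonzero(known.ravel()):  # pin the known values
                A.rows[i], A.data[i] = [i], [1.0]
                b[i] = values.ravel()[i]
            return spsolve(A.tocsr(), b).reshape(n, n)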

  15. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardisty, M.; Gordon, L.; Agarwal, P.

    2007-08-15

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  16. Topology optimization in acoustics and elasto-acoustics via a level-set method

    NASA Astrophysics Data System (ADS)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  17. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    PubMed

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    An intervention designed to increase physical activity should be based on the resources of the health care setting and be acceptable to the subject group. This study was designed to assess and compare the effect of a goal-setting strategy and a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of 1) the goal-setting strategy and 2) the group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Weight, height, body mass index (BMI), waist and hip circumference, and well-being were also compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group and differed significantly between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention, whereas in the group education method only the well-being score improved significantly (P < 0.05). Our study demonstrates the effects of using the goal-setting strategy to boost physical activity, improving well-being and decreasing BMI, waist, and hip circumference.

  18. Judgmental Standard Setting Using a Cognitive Components Model.

    ERIC Educational Resources Information Center

    McGinty, Dixie; Neel, John H.

    A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…

  19. Client Assessment in an Industrial Setting: A Cross-Sectional Method.

    ERIC Educational Resources Information Center

    Yamatani, Hide

    1988-01-01

    Used cross-sectional method for evaluating social service programs in industrial setting to estimate numbers of workers needing social services, levels of program use, and penetration and to examine program outcome. Workers served by social service or employee assistance programs can be examined to determine additional services needed, adequacy of…

  20. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    PubMed

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be effectively corrected. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. A novel method for calculating the dynamic capillary force and correcting the pressure error in micro-tube experiment.

    PubMed

    Wang, Shuoliang; Liu, Pengcheng; Zhao, Hui; Zhang, Yuan

    2017-11-29

    The micro-tube experiment has been used to understand the mechanisms governing microscopic fluid percolation and is extensively used in both microelectromechanical engineering and petroleum engineering. However, the pressure difference measured across the entire experimental setup is not equal to the actual pressure difference across the microtube. Taking into account the additional pressure losses between the outlet of the microtube and the outlet of the entire setup, we propose a new method for predicting the dynamic capillary pressure using the level-set method. We first demonstrate that the level-set method reliably describes microscopic flow by comparing micro-model flow-test results against its predictions. In the proposed approach, the level-set method is applied to predict the pressure distribution along the microtube when the fluids flow at a given rate; the microtube used in the calculation has the same size as the one used in the experiment. From the simulation results, the pressure difference across a curved interface (i.e., the dynamic capillary pressure) can be obtained directly. We also show that the dynamic capillary force must be properly evaluated in the micro-tube experiment in order to obtain the actual pressure difference across the microtube.
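
    The static contribution to the capillary pressure in a circular tube is given by the Young-Laplace relation, and the correction implied by the abstract can be sketched as follows (our symbols, not the authors'): with sigma the interfacial tension, theta the dynamic contact angle, and r the tube radius,

        \begin{equation}
          \Delta p_c = \frac{2\sigma\cos\theta}{r},
          \qquad
          \Delta p_{\text{tube}} = \Delta p_{\text{measured}} - \Delta p_{\text{loss}},
        \end{equation}

    where the loss term collects the additional pressure drops between the microtube outlet and the outlet of the entire setup.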

  2. Battery control system for hybrid vehicle and method for controlling a hybrid vehicle battery

    DOEpatents

    Bockelmann, Thomas R [Battle Creek, MI; Beaty, Kevin D [Kalamazoo, MI; Zou, Zhanijang [Battle Creek, MI; Kang, Xiaosong [Battle Creek, MI

    2009-07-21

    A battery control system for controlling a state of charge of a hybrid vehicle battery includes a detecting arrangement for determining a vehicle operating state or an intended vehicle operating state and a controller for setting a target state of charge level of the battery based on the vehicle operating state or the intended vehicle operating state. The controller is operable to set a target state of charge level at a first level during a mobile vehicle operating state and at a second level during a stationary vehicle operating state or in anticipation of the vehicle operating in the stationary vehicle operating state. The invention further includes a method for controlling a state of charge of a hybrid vehicle battery.
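
    The claimed control logic reduces to selecting between two target state-of-charge levels from the detected operating state. A minimal sketch in Python (names and numeric targets are hypothetical, not from the patent):

        # Hypothetical target SOC levels; the patent does not specify values.
        MOBILE_TARGET_SOC = 0.60       # first level: vehicle in motion
        STATIONARY_TARGET_SOC = 0.80   # second level: stationary or anticipated stop

        def target_soc(is_stationary: bool, stationary_anticipated: bool) -> float:
            """Return the battery target state-of-charge for the current state."""
            if is_stationary or stationary_anticipated:
                return STATIONARY_TARGET_SOC
            return MOBILE_TARGET_SOC

        print(target_soc(is_stationary=False, stationary_anticipated=True))  # 0.8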

  3. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  4. Genetic network inference as a series of discrimination tasks.

    PubMed

    Kimura, Shuhei; Nakayama, Satoshi; Hatakeyama, Mariko

    2009-04-01

    Genetic network inference methods based on sets of differential equations generally require a great deal of time, as the equations must be solved many times. To reduce the computational cost, researchers have proposed other methods for inferring genetic networks by solving sets of differential equations only a few times, or even without solving them at all. When we try to obtain reasonable network models using these methods, however, we must estimate the time derivatives of the gene expression levels with great precision. In this study, we propose a new method to overcome the drawbacks of inference methods based on sets of differential equations. Our method infers genetic networks by obtaining classifiers capable of predicting the signs of the derivatives of the gene expression levels. For this purpose, we defined a genetic network inference problem as a series of discrimination tasks, then solved the defined series of discrimination tasks with a linear programming machine. Our experimental results demonstrated that the proposed method is capable of correctly inferring genetic networks, and doing so more than 500 times faster than the other inference methods based on sets of differential equations. Next, we applied our method to actual expression data of the bacterial SOS DNA repair system. Finally, we demonstrated that our approach relates to the inference method based on the S-system model. Though our method provides no estimation of the kinetic parameters, it should be useful for researchers interested only in the network structure of a target system. Supplementary data are available at Bioinformatics online.

  5. A geometric level set model for ultrasounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarti, A.; Malladi, R.

    We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D + time, 3D, and 3D + time echocardiographic images are shown.
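
    A representative PDE of this geometric family is the level-set form of the geodesic active contour flow (our notation; the paper's exact model may differ in details):

        \begin{equation}
          \frac{\partial u}{\partial t}
          = g\!\left(\lvert \nabla G_\sigma * I \rvert\right)
            \lvert \nabla u \rvert
            \left( \operatorname{div}\frac{\nabla u}{\lvert \nabla u \rvert} + \nu \right)
          + \nabla g \cdot \nabla u,
        \end{equation}

    where g is a decreasing edge-stopping function of the smoothed image gradient, the curvature term regularizes the shapes, and the advection term pulls the contour toward gaps in the edge map.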

  6. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  7. Automatic classification of tissue malignancy for breast carcinoma diagnosis.

    PubMed

    Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo

    2018-05-01

    Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated, first with a 5-fold cross-validation within an initial set of 120 images, second with an external set of 30 different images and third with images with artefacts included. Accuracy levels range from 75.8% when the 5-fold cross-validation was performed to 75% with the external set of new images and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
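
    The classification stage described here, an SVM with a quadratic kernel under 5-fold cross-validation, can be sketched with scikit-learn; the feature extraction is replaced by a random placeholder matrix, so this is illustrative only:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 32))    # 120 images x 32 hypothetical features
        y = rng.integers(0, 4, size=120)  # normal / benign / in situ / invasive

        clf = SVC(kernel="poly", degree=2)              # quadratic kernel
        print(cross_val_score(clf, X, y, cv=5).mean())  # 5-fold accuracy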

  8. Analysis of pressure distortion testing

    NASA Technical Reports Server (NTRS)

    Koch, K. E.; Rees, R. L.

    1976-01-01

    The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.

  9. Application of level set method to optimal vibration control of plate structures

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Khajepour, A.; Esmailzadeh, E.

    2013-02-01

    Vibration control plays a crucial role in many structures, especially lightweight ones. One of the most commonly practiced methods to suppress undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. To preserve the weight efficiency of the structure, the best shapes and locations of the CLD patches should be determined so as to achieve optimum vibration suppression with minimum usage of CLD material. This paper proposes a novel topology optimization technique that determines the best shapes and locations of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, a numerical technique that can track shapes and locations concurrently. The optimal damping layout is found for the structure's fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures are considered, and the damping patches are optimally shaped and located on them. In one example, the numerical results are compared with those obtained from experimental tests to validate the accuracy of the proposed method. This comparison confirms the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.

  10. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    PubMed Central

    Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.

    2013-01-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. To do so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592

  11. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs.

    PubMed

    Mosaliganti, Kishore R; Gelas, Arnaud; Megason, Sean G

    2013-01-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. To do so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo.
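
    The container-based design described in both entries can be sketched generically (this is our simplification, not the actual ITK v4 API): a level-set evolution object holds a container of weighted term callables, and each step sums their contributions.

        import numpy as np

        class LevelSetEvolution:
            def __init__(self):
                self.terms = []                        # container of PDE terms

            def add_term(self, weight, term_fn):
                self.terms.append((weight, term_fn))   # term_fn(phi) -> update field

            def step(self, phi, dt):
                update = np.zeros_like(phi)
                for weight, term_fn in self.terms:     # generic PDE = sum of terms
                    update += weight * term_fn(phi)
                return phi + dt * update

        evolve = LevelSetEvolution()
        evolve.add_term(1.0, lambda phi: np.gradient(phi)[0])  # a toy advection-like term
        phi = evolve.step(np.random.rand(64, 64), dt=0.1)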

  12. Level set formulation of two-dimensional Lagrangian vortex detection methods

    NASA Astrophysics Data System (ADS)

    Hadjighasem, Alireza; Haller, George

    2016-10-01

    We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.

  13. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, a classical technique for generating a quasi-evenly distributed Pareto set in multiobjective optimization. It contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution, rather than generating the whole Pareto frontier on the lower level, in order to improve optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  14. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    PubMed Central

    2012-01-01

    Background: While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The first step is a random forest patch classification operating directly on 2D image patches. The second step consists of contour-pair classification. In the final step, we introduce a method to automatically seed a level set operation with output from the previous steps. Results: We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that the use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone, and that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions: We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest-based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
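
    Step 1 of such a pipeline can be sketched with a random forest over flattened image patches (our simplification; patch size and features are hypothetical):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        patches = rng.normal(size=(1000, 15 * 15))  # flattened 15x15 patches
        labels = rng.integers(0, 2, size=1000)      # 1 = mitochondrion, 0 = background

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(patches, labels)
        prob = clf.predict_proba(patches)[:, 1]     # per-patch probabilities; in the
                                                    # full pipeline these feed contour
                                                    # classification and level-set seeding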

  15. SU-E-J-168: Automated Pancreas Segmentation Based On Dynamic MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gou, S; Rapacchi, S; Hu, P

    2014-06-01

    Purpose: MRI-guided radiotherapy is particularly attractive for abdominal targets with low CT contrast. To fully utilize this modality for pancreas tracking, automated segmentation tools are needed. A hybrid gradient, region growth and shape constraint (hGReS) method to segment 2D upper abdominal dynamic MRI is developed for this purpose. Methods: 2D coronal dynamic MR images of 2 healthy volunteers were acquired with a frame rate of 5 frames/second. The regions of interest (ROIs) included the liver, pancreas and stomach. The first frame was used as the source, where the centers of the ROIs were annotated. These center locations were propagated to the next dynamic MRI frame. 4-neighborhood region transfer growth was performed from these initial seeds for rough segmentation. To improve the results, gradient, edge and shape constraints were applied to the ROIs before final refinement using morphological operations. Results from hGReS and 3 other automated segmentation methods using edge detection, region growth and level set were compared to manual contouring. Results: For the first volunteer, hGReS achieved a segmentation accuracy as measured by the Dice index of 0.77 for the pancreas. This accuracy was slightly superior to the level set method (0.72), and both were significantly more accurate than edge detection (0.53) and region growth (0.42). For the second healthy volunteer, hGReS reliably segmented the pancreatic region, achieving Dice indices of 0.82, 0.92 and 0.93 for the pancreas, stomach and liver, respectively, compared to manual segmentation. Motion trajectories derived from the hGReS, level set and manual segmentation methods showed high correlation to respiratory motion calculated using a lung blood vessel as the reference, while the other two methods showed substantial motion tracking errors. hGReS was 10 times faster than level set. Conclusion: We have shown the feasibility of automated segmentation of the pancreas anatomy based on dynamic MRI.

  16. Applying Mondrian Cross-Conformal Prediction To Estimate Prediction Confidence on Large Imbalanced Bioactivity Data Sets.

    PubMed

    Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming

    2017-07-24

    Conformal prediction has been proposed as a more rigorous way to define prediction confidence than the applicability domain concepts used earlier for QSAR modeling. One main advantage of such a method is that it provides a prediction region, potentially with multiple predicted labels, in contrast to the single-valued (regression) or single-label (classification) output of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active to inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions, unlike standard machine learning methods, but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
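
    The Mondrian (class-conditional) p-value at the heart of MCCP is simple to state: a test score is ranked only against calibration scores of the same class, which is what preserves validity for the minority class. A minimal numpy sketch (ours, not the authors' code):

        import numpy as np

        def mondrian_p_value(test_score, calib_scores, calib_labels, label):
            scores = calib_scores[calib_labels == label]   # same-class calibration set
            return (np.sum(scores >= test_score) + 1) / (len(scores) + 1)

        def prediction_region(test_scores, calib_scores, calib_labels, eps=0.2):
            # include every label whose p-value exceeds the significance level eps
            return [lab for lab, s in test_scores.items()
                    if mondrian_p_value(s, calib_scores, calib_labels, lab) > eps]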

  17. Adaptive Set-Based Methods for Association Testing

    PubMed Central

    Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo

    2017-01-01

    With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371

  18. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    NASA Astrophysics Data System (ADS)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  19. Chemometric analysis of soil pollution data using the Tucker N-way method.

    PubMed

    Stanimirova, I; Zehl, K; Massart, D L; Vander Heyden, Y; Einax, J W

    2006-06-01

    N-way methods, particularly the Tucker method, are often the methods of choice when analyzing data sets arranged in three- (or higher) way arrays, which is the case for most environmental data sets. N-way methods are likely to become an increasingly popular way to uncover hidden information in complex data sets, because classical two-way approaches such as principal component analysis are less able to reveal the complex relationships such data contain. This study describes in detail the application of a chemometric N-way approach, namely the Tucker method, to evaluate the level of pollution in soil from a contaminated site. The analyzed soil data set was five-way in nature. The samples were collected at different depths (way 1) from two locations (way 2), and the levels of thirteen metals (way 3) were analyzed using a four-step sequential extraction procedure (way 4), allowing detailed information to be obtained about the bioavailability and activity of the different binding forms of the metals. Furthermore, the measurements were performed under two conditions (way 5), inert and non-inert. The preferred Tucker model of definite complexity showed that there was no significant difference between measurements analyzed under inert and non-inert conditions. It also allowed two depth horizons, characterized by different accumulation pathways, to be distinguished, and it described in detail the relationships between the chemical elements and their biological activities and mobilities in the soil.
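
    For readers who want to reproduce this kind of analysis, a Tucker decomposition of a multiway array can be sketched with the open-source tensorly package (our illustration on a toy 4-way tensor, not the study's actual five-way data):

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import tucker

        X = tl.tensor(np.random.rand(10, 2, 13, 4))   # depths x locations x metals x steps
        core, factors = tucker(X, rank=[3, 2, 4, 2])  # Tucker model of chosen complexity
        print(core.shape, [f.shape for f in factors])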

  20. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability levels of different system topologies. We then propose a new methodology in which system topologies, constrained to meet a specified reliability level, are automatically generated; the path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
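
    The reliability arithmetic underlying block diagrams and path-set analysis is compact enough to sketch directly (generic formulas, not the thesis' actual topologies):

        from math import prod

        def series(reliabilities):
            # every component in the path must work
            return prod(reliabilities)

        def parallel(reliabilities):
            # the structure fails only if all redundant components fail
            return 1.0 - prod(1.0 - r for r in reliabilities)

        # two redundant generators feeding a bus in series with a distribution unit
        print(series([parallel([0.99, 0.99]), 0.995]))  # ~0.9949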

  1. Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms.

    PubMed

    Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George

    2018-01-01

    Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set is driven by a mesh-free radial basis function (RBF) approach, which removes the limitations of mesh-based schemes; for comparison, the variational level set function is also evolved with a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominated rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) and 500 mammograms from the Digital Database for Screening Mammography (DDSM) datasets. Proficiency of the algorithm is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, while the highest sensitivity of 92.31%, specificity of 98.45%, and accuracy of 96.21% are achieved on the DDSM dataset with the same feature and kernel. Copyright © 2017 John Wiley & Sons, Ltd.
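
    The texture step can be sketched with scikit-image's local binary pattern (parameters here are our choice, and the dominated rotated LBP variant used in the paper is not part of scikit-image):

        import numpy as np
        from skimage.feature import local_binary_pattern

        region = np.random.rand(64, 64)   # placeholder for a segmented suspicious ROI
        lbp = local_binary_pattern(region, P=8, R=1.0, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        # hist is the texture feature vector that would be passed to the SVM classifier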

  2. Numerical Simulation of Dynamic Contact Angles and Contact Lines in Multiphase Flows using Level Set Method

    NASA Astrophysics Data System (ADS)

    Pendota, Premchand

    Many physical phenomena and industrial applications involve multiphase fluid flows, and hence it is of high importance to be able to simulate various aspects of these flows accurately. The dynamic contact angles (DCA) and the contact lines at the wall boundaries are two such important aspects. In the past few decades, many mathematical models were developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in the multiphase CFD codes LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al. (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of the macroscopic interface shape are dealt with in the context of the level set approach.
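
    A common way to impose a prescribed contact angle theta on a level set phi at a wall with unit normal n_w is through a boundary condition on the gradient (a generic formulation; the thesis' exact condition may differ):

        \begin{equation}
          n_w \cdot \nabla\phi = \lvert \nabla\phi \rvert \cos\theta
          \quad \text{on the wall},
        \end{equation}

    which forces the interface normal, nabla phi / |nabla phi|, to meet the wall at the angle theta; dynamic-contact-angle models then make theta a function of the local contact-line speed.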

  3. Salmonella internalization in mung bean sprouts and pre- and postharvest intervention methods in a hydroponic system.

    PubMed

    Ge, Chongtao; Rymut, Susan; Lee, Cheonghoon; Lee, Jiyoung

    2014-05-01

    Mung bean sprouts, typically consumed raw or minimally cooked, are often contaminated with pathogens. Internalized pathogens pose a high risk because conventional sanitization methods are ineffective for their inactivation. The studies were performed (i) to understand the potential for internalization of Salmonella in mung bean sprouts under conditions where the irrigation water was contaminated and (ii) to determine whether pre- and postharvest intervention methods are effective in inactivating the internalized pathogen. Mung bean sprouts were grown hydroponically and exposed to green fluorescent protein-tagged Salmonella Typhimurium through maturity. One experimental set received contaminated water daily, while other sets received contaminated water on a single day at different times. For preharvest intervention, irrigation water was exposed to UV; for postharvest intervention, contaminated sprouts were subjected to a chlorine wash and UV light. Harvested samples were disinfected with ethanol and AgNO3 to differentiate surface-associated pathogens from internalized ones. Internalized Salmonella Typhimurium in each set was quantified using the plate count method and was detected at levels of 2.0 to 5.1 log CFU/g under all conditions. Continuous exposure to contaminated water during the entire period generated significantly higher levels of Salmonella Typhimurium internalization than exposure to contaminated water on only a single day (P < 0.05). The preharvest intervention lowered the level of internalized Salmonella by 1.84 log CFU/g (P < 0.05), whereas the postharvest interventions were ineffective in eliminating internalized pathogens. The preharvest intervention did not completely inactivate bacteria in the sprouts, and the Salmonella Typhimurium remaining in the water became more resistant to UV. Because postharvest intervention methods are ineffective, proper procedures for maintaining clean irrigation water must be followed throughout production in a hydroponic system.

  4. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
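
    The workhorse of this literature is two-point extrapolation of the correlation energy. Assuming the standard X^{-3} convergence of correlation-consistent basis sets with cardinal numbers X < Y (the generic scheme, which the protocols reviewed here refine):

        \begin{equation}
          E_\infty \approx \frac{Y^{3} E_{Y} - X^{3} E_{X}}{Y^{3} - X^{3}},
        \end{equation}

    so that, for example, cc-pVTZ/cc-pVQZ energies (X = 3, Y = 4) yield an estimate of the complete-basis-set limit at a fraction of the cost of larger basis sets.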

  5. Application of short-data methods on extreme surge levels

    NASA Astrophysics Data System (ADS)

    Feng, X.

    2014-12-01

    Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets spanning less than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. Verified water levels from gauges along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict extreme surge levels under warming sea surface temperatures; it indicates that with 1 °C of sea surface temperature warming from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The statistical approaches considered for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and those operating hurricane and storm surge forecasting and warning systems.
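
    A stationary GEV fit and return-level estimate of the kind reviewed here can be sketched with SciPy (illustrative data; the study also uses the GPD and non-stationary variants):

        import numpy as np
        from scipy.stats import genextreme

        annual_maxima = np.array([0.9, 1.2, 0.8, 1.5, 1.1, 1.7,
                                  1.0, 1.3, 0.95, 1.25, 1.6, 1.05])  # surge maxima (m)
        shape, loc, scale = genextreme.fit(annual_maxima)
        level_100yr = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
        print(level_100yr)   # 100-year return level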

  6. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
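
    The log-probability regression method lends itself to a short sketch (our implementation of the description above): uncensored log concentrations are regressed on their normal scores, and the censored ranks are filled from the fitted line.

        import numpy as np
        from scipy.stats import norm

        # concentrations sorted ascending; NaN marks values below the detection limit
        conc = np.array([np.nan, np.nan, 2.1, 3.4, 5.0, 7.2, 11.0])
        n, n_cens = len(conc), int(np.isnan(conc).sum())

        pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # Blom plotting positions (one common choice)
        z = norm.ppf(pp)                                 # normal scores for all ranks

        slope, intercept = np.polyfit(z[n_cens:], np.log(conc[n_cens:]), 1)
        filled = conc.copy()
        filled[:n_cens] = np.exp(intercept + slope * z[:n_cens])  # zero-to-DL portion
        print(filled.mean(), filled.std(ddof=1))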

  7. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  8. Mapping Topographic Structure in White Matter Pathways with Level Set Trees

    PubMed Central

    Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy

    2014-01-01

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and for an efficient segmentation of the pathways with empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673

  9. An Empirical Comparison of Five Linear Equating Methods for the NEAT Design

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    In this study, a data base containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…

  10. Variation in passing standards for graduation-level knowledge items at UK medical schools.

    PubMed

    Taylor, Celia A; Gurnell, Mark; Melville, Colin R; Kluth, David C; Johnson, Neil; Wass, Val

    2017-06-01

    Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common 'one from five' single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences. A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013-2014; 60 in 2014-2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard for the common items set by each medical school using the Angoff or Ebel methods. Of 31 invited medical schools, 22 participated in 2013-2014 (71%) and 30 (97%) in 2014-2015. Schools used a mean of 49 and 53 common items in 2013-2014 and 2014-2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013-2014 and 0.218 in 2014-2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013-2014 and 6.5 percentage points in 2014-2015. There was a positive correlation between the relative standards set by schools in the 2 years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of examination in the curriculum did not have a statistically significant impact on standards. Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and develop methods to reduce it. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  11. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    PubMed

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism for short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism through characterizing the trajectories of the signal components which are obtained through singular value decomposition. Robustness of the method is shown by calculating proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal. The method is shown to be able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is enough evidence of determinism. The determinism is more significant for intracranial EEG recordings particularly during seizure activity.

  12. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    PubMed

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than from CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied, followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage was followed by a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected in the preceding steps were merged and subjected to a series of morphological processes to complete the definition of the mandibular body region. The accuracy of segmentation was compared between the two-stage approach, the conventional region growing (CRG) method, the 3D level set method, and manual segmentation using the Jaccard index, the Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and the 3D level set method. Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach.

  13. Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2018-01-01

    Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. Additional studies emphasize the importance of appropriate bulk constitutive models and of sufficient spatial resolution of the length scale.

  15. Standard Setting Lessons Learned in the South African Context: Implications for International Implementation

    ERIC Educational Resources Information Center

    Pitoniak, Mary J.; Yeld, Nan

    2013-01-01

    Criterion-referenced assessments have become more common around the world, with performance standards being set to differentiate different levels of student performance. However, use of standard setting methods developed in the United States may be complicated by factors related to the political and educational contexts within another country. In…

  16. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    ERIC Educational Resources Information Center

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  17. Automated defect spatial signature analysis for semiconductor manufacturing process

    DOEpatents

    Tobin, Jr., Kenneth W.; Gleason, Shaun S.; Karnowski, Thomas P.; Sari-Sarraf, Hamed

    1999-01-01

    An apparatus and method for performing automated defect spatial signature analysis on a data set representing defect coordinates and wafer processing information includes categorizing data from the data set into a plurality of high level categories, classifying the categorized data contained in each high level category into user-labeled signature events, and correlating the categorized, classified signature events to a present or incipient anomalous process condition.

  18. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    PubMed

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
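
    The shortcut-connection idea can be sketched in a few lines; the toy model below uses hypothetical layer sizes and channel counts (not the authors' published architecture) to show a 3D convolutional encoder-decoder whose low-level encoder features are summed into the deconvolutional pathway before the voxel-wise prediction:

    ```python
    # Minimal sketch (assumed sizes, not the paper's network): a 3D
    # encoder-decoder with one shortcut that sums low-level features
    # into the deconvolutional pathway.
    import torch
    import torch.nn as nn

    class TinyEncDec3D(nn.Module):
        def __init__(self, in_ch=1, feat=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU())
            self.down = nn.Conv3d(feat, 2 * feat, 3, stride=2, padding=1)
            self.up = nn.ConvTranspose3d(2 * feat, feat, 2, stride=2)
            self.head = nn.Conv3d(feat, 1, 1)           # voxel-wise lesion score

        def forward(self, x):
            e = self.enc(x)                             # low-level features
            d = self.up(self.down(e))                   # high-level pathway
            return torch.sigmoid(self.head(d + e))      # shortcut: sum features

    out = TinyEncDec3D()(torch.randn(1, 1, 16, 16, 16))  # -> (1, 1, 16, 16, 16)
    ```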

  19. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
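
    The adaptive idea is compact enough to sketch. The code below is a simplified rank-truncated-product variant, not the exact published ARTP: it assumes permutation-null p-value matrices are supplied (here simulated as uniform), sums the logs of the k smallest p-values at several truncation points, and uses the same permutations both to obtain per-k p-values and to calibrate the minimum over k:

    ```python
    # Hedged ARTP-style sketch; details of the published algorithm differ.
    import numpy as np

    def artp(p_obs, p_perm, ks=(1, 5, 10)):
        """p_obs: (m,) observed SNP p-values; p_perm: (B, m) p-values
        recomputed under B permutations of the phenotype."""
        allp = np.vstack([p_obs, p_perm])              # row 0 = observed
        logs = np.log(np.sort(allp, axis=1))
        stats = np.stack([logs[:, :k].sum(axis=1) for k in ks], axis=1)
        # per-k permutation p-value of each row (more negative = stronger)
        pk = (stats[None, :, :] <= stats[:, None, :]).mean(axis=1)
        best = pk.min(axis=1)                          # adapt over k
        return (best[1:] <= best[0]).mean()            # overall p-value

    rng = np.random.default_rng(0)
    print(artp(rng.uniform(size=100) ** 2, rng.uniform(size=(500, 100))))
    ```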

  20. Spectral gene set enrichment (SGSE).

    PubMed

    Frost, H Robert; Li, Zhigang; Moore, Jason H

    2015-03-03

    Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and sample PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
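
    The final combination step is a weighted Stouffer (Z-method) calculation, sketched below under the assumption that the PC-level p-values (from PCGSE) and the weights (PC variances scaled by Tracy-Widom p-values) have already been computed:

    ```python
    # Sketch of the weighted Z-method step only; computing PCGSE p-values
    # and Tracy-Widom-scaled weights is not shown.
    import numpy as np
    from scipy.stats import norm

    def weighted_z(pc_pvals, weights):
        z = norm.isf(np.asarray(pc_pvals))       # PC-level p-values -> Z-scores
        w = np.asarray(weights)
        z_comb = (w * z).sum() / np.sqrt((w ** 2).sum())
        return norm.sf(z_comb)                   # combined gene-set p-value

    print(weighted_z([0.01, 0.20, 0.60], [5.2, 2.1, 0.4]))
    ```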

  1. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. A total of 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data on theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed-effect and random-effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
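
    For reference, one common definition of the two error metrics is sketched below (an assumption on my part; the paper may scale or summarize them differently):

    ```python
    # Hedged sketch of relative estimation error (REE) and relative RMSE.
    import numpy as np

    def ree(est, true):            # relative estimation error, in percent
        return 100.0 * (np.asarray(est) - true) / true

    def rrmse(est, true):          # relative root mean squared error, in percent
        return 100.0 * np.sqrt(np.mean(((np.asarray(est) - true) / true) ** 2))

    # e.g., clearance estimates from 100 simulated data sets vs. a true value
    est = np.random.default_rng(1).normal(2.8, 0.4, 100)
    print(rrmse(est, 2.8), ree(est, 2.8).mean())
    ```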

  2. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    PubMed

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol(-1) at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol(-1) dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol(-1) lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol(-1) lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol(-1) thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  3. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    NASA Astrophysics Data System (ADS)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol-1 at the CCSD(T)/6-31G∗ level of theory, has a 4 kJ mol-1 dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol-1 lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol-1 lower in energy at the CCSD(T)/6-31G∗ level of theory. Ideally, for sub-kJ mol-1 thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  4. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschamps, T; Schwartz, P; Trebotich, D

    In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.

  5. Distributional fold change test – a statistical approach for detecting differential expression in microarray experiments

    PubMed Central

    2012-01-01

    Background Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in expression data treatment. Results A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression, and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves, tested on 11 data sets from the Gene Expression Omnibus database with independently verified differentially expressed genes, and compared with the t-test and shrinkage t-test. Overall, the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets. Conclusions The distributional fold change test is an effective method for finding and ranking differentially expressed probesets on microarrays. The application of this test is advantageous to data sets using formalin-fixed paraffin-embedded samples or other systems where degradation effects diminish the applicability of correlation-adjusted methods to the whole feature set. PMID:23122055

  6. Application Guide for Heat Recovery Incinerators.

    DTIC Science & Technology

    1986-02-01

    of the absorption cycle to vaporize the refrigerant, typically aqueous ammonia. The refrigerant then follows the typical refrigeration cycle...this third level of iteration, the information gathered in level II should be updated if necessary and verified. Use the NCEL survey method (see...and quantity of the solid waste can be determined by applying procedures set forth in Appendix B. For level III, NCEL has developed a survey method

  7. Efficient hyperspectral image segmentation using geometric active contour formulation

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.

    2014-10-01

    In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM), an unsupervised neural network, to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) from the SOM map. In addition, we use Gaussian filtering to penalize the deviation of the level set function from a signed distance function, which removes the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which reduces dimensionality and selects the most significant spectral bands. Then the modified geometric level set functional based ACM is applied on the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is able to keep the level set function close to a signed distance function, and therefore considerably removes the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm in varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
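
    Two ingredients of this pipeline are easy to sketch in isolation: PCA-based band reduction of the hyperspectral cube, and Gaussian filtering of the level set function in place of re-initialization. The snippet below is illustrative only (hypothetical array shapes; the SOM/BMU energy term that actually drives the evolution is omitted):

    ```python
    # Hedged sketch: PCA band reduction + Gaussian-smoothed level set.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import PCA

    cube = np.random.rand(64, 64, 120)              # H x W x spectral bands
    flat = PCA(n_components=5).fit_transform(cube.reshape(-1, 120))
    bands = flat.reshape(64, 64, 5)                 # top principal-component bands

    phi = np.random.rand(64, 64) - 0.5              # initial level set function
    for _ in range(100):
        # ... evolve phi by the (omitted) ACM energy gradient on `bands` ...
        phi = gaussian_filter(phi, sigma=1.0)       # keeps phi smooth and close
                                                    # to a signed distance function
    ```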

  8. Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety.

    PubMed

    Waterson, Patrick; Robertson, Michelle M; Cooke, Nancy J; Militello, Laura; Roth, Emilie; Stanton, Neville A

    2015-01-01

    An important part of the application of sociotechnical systems theory (STS) is the development of methods, tools and techniques to assess human factors and ergonomics workplace requirements. We focus in this paper on describing and evaluating current STS methods for workplace safety, as well as outlining a set of six case studies covering the application of these methods to a range of safety contexts. We also describe an evaluation of the methods in terms of ratings of their ability to address a set of theoretical and practical questions (e.g. the degree to which methods capture static/dynamic aspects of tasks and interactions between system levels). The outcomes from the evaluation highlight a set of gaps relating to the coverage and applicability of current methods for STS and safety (e.g. coverage of external influences on system functioning; method usability). The final sections of the paper describe a set of future challenges, as well as some practical suggestions for tackling these. We provide an up-to-date review of STS methods, a set of case studies illustrating their use and an evaluation of their strengths and weaknesses. The paper concludes with a 'roadmap' for future work.

  9. Accurate Methods for Large Molecular Systems (Preprint)

    DTIC Science & Technology

    2009-01-06

    tensor, EFP calculations are basis set dependent. The smallest recommended basis set is 6-31++G(d,p)...The dependence of the computational cost of...and second-order perturbation theory (MP2) levels with the 6-31G(d,p) basis set. Additional SFM tests are presented for a small set of alpha-helices using the 6-31++G(d,p) basis set. The larger 6-311++G(3df,2p) basis set is employed for creating all EFPs used for non-bonded interactions, since

  10. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

    An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1%+/-2.3% and 83.6%+/-7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  11. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that helps the level set function remain close to a signed distance function, thereby removing the need for the expensive re-initialization procedure. The level set method is then applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy, and diagnosis of various organs, tumors, and abnormalities. It benefits the patient through speedier and more decisive disease control with fewer side effects. The geometrical shape, the tumor's size, and abnormal tissue growth can be calculated by segmentation of the image. Automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation of an image is correlated based on texture features, making it automatic and more effective. There is no initialization of parameters, and the approach works like an intelligent system. It segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  12. Change detection of polarimetric SAR images based on the KummerU Distribution

    NASA Astrophysics Data System (ADS)

    Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping

    2014-11-01

    In the field of PolSAR image segmentation, change detection, and classification, the classical Wishart distribution has long been used, but it is especially suited to low-resolution SAR images, because with traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models should therefore be reconsidered for the high resolution and polarimetric information contained in images acquired by these advanced systems. In this study, SAR image segmentation based on the level-set method with distance regularized level-set evolution (DRLSE) is performed using Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at high-resolution cells. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied in the latter to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time-series images is carried out in the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood disaster occurred during the summer of 2013; results show that the recommended segmentation method can detect changes in the watershed effectively.

  13. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.

  14. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
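
    The narrow-band criterion itself is one line; the sketch below (illustrative, not the authors' AMR implementation) flags coarse Cartesian cells within the band |φ| ≤ δ for refinement:

    ```python
    # Minimal narrow-band refinement flag around the zero level set.
    import numpy as np

    x, y = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65))
    phi = np.sqrt(x**2 + y**2) - 0.5        # zero contour: circle of radius 0.5
    delta = 4 * (2.0 / 64)                  # band half-width: a few coarse cells
    refine = np.abs(phi) <= delta           # cells to cover with finer meshes
    print(refine.sum(), "of", refine.size, "cells flagged")
    ```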

  15. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization, together with value-added surface area and volume measurements, for brain medical image analysis. The proposed method contains edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
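
    The measurement step can be approximated with standard tools, as in the hedged sketch below (a toy object and scikit-image's stock marching cubes; the paper's modified marching cubes and surface-integration formulas are not reproduced here):

    ```python
    # Illustrative surface-area and volume measurement of a 3D mask.
    import numpy as np
    from skimage import measure

    mask = np.zeros((32, 32, 32))
    mask[8:24, 8:24, 8:24] = 1.0                     # toy 3D object of interest
    verts, faces, _, _ = measure.marching_cubes(mask, level=0.5)
    area = measure.mesh_surface_area(verts, faces)   # surface area (voxel units^2)
    volume = mask.sum()                              # voxel-count volume estimate
    print(area, volume)
    ```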

  16. Neural modeling and functional neuroimaging.

    PubMed

    Horwitz, B; Sporns, O

    1994-01-01

    Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists, whose answers would be of value and interest to those who perform functional neuroimaging. The first set consists of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will further our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This article is a US Government work and, as such, is in the public domain in the United States of America.

  17. Radio frequency tank eigenmode sensor for propellant quantity gauging

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A. (Inventor)

    2013-01-01

    A method for measuring the quantity of fluid in a tank may include the steps of selecting a match between a measured set of electromagnetic eigenfrequencies and a simulated plurality of sets of electromagnetic eigenfrequencies using a matching algorithm, wherein the match is one simulated set of electromagnetic eigenfrequencies from the simulated plurality of sets of electromagnetic eigenfrequencies, and determining the fill level of the tank based upon the match.
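
    One plausible reading of the matching step is a least-squares comparison of sorted eigenfrequency lists, sketched below with hypothetical data (the claim does not mandate this particular metric or algorithm):

    ```python
    # Hedged sketch: pick the simulated fill level whose eigenfrequencies
    # best match the measured set, in the least-squares sense.
    import numpy as np

    def match_fill_level(measured, simulated_by_level):
        """simulated_by_level: dict {fill_level: array of eigenfrequencies}."""
        def cost(sim):
            return np.sum((np.sort(measured) - np.sort(sim)) ** 2)
        return min(simulated_by_level, key=lambda lvl: cost(simulated_by_level[lvl]))

    sims = {0.25: np.array([1.10, 1.42, 1.80]),      # GHz, hypothetical
            0.50: np.array([1.05, 1.35, 1.71])}
    print(match_fill_level(np.array([1.06, 1.36, 1.70]), sims))   # -> 0.5
    ```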

  18. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    PubMed Central

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2016-01-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimates of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of a Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing molecular dynamics studies. Our work is a first step toward the inclusion of fluctuations in the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach. PMID:27497546

  19. A level set method for determining critical curvatures for drainage and imbibition.

    PubMed

    Prodanović, Masa; Bryant, Steven L

    2006-12-15

    An accurate description of the mechanics of pore level displacement of immiscible fluids could significantly improve the predictions from pore network models of capillary pressure-saturation curves, interfacial areas and relative permeability in real porous media. If we assume quasi-static displacement, at constant pressure and surface tension, pore scale interfaces are modeled as constant mean curvature surfaces, which are not easy to calculate. Moreover, the extremely irregular geometry of natural porous media makes it difficult to evaluate surface curvature values and corresponding geometric configurations of two fluids. Finally, accounting for the topological changes of the interface, such as splitting or merging, is nontrivial. We apply the level set method for tracking and propagating interfaces in order to robustly handle topological changes and to obtain geometrically correct interfaces. We describe a simple but robust model for determining critical curvatures for throat drainage and pore imbibition. The model is set up for quasi-static displacements but it nevertheless captures both reversible and irreversible behavior (Haines jump, pore body imbibition). The pore scale grain boundary conditions are extracted from model porous media and from imaged geometries in real rocks. The method gives quantitative agreement with measurements and with other theories and computational approaches.

  20. A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B.B.; Ertekin, R.C.; College of Shipbuilding Engineering, Harbin Engineering University, 150001 Harbin

    2015-02-15

    This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green–Naghdi (GN) equations and the Irrotational Green–Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green–Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different, but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.

  1. A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations

    NASA Astrophysics Data System (ADS)

    Zhao, B. B.; Ertekin, R. C.; Duan, W. Y.

    2015-02-01

    This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green-Naghdi (GN) equations and the Irrotational Green-Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green-Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different, but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.

  2. High-Resolution Regional Reanalysis in China: Evaluation of 1 Year Period Experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Pan, Yinong; Wang, Shuyu; Xu, Jianjun; Tang, Jianping

    2017-10-01

    Globally, reanalysis data sets are widely used in assessing climate change, validating numerical models, and understanding the interactions between the components of a climate system. However, due to their relatively coarse resolution, most global reanalysis data sets are not suitable for direct application at the local and regional scales, with inadequate descriptions of mesoscale systems and climatic extreme incidents such as mesoscale convective systems, squall lines, tropical cyclones, regional droughts, and heat waves. In this study, by using the Gridpoint Statistical Interpolation data assimilation system and the Weather Research and Forecast mesoscale atmospheric model, we build a regional reanalysis system. This is a preliminary and first experimental attempt to construct a high-resolution reanalysis for mainland China. Four regional test bed data sets are generated for the year 2013 via three widely used methods (classical dynamical downscaling, spectral nudging, and data assimilation) and a hybrid method with data assimilation coupled with spectral nudging. Temperature at 2 m, precipitation, and upper-level atmospheric variables are evaluated by comparing against observations in year-long tests. It can be concluded that the regional reanalysis with assimilation and nudging methods can better reproduce the atmospheric variables from the surface to upper levels, and regional extreme events such as heat waves, than classical dynamical downscaling. Compared to the ERA-Interim global reanalysis, the hybrid nudging method performs slightly better in reproducing upper-level temperature and low-level moisture over China, which improves the regional reanalysis data quality.

  3. Using the U.S. Geological Survey National Water Quality Laboratory LT-MDL to Evaluate and Analyze Data

    USGS Publications Warehouse

    Bonn, Bernadine A.

    2008-01-01

    A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples is included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.

  4. Gene regulatory network inference using fused LASSO on multiple data sets

    PubMed Central

    Omranian, Nooshin; Eloundou-Mbebi, Jeanne M. O.; Mueller-Roeber, Bernd; Nikoloski, Zoran

    2016-01-01

    Devising computational methods to accurately reconstruct gene regulatory networks given gene expression data is key to systems biology applications. Here we propose a method for reconstructing gene regulatory networks by simultaneous consideration of data sets from different perturbation experiments and corresponding controls. The method imposes three biologically meaningful constraints: (1) expression levels of each gene should be explained by the expression levels of a small number of transcription factor coding genes, (2) networks inferred from different data sets should be similar with respect to the type and number of regulatory interactions, and (3) relationships between genes which exhibit similar differential behavior over the considered perturbations should be favored. We demonstrate that these constraints can be transformed in a fused LASSO formulation for the proposed method. The comparative analysis on transcriptomics time-series data from prokaryotic species, Escherichia coli and Mycobacterium tuberculosis, as well as a eukaryotic species, mouse, demonstrated that the proposed method has the advantages of the most recent approaches for regulatory network inference, while obtaining better performance and assigning higher scores to the true regulatory links. The study indicates that the combination of sparse regression techniques with other biologically meaningful constraints is a promising framework for gene regulatory network reconstructions. PMID:26864687
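
    A generic fused-LASSO objective consistent with the three constraints can be written as follows (assumed notation, not necessarily the paper's exact formulation: Y_k is expression in data set k, X_k the transcription-factor expression, and B_k the regulatory coefficients; the first penalty enforces sparsity, the second fuses networks across data sets):

    ```latex
    \min_{B_1,\dots,B_K} \;
      \sum_{k=1}^{K} \lVert Y_k - X_k B_k \rVert_2^2
      \;+\; \lambda_1 \sum_{k=1}^{K} \lVert B_k \rVert_1
      \;+\; \lambda_2 \sum_{k < k'} \lVert B_k - B_{k'} \rVert_1
    ```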

  5. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    The development of ground-based, large-aperture solar telescopes with adaptive optics (AO) has brought increasing resolving ability, so more accurate sunspot identification and characterization are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to high-resolution solar TiO 705.7-nm images taken by the 151-element AO system and the Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
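
    For intuition, a two-stage thresholding in the spirit of this pipeline can be sketched as below (hypothetical data, with Otsu thresholds standing in for the paper's level-set clustering and adaptive threshold):

    ```python
    # Hedged sketch: separate sunspots from granulation, then umbra from
    # penumbra, by two successive intensity thresholds.
    import numpy as np
    from skimage.filters import threshold_otsu

    img = np.random.rand(256, 256)                     # stand-in photospheric image
    spot = img < threshold_otsu(img)                   # sunspots are darker
    umbra = spot & (img < threshold_otsu(img[spot]))   # darkest core within spots
    penumbra = spot & ~umbra
    print(spot.sum(), umbra.sum(), penumbra.sum())
    ```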

  6. Benchmarking Hydrogen and Carbon NMR Chemical Shifts at HF, DFT, and MP2 Levels.

    PubMed

    Flaig, Denis; Maurer, Marina; Hanni, Matti; Braunger, Katharina; Kick, Leonhard; Thubauville, Matthias; Ochsenfeld, Christian

    2014-02-11

    An extensive study of error distributions for calculating hydrogen and carbon NMR chemical shifts at Hartree-Fock (HF), density functional theory (DFT), and Møller-Plesset second-order perturbation theory (MP2) levels is presented. Our investigation employs accurate CCSD(T)/cc-pVQZ calculations for providing reference data for 48 hydrogen and 40 carbon nuclei within an extended set of chemical compounds covering a broad range of the NMR scale with high relevance to chemical applications, especially in organic chemistry. Besides the approximations of HF, a variety of DFT functionals, and conventional MP2, we also present results with respect to a spin component-scaled MP2 (GIAO-SCS-MP2) approach. For each method, the accuracy is analyzed in detail for various basis sets, allowing identification of efficient combinations of method and basis set approximations.

  7. Measuring and Specifying Combinatorial Coverage of Test Input Configurations

    PubMed Central

    Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu

    2015-01-01

    A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
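
    For the simplest case t = 2, (static) combinatorial coverage reduces to counting which pairwise parameter-value combinations appear in some test; the sketch below illustrates that definition (simplified; the NIST measures also include variable-strength variants):

    ```python
    # Pairwise (2-way) combinatorial coverage of a test set.
    from itertools import combinations

    def pairwise_coverage(tests, values_per_param):
        k = len(values_per_param)
        covered, total = set(), 0
        for i, j in combinations(range(k), 2):
            total += len(values_per_param[i]) * len(values_per_param[j])
            covered |= {(i, j, t[i], t[j]) for t in tests}
        return len(covered) / total

    tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # 3 binary parameters
    print(pairwise_coverage(tests, [(0, 1)] * 3))          # -> 1.0, a covering array
    ```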

  8. Using rewards and penalties to obtain desired subject performance

    NASA Technical Reports Server (NTRS)

    Cook, M.; Jex, H. R.; Stein, A. C.; Allen, R. W.

    1981-01-01

    The use of operant conditioning procedures, specifically negative reinforcement, in achieving stable learning behavior is described. The critical tracking test (CTT), a method of detecting human operator impairment, was tested. A pass level is set for each subject, based on that subject's asymptotic skill level while sober. It is critical that complete training take place before the individualized pass level is set in order that impairment can be detected. The results provide a more general basis for the application of reward/penalty structures in manual control research.

  9. Computational studies of metal-metal and metal-ligand interactions

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled-pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. A detailed comparison of the properties of free CO is therefore given, at both the MCPF and CCSD/CCSD(T) levels of treatment, using a variety of basis sets. With very large one-particle basis sets, the CCSD(T) method gives excellent results for the bond distance, dipole moment and harmonic frequency of free CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used here and in a previous study. Calculations using larger basis sets reduced the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. The remaining discrepancy between the experimental and theoretical total binding energy of Cr(CO)6 is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular, an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.

  10. Electronic and spectroscopic characterizations of SNP isomers

    NASA Astrophysics Data System (ADS)

    Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.

    2018-02-01

    High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ~2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and a large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN and P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters were derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar and core-valence effects. For the linear isomers, the rovibrational spectra were deduced after generation of their 3D potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.

  11. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
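
    A basic perturbed-observation EKI step is short enough to sketch; the toy below (my own minimal variant, with a made-up forward map) shows the derivative-free update in which parameterization choices such as level sets or hierarchical length-scales would simply change what the vector u encodes:

    ```python
    # Minimal ensemble Kalman inversion sketch for y = G(u) + noise.
    import numpy as np

    def eki_step(U, G, y, Gamma, rng):
        """U: (J, d) parameter ensemble; G: forward map R^d -> R^m."""
        W = np.array([G(u) for u in U])                  # (J, m) predictions
        du, dw = U - U.mean(0), W - W.mean(0)
        Cuw = du.T @ dw / (len(U) - 1)                   # cross-covariance (d, m)
        Cww = dw.T @ dw / (len(U) - 1)                   # prediction covariance
        Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, len(U))
        return U + (Y - W) @ np.linalg.solve(Cww + Gamma, Cuw.T)

    rng = np.random.default_rng(0)
    G = lambda u: np.array([u[0] + u[1], u[0] * u[1]])   # toy forward map
    U = rng.normal(size=(50, 2))
    for _ in range(10):
        U = eki_step(U, G, np.array([3.0, 2.0]), 0.01 * np.eye(2), rng)
    print(U.mean(0))           # ensemble mean nears a solution of G(u) = y
    ```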

  12. Sulfate radical oxidation of aromatic contaminants: a detailed assessment of density functional theory and high-level quantum chemical methods.

    PubMed

    Pari, Sangavi; Wang, Inger A; Liu, Haizhou; Wong, Bryan M

    2017-03-22

    Advanced oxidation processes that utilize highly oxidative radicals are widely used in water reuse treatment. In recent years, the application of the sulfate radical (SO4•−) as a promising oxidant for water treatment has gained increasing attention. To understand the efficiency of SO4•− in the degradation of organic contaminants in wastewater effluent, it is important to be able to predict the reaction kinetics of various SO4•−-driven oxidation reactions. In this study, we utilize density functional theory (DFT) and high-level wavefunction-based methods (including computationally-intensive coupled cluster methods) to explore the activation energies of SO4•−-driven oxidation reactions on a series of benzene-derived contaminants. These high-level calculations encompass a wide set of reactions, including 110 forward/reverse reactions and 5 different computational methods in total. Based on the high-level coupled-cluster quantum calculations, we find that the popular M06-2X DFT functional is significantly more accurate for OH− additions than for SO4•− reactions. Most importantly, we highlight some of the limitations and deficiencies of other computational methods, and we recommend the use of high-level quantum calculations to spot-check environmental chemistry reactions that may lie outside the training set of the M06-2X functional, particularly for water oxidation reactions that involve SO4•− and other inorganic species.

  13. Comparison of two methods of standard setting: the performance of the three-level Angoff method.

    PubMed

    Jalili, Mohammad; Hejri, Sara M; Norcini, John J

    2011-12-01

    Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
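    For concreteness, the modified Angoff cut-score arithmetic described above reduces to averaging the judges' borderline-pass probabilities over stations and then over judges. A toy example with hypothetical ratings (two judges, three stations, for brevity):

```python
import numpy as np

# p[j, s]: judge j's estimated probability that a borderline examinee
# passes station s (hypothetical modified Angoff ratings)
p = np.array([[0.55, 0.40, 0.62],
              [0.50, 0.45, 0.58]])
cut_score = p.mean(axis=1).mean() * 100  # percent cut-score for the whole OSCE
print(f"{cut_score:.1f}%")               # 51.7%
```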

  14. Simultaneous confidence sets for several effective doses.

    PubMed

    Tompsett, Daniel M; Biedermann, Stefanie; Liu, Wei

    2018-04-03

    Construction of simultaneous confidence sets for several effective doses currently relies on inverting the Scheffé type simultaneous confidence band, which is known to be conservative. We develop novel methodology to make the simultaneous coverage closer to its nominal level, for both two-sided and one-sided simultaneous confidence sets. Our approach is shown to be considerably less conservative than the current method, and is illustrated with an example on modeling the effect of smoking status and serum triglyceride level on the probability of the recurrence of a myocardial infarction. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Conventional and Explicitly Correlated ab Initio Benchmark Study on Water Clusters: Revision of the BEGDB and WATER27 Data Sets.

    PubMed

    Manna, Debashree; Kesharwani, Manoj K; Sylvetsky, Nitai; Martin, Jan M L

    2017-07-11

    Benchmark ab initio energies for the BEGDB and WATER27 data sets have been re-examined at the MP2 and CCSD(T) levels with both conventional and explicitly correlated (F12) approaches. The basis set convergence of both conventional and explicitly correlated methods has been investigated in detail, both with and without counterpoise corrections. The MP2 and CCSD-MP2 contributions converge much more rapidly with the basis set when explicitly correlated methods are used than with conventional methods. However, conventional, orbital-based calculations are preferred for the calculation of the (T) term, since it does not benefit from F12. For the CCSD-MP2 term, CCSD(F12*) converges somewhat faster with the basis set than CCSD-F12b. The performance of various DFT methods is also evaluated for the BEGDB data set, and the results show that Head-Gordon's ωB97X-V and ωB97M-V functionals outperform all other DFT functionals. Counterpoise-corrected DSD-PBEP86 and raw DSD-PBEPBE-NL also perform well and are close to MP2 results. In the WATER27 data set, the anionic (deprotonated) water clusters exhibit unacceptably slow basis set convergence with the regular cc-pVnZ-F12 basis sets, which have only diffuse s and p functions. To overcome this, we have constructed modified basis sets, denoted aug-cc-pVnZ-F12 or aVnZ-F12, which have been augmented with diffuse functions on the higher angular momenta. The calculated final dissociation energies of the BEGDB and WATER27 data sets are available in the Supporting Information. Our best calculated dissociation energies can be reproduced through an n-body expansion, provided one pushes to the basis set and electron correlation limit for the two-body term; for the three-body term, post-MP2 contributions (particularly CCSD-MP2) are important for capturing three-body dispersion effects. Terms beyond four-body can be adequately captured at the MP2-F12 level.
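    For reference, the complete-basis-set extrapolation step mentioned above is commonly done with a two-point inverse-power formula such as E(n) = E_CBS + A/n^3 for correlation energies. The sketch below implements that generic scheme; the paper's exact extrapolation choices may differ, and the example energies are hypothetical.

```python
def cbs_two_point(e_small, e_large, n_small, n_large, power=3):
    """Two-point extrapolation assuming E(n) = E_CBS + A / n**power,
    where n is the cardinal number of the basis set (e.g. 3 for
    triple-zeta, 4 for quadruple-zeta)."""
    w_small, w_large = n_small ** -power, n_large ** -power
    return (e_large * w_small - e_small * w_large) / (w_small - w_large)

# Hypothetical correlation energies (hartree) from n = 3 and n = 4 basis sets:
print(cbs_two_point(-0.30212, -0.30950, 3, 4))
```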

  16. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
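    A minimal sketch consistent with the binomial analysis described: given hit/miss inspection data, a lower confidence bound on the probability of detection can be computed from the binomial distribution, here via the exact (Clopper-Pearson) bound. This is one standard construction; the report's exact procedure may differ.

```python
from scipy.stats import beta

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of flaw
    detection from binomial hit/miss data (Clopper-Pearson bound)."""
    if detections == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

# e.g. 28 detections in 29 inspections of same-size flaws:
print(pod_lower_bound(28, 29, 0.95))  # ~0.85 at 95% confidence
print(pod_lower_bound(28, 29, 0.50))  # higher bound at 50% confidence
```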

  17. The structure and energetics of Cr(CO)6 and Cr(CO)5

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.; Liu, Bowen; Lindh, Roland

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD, and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD, and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used. Calculations using larger basis sets reduce the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. In the largest basis set, the total CO binding energy of Cr(CO)6 is estimated to be 140 kcal/mol at the CCSD(T) level of theory, or about 86 percent of the experimental value. The remaining discrepancy between the experimental and theoretical value is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.
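    For reference, the counterpoise construction behind this kind of BSSE analysis is standard and can be written as follows (textbook form, not quoted from the paper); a superscript * marks a fragment computed in the full dimer basis, all at the complex geometry:

```latex
% Counterpoise-corrected interaction energy of a complex AB:
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB} - E_{A}^{*} - E_{B}^{*},
\qquad
\delta_{\mathrm{BSSE}} = \bigl(E_{A} - E_{A}^{*}\bigr) + \bigl(E_{B} - E_{B}^{*}\bigr) \ge 0,
% so the corrected interaction energy is the raw value plus the positive
% BSSE estimate, i.e. the computed binding is weakened:
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = \bigl(E_{AB} - E_{A} - E_{B}\bigr) + \delta_{\mathrm{BSSE}}.
```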

  18. Hybrid method for moving interface problems with application to the Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, T.Y.; Li, Zhilin; Osher, S.

    In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients and/or singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside some small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. This new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosities. 40 refs., 10 figs., 3 tabs.

  19. Requirement Development Process and Tools

    NASA Technical Reports Server (NTRS)

    Bayt, Robert

    2017-01-01

    Requirements capture the system-level capabilities in a set of complete, necessary, clear, attainable, traceable, and verifiable statements of need. Requirements should not be unduly restrictive, but should set limits that eliminate items outside the boundaries drawn, encourage competition (or alternatives), and capture the source of and rationale for each requirement. If it is not needed by the customer, it is not a requirement. Requirements establish the verification methods that will lead to product acceptance; these must be reproducible assessment methods.

  20. Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety

    PubMed Central

    Waterson, Patrick; Robertson, Michelle M.; Cooke, Nancy J.; Militello, Laura; Roth, Emilie; Stanton, Neville A.

    2015-01-01

    An important part of the application of sociotechnical systems theory (STS) is the development of methods, tools and techniques to assess human factors and ergonomics workplace requirements. We focus in this paper on describing and evaluating current STS methods for workplace safety, as well as outlining a set of six case studies covering the application of these methods to a range of safety contexts. We also describe an evaluation of the methods in terms of ratings of their ability to address a set of theoretical and practical questions (e.g. the degree to which methods capture static/dynamic aspects of tasks and interactions between system levels). The outcomes from the evaluation highlight a set of gaps relating to the coverage and applicability of current methods for STS and safety (e.g. coverage of external influences on system functioning; method usability). The final sections of the paper describe a set of future challenges, as well as some practical suggestions for tackling these. Practitioner Summary: We provide an up-to-date review of STS methods, a set of case studies illustrating their use and an evaluation of their strengths and weaknesses. The paper concludes with a ‘roadmap’ for future work. PMID:25832121

  1. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    EPA Science Inventory

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
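    A small simulation sketch of the idea under perfect ranking: a balanced ranked set sample of the same total size as a simple random sample typically gives a less variable estimate of the mean. The distribution, set size and cycle count below are hypothetical, and ranking is done on the measured value itself (i.e. ranking is error-free).

```python
import numpy as np

rng = np.random.default_rng(0)
draw = lambda n: rng.normal(10.0, 2.0, n)   # hypothetical field measurement

def rss_mean(set_size, cycles):
    """Mean of a balanced ranked set sample: in each cycle, draw
    `set_size` sets of `set_size` units, rank each set, and measure
    only the i-th order statistic of the i-th set."""
    vals = [np.sort(draw(set_size))[i]
            for _ in range(cycles) for i in range(set_size)]
    return np.mean(vals)

m, r, n_sim = 3, 4, 2000                    # sets of 3, 4 cycles -> n = 12 measured
srs = [draw(m * r).mean() for _ in range(n_sim)]
rss = [rss_mean(m, r) for _ in range(n_sim)]
print(np.var(srs), np.var(rss))             # RSS variance is typically smaller
```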

  2. Alzheimer's disease detection via automatic 3D caudate nucleus segmentation using coupled dictionary learning with level set formulation.

    PubMed

    Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo

    2016-12-01

    This paper presents a novel method for Alzheimer's disease classification via automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over state-of-the-art methods, achieving high segmentation (91.5%) and classification (92.5%) accuracy. We also find that studying caudate nucleus atrophy offers an advantage over studying whole-brain atrophy for detecting Alzheimer's disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Characteristics of ovulation method acceptors: a cross-cultural assessment.

    PubMed

    Klaus, H; Labbok, M; Barker, D

    1988-01-01

    Five programs of instruction in the ovulation method (OM) in diverse geographic and cultural settings are described, and characteristics of approximately 200 consecutive OM acceptors in each program are examined. Major findings include: the religious background and family size of acceptors are variable, as is the level of previous contraceptive use. Acceptors are drawn from a wide range of socioeconomic and religious backgrounds; however, family planning intention was similarly distributed in all five countries. In sum, the ovulation method is accepted by persons from a variety of backgrounds within and between cultural settings.

  4. Estimating Transmissivity from the Water Level Fluctuations of a Sinusoidally Forced Well

    USGS Publications Warehouse

    Mehnert, E.; Valocchi, A.J.; Heidari, M.; Kapoor, S.G.; Kumar, P.

    1999-01-01

    The water levels in wells are known to fluctuate in response to earth tides and changes in atmospheric pressure. These water level fluctuations can be analyzed to estimate transmissivity (T). A new method to estimate transmissivity, which assumes that the atmospheric pressure varies in a sinusoidal fashion, is presented. Data analysis for this simplified method involves using a set of type curves and estimating the ratio of the amplitudes of the well response over the atmospheric pressure. Type curves for this new method were generated based on a model for ground water flow between the well and aquifer developed by Cooper et al. (1965). Data analysis with this method confirmed these published results: (1) the amplitude ratio is a function of transmissivity, the well radius, and the frequency of the sinusoidal oscillation; and (2) the amplitude ratio is a weak function of storativity. Compared to other methods, the developed method involves simpler, more intuitive data analysis and allows shorter data sets to be analyzed. The effect of noise on estimating the amplitude ratio was evaluated and found to be more significant at lower T. For aquifers with low T, noise was shown to mask the water level fluctuations induced by atmospheric pressure changes. In addition, reducing the length of the data series did not affect the estimate of T, but the variance of the estimate was higher for the shorter series of noisy data.
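    A hedged sketch of the amplitude-ratio estimation step that this method relies on: the ratio of the spectral amplitudes of the well response and the atmospheric forcing at the known forcing frequency. The function name and the single-bin FFT read-out are illustrative assumptions; matching the resulting ratio against the Cooper et al. (1965) type curves to read off T is not reproduced here.

```python
import numpy as np

def amplitude_ratio(well_level, baro_pressure, dt, forcing_period):
    """Ratio of well water-level amplitude to atmospheric pressure
    amplitude at a known sinusoidal forcing period (same units), read
    from the FFT bin nearest the forcing frequency."""
    n = len(well_level)
    freqs = np.fft.rfftfreq(n, d=dt)
    k = np.argmin(np.abs(freqs - 1.0 / forcing_period))  # bin nearest the forcing
    W = np.fft.rfft(well_level - np.mean(well_level))
    B = np.fft.rfft(baro_pressure - np.mean(baro_pressure))
    return np.abs(W[k]) / np.abs(B[k])
```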

  5. A hybrid approach of gene sets and single genes for the prediction of survival risks with gene expression data.

    PubMed

    Seok, Junhee; Davis, Ronald W; Xiao, Wenzhong

    2015-01-01

    Accumulated biological knowledge is often encoded as gene sets, collections of genes associated with similar biological functions or pathways. The use of gene sets in the analyses of high-throughput gene expression data has been intensively studied and applied in clinical research. However, the main interest remains in finding modules of biological knowledge, or corresponding gene sets, significantly associated with disease conditions. Risk prediction from censored survival times using gene sets has not been well studied. In this work, we propose a hybrid method that uses both single-gene and gene-set information together to predict patient survival risks from gene expression profiles. In the proposed method, gene sets provide context-level information that is poorly reflected by single genes. Complementarily, single genes help to supplement incomplete information of gene sets due to our imperfect biomedical knowledge. In tests over multiple data sets of cancer and trauma injury, the proposed method showed robust and improved performance compared with conventional approaches that use single genes or gene sets alone. Additionally, we examined the prediction results for the trauma injury data and showed that the modules of biological knowledge used in the prediction by the proposed method were highly interpretable in biology. A wide range of survival prediction problems in clinical genomics is expected to benefit from the use of biological knowledge.

  6. A Hybrid Approach of Gene Sets and Single Genes for the Prediction of Survival Risks with Gene Expression Data

    PubMed Central

    Seok, Junhee; Davis, Ronald W.; Xiao, Wenzhong

    2015-01-01

    Accumulated biological knowledge is often encoded as gene sets, collections of genes associated with similar biological functions or pathways. The use of gene sets in the analyses of high-throughput gene expression data has been intensively studied and applied in clinical research. However, the main interest remains in finding modules of biological knowledge, or corresponding gene sets, significantly associated with disease conditions. Risk prediction from censored survival times using gene sets has not been well studied. In this work, we propose a hybrid method that uses both single-gene and gene-set information together to predict patient survival risks from gene expression profiles. In the proposed method, gene sets provide context-level information that is poorly reflected by single genes. Complementarily, single genes help to supplement incomplete information of gene sets due to our imperfect biomedical knowledge. In tests over multiple data sets of cancer and trauma injury, the proposed method showed robust and improved performance compared with conventional approaches that use single genes or gene sets alone. Additionally, we examined the prediction results for the trauma injury data and showed that the modules of biological knowledge used in the prediction by the proposed method were highly interpretable in biology. A wide range of survival prediction problems in clinical genomics is expected to benefit from the use of biological knowledge. PMID:25933378

  7. Level-Set Simulation of Viscous Free Surface Flow Around a Commercial Hull Form

    DTIC Science & Technology

    2005-04-15

    The viscous free surface flow around a 3600 TEU KRISO Container Ship is computed using the finite volume based multi-block RANS code WAVIS, developed at KRISO. The free surface is captured with the level-set method, and the realizable k-ε model is employed for turbulence closure. The computations are done for a 3600 TEU container ship of the Korea Research Institute of Ships & Ocean Engineering, KORDI (hereafter, KRISO), selected as …

  8. Feasibility of a shorter Goal Attainment Scaling method for a pediatric spasticity clinic - The 3-milestones GAS.

    PubMed

    Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M

    2017-07-01

    Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  9. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. To address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input, with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries, and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  10. [Establishment of policy indicators of adaptation to the impact of climate change on the transmission of schistosomiasis and malaria in China].

    PubMed

    Qian, Ying-Jun; Li, Shi-Zhu; Xu, Jun-Fang; Zhang, Li; Fu, Qing; Zhou, Xiao-Nong

    2013-12-01

    To set up a framework of indicators for schistosomiasis and malaria to guide the formulation and evaluation of vector-borne disease control policies focusing on adaptation to the negative impact of climate change. A two-level indicator framework was set up on the basis of a literature review, and the Delphi method was applied with a total of 22 and 19 experts working on schistosomiasis and malaria, respectively. The results were analyzed to calculate the weight of each indicator. A total of 41 questionnaires were delivered, and 38 valid responses were returned (92.7%). The system included 4 indicators at the first level, i.e. surveillance, scientific research, disease control and intervention, and adaptation capacity building, with 25 indicators for schistosomiasis and 21 for malaria at the second level. Among the first-level indicators, disease surveillance ranked first with a weight of 0.32. Among the second-level indicators, vector monitoring scored the highest for both schistosomiasis and malaria. The indicators established by the Delphi method are practical, universal and effective for field use, and provide technical support for establishing adaptation to climate change in the field of public health.

  11. Using level set based inversion of arrival times to recover shear wave speed in transient elastography and supersonic imaging

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce; Renzi, Daniel

    2006-04-01

    Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. Using this method, an 'ultrafast imaging' system (up to 10 000 frames s^-1) follows in real time the propagation of a low-frequency shear wave. The displacement of the propagating shear wave is measured as a function of time and space. Here we develop a fast level set based algorithm for finding the shear wave speed from the interior positions of the propagating front. We compare the performance of the level curve methods developed here with our previously developed distance methods (McLaughlin J and Renzi D 2006 Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts Inverse Problems 22 681-706). We give reconstruction examples from synthetic data and from data obtained from a phantom experiment performed by Mathias Fink's group (the Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
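    The relation underlying arrival-time inversion is the Eikonal equation |∇T| = 1/c, linking the arrival time T(x) of the front to the local shear wave speed c(x). The sketch below is the naive finite-difference version of that relation, not the authors' level set algorithm; uniform grid spacing is assumed.

```python
import numpy as np

def speed_from_arrival_times(T, dx, dy):
    """Local wave speed from an arrival-time field T(y, x) via the
    Eikonal relation |grad T| = 1 / c, using central differences."""
    Ty, Tx = np.gradient(T, dy, dx)           # derivatives along rows (y) and cols (x)
    grad_mag = np.sqrt(Tx**2 + Ty**2)
    return 1.0 / np.maximum(grad_mag, 1e-12)  # guard against flat (untimed) regions
```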

  12. GeneTopics - interpretation of gene sets via literature-driven topic models

    PubMed Central

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly tied to relevant articles in the literature. Extending a general topic model method, the approach introduced here establishes a workflow for the interpretation of gene sets generated from diverse experimental scenarios that can complement the classical approach of comparison to reference gene sets. PMID:24564875

  13. LEAP into the Pfizer Global Virtual Library (PGVL) space: creation of readily synthesizable design ideas automatically.

    PubMed

    Hu, Qiyue; Peng, Zhengwei; Kostrowicki, Jaroslav; Kuki, Atsuo

    2011-01-01

    The Pfizer Global Virtual Library (PGVL) of 10^13 readily synthesizable molecules offers a tremendous opportunity for lead optimization and scaffold hopping in drug discovery projects. However, mining a chemical space of this size presents a challenge for the concomitant design informatics, because standard molecular similarity searches against a collection of explicit molecules cannot be used: no chemical information system could create and manage more than 10^8 explicit molecules. Nevertheless, by accepting a tolerable level of false negatives in search results, we were able to bypass the need for full 10^13 enumeration and enable efficient similarity search and retrieval in this huge chemical space for practical use by medicinal chemists. In this report, two search methods (LEAP1 and LEAP2) are presented. The first method uses PGVL reaction knowledge to disassemble the incoming search query molecule into a set of reactants and then uses reactant-level similarities to actual available starting materials to focus on a much smaller sub-region of the full virtual library compound space. This sub-region is then explicitly enumerated and searched via a standard similarity method using the original query molecule. The second method uses a fuzzy mapping onto candidate reactions and does not require exact disassembly of the incoming query molecule. Instead, Basis Products (or capped reactants) are mapped onto the query molecule and the resultant asymmetric similarity scores are used to prioritize the corresponding reactions and reactant sets. All sets of Basis Products are inherently indexed to specific reactions and specific starting materials. This again allows focusing on a much smaller sub-region for explicit enumeration and subsequent standard product-level similarity search. A set of validation studies was conducted. The results show that the level of false negatives for the disassembly-based method is acceptable when the query molecule can be recognized for exact disassembly, and that the fuzzy reaction mapping method based on Basis Products performs even better, with a lower false-negative rate, because it is not limited by the requirement that the query molecule be recognized by a disassembly algorithm. Both search methods have been implemented and are accessed through a powerful desktop molecular design tool (see ref. (33) for details). The chapter ends with a comparison of published search methods against large virtual chemical space.

  14. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) can achieve higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably because it relies on gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, so the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required, and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.

  15. Privacy Protection by Matrix Transformation

    NASA Astrophysics Data System (ADS)

    Yang, Weijia

    Privacy preservation is indispensable in data mining. In this paper, we present a novel clustering method for distributed multi-party data sets using orthogonal transformation and data randomization techniques. Our method can not only protect privacy in the face of collusion, but also achieve a higher level of accuracy compared to existing methods.
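    A minimal sketch of the orthogonal-transformation half of such a scheme (the data randomization component is not shown): masking the records with a random orthogonal matrix preserves pairwise Euclidean distances exactly, so distance-based clustering results are unchanged while the original attribute values are hidden. All names below are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def orthogonal_mask(X):
    """Mask a data matrix X (rows = records) with a random orthogonal map."""
    d = X.shape[1]
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal matrix
    return X @ Q

X = rng.normal(size=(100, 5))
Xm = orthogonal_mask(X)
print(np.allclose(pdist(X), pdist(Xm)))           # True: distances preserved
```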

  16. A segmentation and classification scheme for single tooth in MicroCT images based on 3D level set and k-means++.

    PubMed

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2017-04-01

    Accurate classification of the different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which segments the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method, improved by fully utilizing 3 dimensional (3D) information, and classifies the tooth by employing unsupervised learning, i.e., the k-means++ method. To evaluate the proposed method, experiments were conducted on extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to the three other clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Crystal structure, spectroscopic studies and quantum mechanical calculations of 2-[((3-iodo-4-methyl)phenylimino)methyl]-5-nitrothiophene.

    PubMed

    Özdemir Tarı, Gonca; Gümüş, Sümeyye; Ağar, Erbil

    2015-04-15

    The title compound, 2-[((3-iodo-4-methyl)phenylimino)methyl]-5-nitrothiophene, C12H9IN2O2S, was synthesized and characterized by IR, UV-Vis and the single-crystal X-ray diffraction technique. The molecular structure was optimized at the B3LYP, B3PW91 and PBEPBE levels of density functional theory (DFT) with the 6-311G+(d,p) basis set. Using the TD-DFT method, the electronic absorption spectra of the title compound were computed in both the gas phase and ethanol solvent. The harmonic vibrational frequencies of the title compound were calculated using the same methods with the 6-311G+(d,p) basis set. The calculated results were compared with the experimental results for the compound. Energetic properties such as the total energy, atomic charges and dipole moment of the title compound in solvent media were examined using the B3LYP, B3PW91 and PBEPBE methods with the 6-311G+(d,p) basis set by applying the Onsager and the polarizable continuum model (PCM). The frontier molecular orbital (FMO) analysis, the molecular electrostatic potential map (MEP) and the nonlinear optical properties (NLO) of the title compound were obtained at the same levels of theory. Thermodynamic properties of the title compound were also obtained using the same methods with the 6-311G(d,p) basis set. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bdzil, John Bohdan

    The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported, DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local "customer," the only description of the explosives' boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our "fast-tube," narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-set function, phi, which did not rely on the gradient of the level-set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than 10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band, the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.

  19. Approaches, tools and methods used for setting priorities in health research in the 21st century

    PubMed Central

    Yoshida, Sachiyo

    2016-01-01

    Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001–2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure – such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix – it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods. PMID:26401271

  20. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
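    A minimal sketch of why circulant structure buys O(n log n) solves, which is the core of the kernel approximation described above: a circulant matrix is diagonalized by the discrete Fourier transform, so its eigenvalues are the FFT of its first column and a regularized solve becomes elementwise division in Fourier space. The paper's multi-level circulant approximation of a general kernel matrix is more elaborate than this one-level illustration.

```python
import numpy as np
from scipy.linalg import circulant

def circulant_solve(c, b, lam):
    """Solve (C + lam * I) x = b, where C is the circulant matrix with
    first column c, in O(n log n) via the FFT."""
    eig = np.fft.fft(c)                              # eigenvalues of C
    return np.real(np.fft.ifft(np.fft.fft(b) / (eig + lam)))

# Quick check against a dense solve (hypothetical small example):
c = np.array([4.0, 1.0, 0.5, 1.0]); b = np.arange(4.0); lam = 0.1
x_fft = circulant_solve(c, b, lam)
x_dense = np.linalg.solve(circulant(c) + lam * np.eye(4), b)
print(np.allclose(x_fft, x_dense))                   # True
```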

  1. Let them fall where they may: congruence analysis in massive phylogenetically messy data sets.

    PubMed

    Leigh, Jessica W; Schliep, Klaus; Lopez, Philippe; Bapteste, Eric

    2011-10-01

    Interest in congruence in phylogenetic data has largely focused on issues affecting multicellular organisms, and animals in particular, in which the level of incongruence is expected to be relatively low. In addition, assessment methods developed in the past have been designed for reasonably small numbers of loci and scale poorly for larger data sets. However, there are currently over a thousand complete genome sequences available and of interest to evolutionary biologists, and these sequences are predominantly from microbial organisms, whose molecular evolution is much less frequently tree-like than that of multicellular life forms. As such, the level of incongruence in these data is expected to be high. We present a congruence method that accommodates both very large numbers of genes and high degrees of incongruence. Our method uses clustering algorithms to identify subsets of genes based on similarity of phylogenetic signal. It involves only a single phylogenetic analysis per gene, and therefore, computation time scales nearly linearly with the number of genes in the data set. We show that our method performs very well with sets of sequence alignments simulated under a wide variety of conditions. In addition, we present an analysis of core genes of prokaryotes, often assumed to have been largely vertically inherited, in which we identify two highly incongruent classes of genes. This result is consistent with the complexity hypothesis.

  2. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
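    A minimal sketch of the level set idea the abstract describes: the surface is the zero contour of a field phi evolved by phi_t + F|∇phi| = 0, so corners and topology changes are handled by the field rather than by de-looping. The grid, the constant etch speed F, and the central-difference gradient are illustrative simplifications; production etch simulators use upwind (Godunov) schemes and flux-dependent front speeds.

```python
import numpy as np

def evolve_level_set(phi, F, dx, dt, steps):
    """Explicit first-order update of phi_t + F |grad phi| = 0,
    with F > 0 moving the zero contour in its outward normal direction."""
    for _ in range(steps):
        gy, gx = np.gradient(phi, dx, dx)
        phi = phi - dt * F * np.sqrt(gx**2 + gy**2)
    return phi

x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
phi0 = np.sqrt(X**2 + Y**2) - 0.2        # signed distance: interface is a circle, r = 0.2
phi1 = evolve_level_set(phi0, F=1.0, dx=x[1] - x[0], dt=2e-3, steps=100)
# The zero contour has advanced outward by ~ F * dt * steps = 0.2 (to r ~ 0.4)
```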

  3. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.

  4. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
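    The paper derives its own estimators of the effective number of tests; as a hedged illustration of the general idea, one widely used eigenvalue-based estimator (in the style of Li and Ji, 2005) computes M_eff from the correlation matrix of the gene-level statistics and then applies Bonferroni with M_eff in place of the raw count of gene sets.

```python
import numpy as np

def effective_number_of_tests(R):
    """Eigenvalue-based estimate of the effective number of independent
    tests from a (g x g) correlation matrix R of test statistics:
    M_eff = sum_i [ I(|lam_i| >= 1) + (|lam_i| - floor(|lam_i|)) ]."""
    lam = np.abs(np.linalg.eigvalsh(R))
    return float(np.sum((lam >= 1) + (lam - np.floor(lam))))

# Bonferroni-style family-wise threshold using M_eff instead of the raw count g:
# alpha_eff = 0.05 / effective_number_of_tests(R)
```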

  5. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  6. Max-margin multiattribute learning with low-rank constraint.

    PubMed

    Zhang, Qiang; Chen, Lin; Li, Baoxin

    2014-07-01

    Attribute learning has attracted a lot of interests in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max margin multiattribute learning with low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.

  7. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
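    For reference, two of the evaluation metrics reported above can be computed as follows. This sketch assumes binary masks as input for the Dice coefficient and (n x 2) boundary point arrays (e.g. in mm) for the Hausdorff distance, and uses scipy's directed_hausdorff.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```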

  8. Evaluation of an Active Humidification System for Inspired Gas

    PubMed Central

    Roux, Nicolás G.; Villalba, Darío S.; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L.; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto

    2015-01-01

    Objectives The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized, spontaneously breathing patients. Methods Measurements were quantified at three temperature (T°) levels of the AHS (level I, low; level II, middle; level III, high) and at different flow levels (20 to 60 L/minute). Statistical analysis of repeated measurements was performed using analysis of variance, and significance was set at P<0.05. Results While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/minute. Finally, at the highest temperature setting (level III), every flow reached the minimum recommended absolute humidity (AH) of 30 mg/L. Conclusion According to our results, to obtain appropriate relative humidity, AH and T° of the gas, one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/minute, or at a T° of 61℃ for any flow rate. PMID:25729499

  9. Approaches, tools and methods used for setting priorities in health research in the 21st century.

    PubMed

    Yoshida, Sachiyo

    2016-06-01

    Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods.

  10. Computing interface motion in compressible gas dynamics

    NASA Technical Reports Server (NTRS)

    Mulder, W.; Osher, S.; Sethian, James A.

    1992-01-01

    An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.

  11. Does the accuracy of blood pressure measurement correlate with hearing loss of the observer?

    PubMed

    Song, Soohwa; Lee, Jongshill; Chee, Youngjoon; Jang, Dong Pyo; Kim, In Young

    2014-02-01

    The auscultatory method is influenced by the hearing level of the observer. If the observer has hearing loss, blood pressure may be measured inaccurately through misreading of the Korotkoff sounds at systolic blood pressure (SBP) and diastolic blood pressure (DBP). Because of the potential clinical problems this discrepancy may cause, we used a hearing loss simulator to determine how hearing level affects the accuracy of blood pressure measurements. Two data sets (data set A, 32 Korotkoff sound video clips recorded by the British Hypertension Society; data set B, 28 Korotkoff sound recordings acquired from the Korotkoff sound recording system developed by Hanyang University) were used, and all the data were attenuated to simulate a hearing loss of 5, 10, 15, 20, and 25 dB using the hearing loss simulator. Five observers with normal hearing assessed the blood pressures from these data sets, and the differences between the values measured from the original recordings (no attenuation) and the attenuated versions were analyzed. Greater attenuation of the Korotkoff sounds, or greater hearing loss, resulted in larger blood pressure measurement differences when compared with the original data. When measuring blood pressure with hearing loss, the SBP tended to be underestimated and the DBP overestimated. The mean differences between the original data and the 25 dB hearing loss data for the two data sets combined were 1.55±2.71 and -4.32±4.21 mmHg for SBP and DBP, respectively. This experiment showed that the accuracy of blood pressure measurements using the auscultatory method is affected by the observer's hearing level. Therefore, to reduce possible error using the auscultatory method, observers' hearing should be tested.
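
    The simulated hearing loss amounts to attenuating the recordings by a fixed number of decibels. A minimal sketch of that operation, assuming a flat (frequency-independent) attenuation, which is a simplification of a real hearing loss simulator:

    ```python
    # Sketch: simulate observer hearing loss by attenuating Korotkoff sounds
    # by a fixed number of decibels, as in the study's 5-25 dB conditions.
    import numpy as np

    def attenuate(signal, loss_db):
        """Scale an audio signal down by `loss_db` decibels (amplitude)."""
        return signal * 10.0 ** (-loss_db / 20.0)

    korotkoff = np.random.randn(8000)          # placeholder waveform
    attenuated = attenuate(korotkoff, 25.0)    # 25 dB simulated hearing loss
    ```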

  12. SigniSite: Identification of residue-level genotype-phenotype correlations in protein multiple sequence alignments.

    PubMed

    Jessen, Leon Eyrich; Hoof, Ilka; Lund, Ole; Nielsen, Morten

    2013-07-01

    Identifying which mutation(s) within a given genotype is responsible for an observable phenotype is important in many aspects of molecular biology. Here, we present SigniSite, an online application for subgroup-free residue-level genotype-phenotype correlation. In contrast to similar methods, SigniSite does not require any pre-definition of subgroups or binary classification. Input is a set of protein sequences where each sequence has an associated real number, quantifying a given phenotype. SigniSite will then identify which amino acid residues are significantly associated with the data set phenotype. As output, SigniSite displays a sequence logo, depicting the strength of the phenotype association of each residue and a heat-map identifying 'hot' or 'cold' regions. SigniSite was benchmarked against SPEER, a state-of-the-art method for the prediction of specificity determining positions (SDP) using a set of human immunodeficiency virus protease-inhibitor genotype-phenotype data and corresponding resistance mutation scores from the Stanford University HIV Drug Resistance Database, and a data set of protein families with experimentally annotated SDPs. For both data sets, SigniSite was found to outperform SPEER. SigniSite is available at: http://www.cbs.dtu.dk/services/SigniSite/.
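
    A hedged sketch of a subgroup-free, rank-based association test in the spirit of the tool described above: for one alignment column, the phenotype rank sum of sequences carrying a given residue is compared with its expectation under no association. The normal approximation and function names are illustrative assumptions, not SigniSite's published implementation.

    ```python
    # Sketch: residue-level genotype-phenotype association without subgroups.
    # Under H0, a rank sum of k ranks drawn from 1..n has mean k(n+1)/2 and
    # variance k(n-k)(n+1)/12; the z-score measures departure from H0.
    import numpy as np
    from scipy import stats

    def residue_z(column, phenotype, residue):
        """column: amino acids at one alignment position; phenotype: real values."""
        r = stats.rankdata(phenotype)
        mask = np.asarray([aa == residue for aa in column])
        n, k = len(column), mask.sum()
        mu = k * (n + 1) / 2.0
        var = k * (n - k) * (n + 1) / 12.0
        return (r[mask].sum() - mu) / np.sqrt(var)
    ```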

  13. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    PubMed Central

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and allocating scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched the PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searches for relevant studies, based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from low- and middle-income countries (LMICs) and high-income countries (HICs). We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and it is likely that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved by incorporating and adapting systematic priority setting processes and frameworks to the context where they are used, and by considering both process and outcome measures during priority setting and resource allocation. PMID:29511741

  14. Effect of Liquid Penetrant Sensitivity on Probability of Detection

    NASA Technical Reports Server (NTRS)

    Parker, Bradford H.

    2011-01-01

    The objective of the task is to investigate the effect of liquid penetrant sensitivity level on the probability of detection (POD) of cracks in various metals. NASA-STD-5009 currently requires the use of only sensitivity level 4 liquid penetrants for NASA Standard Level inspections. This requirement is based on the fact that the data used to establish the reliably detectable flaw sizes for penetrant inspection came from studies performed in the 1970s using penetrants deemed equivalent only to modern-day sensitivity level 4 penetrants. However, many NDE contractors supporting NASA Centers routinely use sensitivity level 3 penetrants. Because of the new NASA-STD-5009 requirement, these contractors will have to either shift to sensitivity level 4 penetrants or perform formal POD demonstration tests to qualify their existing processes. We propose a study to compare the POD generated for two penetrant manufacturers, Sherwin and Magnaflux, and for the two most common penetrant inspection methods, water washable and post-emulsifiable hydrophilic. NDE vendors local to GSFC will be employed. A total of six inspectors will inspect a set of crack panels with a broad range of fatigue crack sizes. Each inspector will perform eight inspections of the panel set using the combinations of methods and sensitivity levels described above. At least one inspector will also perform multiple inspections using a fixed technique to investigate repeatability. The hit/miss data sets will be evaluated using both the NASA-generated DOEPOD software and the MIL-STD-1823 software.

  15. Expression of SET Protein in the Ovaries of Patients with Polycystic Ovary Syndrome

    PubMed Central

    Boqun, Xu; Xiaonan, Dai; YuGui, Cui; Lingling, Gao; Xue, Dai; Gao, Chao; Feiyang, Diao; Jiayin, Liu; Gao, Li; Li, Mei; Zhang, Yuan; Ma, Xiang

    2013-01-01

    Background. We previously found, using microarray, that expression of the SET gene was upregulated in polycystic ovaries. This suggested that SET may be an attractive candidate regulator involved in the pathophysiology of polycystic ovary syndrome (PCOS). In this study, the expression and cellular localization of SET protein were investigated in human polycystic and normal ovaries. Method. Ovarian tissues, six normal ovaries and six polycystic ovaries, were collected, with signed consent, during transsexual operations and surgical treatment. The cellular localization of SET protein was observed by immunohistochemistry. The expression levels of SET protein were analyzed by Western blot. Result. SET protein was expressed predominantly in the theca cells and oocytes of human ovarian follicles in both PCOS and normal ovarian tissues. The level of SET protein expression in polycystic ovaries was three times higher than that in normal ovaries (P < 0.05). Conclusion. SET was overexpressed in polycystic ovaries compared with normal ovaries. Combined with its localization in theca cells, SET may participate in regulating ovarian androgen biosynthesis and the pathophysiology of hyperandrogenism in PCOS. PMID:23861679

  16. Expression of SET Protein in the Ovaries of Patients with Polycystic Ovary Syndrome.

    PubMed

    Boqun, Xu; Xiaonan, Dai; Yugui, Cui; Lingling, Gao; Xue, Dai; Gao, Chao; Feiyang, Diao; Jiayin, Liu; Gao, Li; Li, Mei; Zhang, Yuan; Ma, Xiang

    2013-01-01

    Background. We previously found, using microarray, that expression of the SET gene was upregulated in polycystic ovaries. This suggested that SET may be an attractive candidate regulator involved in the pathophysiology of polycystic ovary syndrome (PCOS). In this study, the expression and cellular localization of SET protein were investigated in human polycystic and normal ovaries. Method. Ovarian tissues, six normal ovaries and six polycystic ovaries, were collected, with signed consent, during transsexual operations and surgical treatment. The cellular localization of SET protein was observed by immunohistochemistry. The expression levels of SET protein were analyzed by Western blot. Result. SET protein was expressed predominantly in the theca cells and oocytes of human ovarian follicles in both PCOS and normal ovarian tissues. The level of SET protein expression in polycystic ovaries was three times higher than that in normal ovaries (P < 0.05). Conclusion. SET was overexpressed in polycystic ovaries compared with normal ovaries. Combined with its localization in theca cells, SET may participate in regulating ovarian androgen biosynthesis and the pathophysiology of hyperandrogenism in PCOS.

  17. High Fidelity Simulation Experience in Emergency settings: doctors and nurses satisfaction levels.

    PubMed

    Calamassi, Diletta; Nannelli, Tiziana; Guazzini, Andrea; Rasero, Laura; Bambi, Stefano

    2016-11-22

    Many studies describe High Fidelity Simulation (HFS) as an experience well accepted by learners. This study explored doctors' and nurses' satisfaction levels during HFS sessions, examining associations with the setting of the simulation events (simulation center or in-the-field simulation). Moreover, we studied the correlation between HFS experience satisfaction levels and the socio-demographic features of the participants. Mixed-methods study, using the Satisfaction of High-Fidelity Simulation Experience (SESAF) questionnaire through an online survey. SESAF was administered to doctors and nurses who had previously taken part in HFS sessions in a simulation center or in the field. Quantitative data were analyzed through descriptive and inferential statistical methods; qualitative data were analyzed through the Giorgi method. 143 doctors and 94 nurses completed the questionnaire. The satisfaction level was high: on a 10-point scale, the mean score was 8.17 (SD±1.924). There was no significant difference between doctors' and nurses' satisfaction levels in almost all the SESAF factors. We found no correlation between gender and HFS experience satisfaction levels. Knowledge of the theoretical aspects of the simulated case before the HFS experience was related to higher general satisfaction (r=0.166, p=0.05), higher effectiveness of debriefing (r=0.143, p=0.05), and higher professional impact (r=0.143, p=0.05). The respondents who performed an HFS in the field were more satisfied than the others, and experienced a higher "professional impact", "clinical reasoning and self-efficacy", and "team dynamics" (p<0.01). Narrative data suggest that HFS facilitators should improve their behaviors during the debriefing. Healthcare managers should extend HFS to all kinds of healthcare workers in real clinical settings. There is a need to improve the communication competences of HFS facilitators.

  18. Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.

    2017-11-01

    Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second-order (MP2) perturbation theory, an accurate yet relatively inexpensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2-level geometry optimisation with the correlation-consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single-point energy evaluation at the MP2/aVTZ level for the estimation of MP2-level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters up to n ≤ 8 is confirmed by performing vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. Zero point energy (ZPE) corrections are applied to all the isomers lying within 1 kcal/mol of the lowest-energy one. The resulting frequencies in the N-H region (2900-3500 cm-1) and in the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied to calculate the BEs of these clusters at the coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus represents the art of the possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.
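
    The CBS estimate in such studies is typically obtained by extrapolating energies across basis-set cardinal numbers. A minimal sketch of the common two-point inverse-cubic (Helgaker-type) extrapolation; treating it as the scheme used in this paper is an assumption:

    ```python
    # Sketch: two-point inverse-cubic CBS extrapolation for correlation
    # energies obtained with aug-cc-pVXZ basis sets (X = cardinal number).
    def cbs_two_point(e_x, e_y, x=2, y=3):
        """Extrapolate energies at cardinal numbers x < y to the CBS limit."""
        return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

    e_cbs = cbs_two_point(-0.350, -0.365)   # illustrative aVDZ/aVTZ energies
    ```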

  19. Assessing the impact of different satellite retrieval methods on forecast available potential energy

    NASA Technical Reports Server (NTRS)

    Whittaker, Linda M.; Horn, Lyle H.

    1990-01-01

    The effects of the inclusion of satellite temperature retrieval data, and of different satellite retrieval methods, on forecasts made with the NASA Goddard Laboratory for Atmospheres (GLA) fourth-order model were investigated using, as the parameter, the available potential energy (APE) in its isentropic form. Calculations of the APE were used to study the differences in the forecast sets, both globally and in the Northern Hemisphere, during a 72-h forecast period. The analysis data sets used for the forecasts included one containing the NESDIS TIROS-N retrievals, one containing the GLA retrievals using the physical inversion method, and a third, which did not contain satellite data, used as a control; two data sets, with and without satellite data, were used for verification. For all three data sets, the Northern Hemisphere values of the total APE showed an increase throughout the forecast period, mostly due to an increase in the zonal component, in contrast to the verification sets, which showed a steady level of total APE.

  20. Muscular dystrophy summer camp: a case study of a non-traditional level I fieldwork using a collaborative supervision model.

    PubMed

    Provident, Ingrid M; Colmer, Maria A

    2013-01-01

    A shortage of traditional medical fieldwork placements has been reported in the United States. Alternative settings are being sought to meet the Accreditation Standards for Level I fieldwork. This study was designed to examine and report the outcomes of an alternative pediatric camp setting, using a group model of supervision, to fulfill the requirements for Level I fieldwork. Participants were thirty-seven students from two Pennsylvania OT schools. Two cohorts of students were studied over a two-year period using multiple methods of retrospective review and data collection. Students supervised under a group model experienced positive outcomes, including opportunities to deliver client-centered care and to understand the role of caregiving for children with disabilities. The use of a collaborative model of fieldwork education in a camp setting has proved a viable approach for the successful attainment of Level I fieldwork objectives for multiple students under a single supervisor.

  1. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conductive path and a clear image cannot be produced using standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods of modelling a region of high conductivity in ECT are presented. For the inverse problem, three different algorithms for reconstructing high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
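
    A minimal sketch of the first inverse-problem approach named above, the linear single-step Tikhonov method: given a sensitivity (Jacobian) matrix J and a time-difference capacitance vector, solve the regularized normal equations. J, the data vector, and the regularization weight are assumptions for illustration:

    ```python
    # Sketch of a linear single-step Tikhonov reconstruction:
    # solve (J^T J + alpha*I) x = J^T db for the permittivity update x.
    import numpy as np

    def tikhonov_step(J, db, alpha=1e-2):
        """J: sensitivity matrix; db: time-difference capacitance vector."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ db)
    ```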

  2. 76 FR 43231 - Receipt of Several Pesticide Petitions Filed for Residues of Pesticide Chemicals in or on Various...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ... of food with residues at or above the levels set in these tolerances. The Analytical Chemistry... the limit of detection of the designated method. In plants, the method is aqueous organic solvent...

  3. 40 CFR 265.1034 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane. (5) The background level shall be determined as set forth in Reference Method 21. (6) The... or exiting control device, as determined by Method 2, dscm/h; n = Number of organic compounds in the...

  4. 40 CFR 265.1034 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane. (5) The background level shall be determined as set forth in Reference Method 21. (6) The... or exiting control device, as determined by Method 2, dscm/h; n = Number of organic compounds in the...

  5. 40 CFR 265.1034 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane. (5) The background level shall be determined as set forth in Reference Method 21. (6) The... or exiting control device, as determined by Method 2, dscm/h; n = Number of organic compounds in the...

  6. 40 CFR 265.1034 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane. (5) The background level shall be determined as set forth in Reference Method 21. (6) The... or exiting control device, as determined by Method 2, dscm/h; n = Number of organic compounds in the...

  7. 40 CFR 265.1034 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or n-hexane and air at a concentration of approximately, but less than, 10,000 ppm methane or n-hexane. (5) The background level shall be determined as set forth in Reference Method 21. (6) The... or exiting control device, as determined by Method 2, dscm/h; n = Number of organic compounds in the...

  8. Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting

    NASA Astrophysics Data System (ADS)

    Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang

    2016-12-01

    Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method. Furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Second, the stick tensor voting algorithm is employed to estimate the correct direction along each vessel. This estimated direction is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion at points located on the vessel wall consistent with the vessel directions and enhances the tubular features of vessels. Finally, vessels are extracted from the enhanced image by applying the level-set segmentation method. Experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.
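
    The first stage, Frangi vesselness filtering, is available off the shelf. A brief sketch using scikit-image, with illustrative scales rather than the paper's parameters:

    ```python
    # Sketch: Frangi vesselness filtering to suppress noise and highlight
    # tubular structures (scales below are illustrative).
    import numpy as np
    from skimage.filters import frangi

    def enhance_vessels(ct_slice):
        """ct_slice: 2-D float array; bright vessels on a dark background."""
        return frangi(ct_slice, sigmas=np.arange(1, 6), black_ridges=False)
    ```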

  9. The Effects of Massage Therapy on Pain Management in the Acute Care Setting

    PubMed Central

    Adams, Rose; White, Barb; Beckett, Cynthia

    2010-01-01

    Background Pain management remains a critical issue for hospitals and is receiving the attention of hospital accreditation organizations. The acute care setting of the hospital provides an excellent opportunity for the integration of massage therapy for pain management into the team-centered approach of patient care. Purpose and Setting This preliminary study evaluated the effect of massage therapy on inpatient pain levels in the acute care setting. The study was conducted at Flagstaff Medical Center in Flagstaff, Arizona—a nonprofit community hospital serving a large rural area of northern Arizona. Method A convenience sample was used to identify research participants. Pain levels before and after massage therapy were recorded using a 0–10 visual analog scale. Quantitative and qualitative methods were used for analysis of this descriptive study. Participants Hospital inpatients (n = 53) from medical, surgical, and obstetrics units participated in the current research, each receiving one or more massage therapy sessions averaging 30 minutes each. The number of sessions received depended on the length of the hospital stay. Result Before massage, the mean pain level recorded by the patients was 5.18 [standard deviation (SD): 2.01]. After massage, the mean pain level was 2.33 (SD: 2.10). The observed reduction in pain was statistically significant: paired-samples t(52) = 12.43, r = .67, d = 1.38, p < .001. Qualitative data illustrated improvement in all areas, with the most significant areas of impact reported being overall pain level, emotional well-being, relaxation, and ability to sleep. Conclusions This study shows that integration of massage therapy into the acute care setting creates overall positive results in the patients' ability to deal with the challenging physical and psychological aspects of their health condition. The study demonstrated not only a significant reduction in pain levels, but also the interrelatedness of pain, relaxation, sleep, emotions, recovery, and, finally, the healing process. PMID:21589696
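
    The reported statistic is a paired-samples t-test on before/after scores. A minimal sketch with hypothetical visual-analog-scale data:

    ```python
    # Sketch: paired-samples t-test and Cohen's d on before/after pain scores.
    import numpy as np
    from scipy import stats

    before = np.array([6, 5, 7, 4, 8, 5], dtype=float)   # illustrative only
    after = np.array([3, 2, 4, 1, 5, 3], dtype=float)
    t, p = stats.ttest_rel(before, after)
    d = (before - after).mean() / (before - after).std(ddof=1)  # Cohen's d
    ```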

  10. Precise determination of time to reach viral load set point after acute HIV-1 infection.

    PubMed

    Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao

    2012-12-01

    The HIV viral load set point has long been used as a prognostic marker of disease progression and, more recently, as an end-point parameter in HIV vaccine clinical trials. The definition of the set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients in a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we found that the time to reach the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), as determined by our model and the empirical method, respectively, suggesting excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventive vaccine studies.
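
    One simple way to formalize "time to reach set point" is to fit a decay-to-plateau curve to serial log viral loads and read off the plateau and the first time the fit comes within a tolerance of it. The model, data, and 0.1-log tolerance below are illustrative assumptions, not the authors' equations:

    ```python
    # Sketch: fit log10 VL(t) = s + (v0 - s)*exp(-k*t), report the plateau s
    # and the first time the fitted curve is within 0.1 log of s.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, v0, s, k):
        return s + (v0 - s) * np.exp(-k * t)

    t = np.array([10, 20, 35, 50, 80, 120, 160], float)   # days post infection
    vl = np.array([6.8, 5.9, 5.1, 4.7, 4.4, 4.3, 4.3])    # log10 copies/mL
    (v0, s, k), _ = curve_fit(model, t, vl, p0=(7.0, 4.0, 0.05))
    t_set = np.log((v0 - s) / 0.1) / k    # time when fit is 0.1 log above s
    ```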

  11. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation-invariant tensor-valued Minkowski Functionals, which are defined on two-dimensional flat space, to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. We then obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with an affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 Planck data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
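
    The scalar counterparts of these functionals (area, boundary length, Euler characteristic of an excursion set) are straightforward to compute on a pixelized 2-D field, as in this hedged sketch (a flat-space, finite-resolution approximation, not the tensorial spherical computation of the paper):

    ```python
    # Sketch: scalar Minkowski functionals of the excursion set {field >= nu}.
    import numpy as np
    from skimage import measure

    def minkowski_2d(field, nu):
        exc = field >= nu                               # excursion (level) set
        area = exc.mean()                               # V0: area fraction
        contours = measure.find_contours(field, nu)     # iso-level contours
        length = sum(np.hypot(*np.diff(c, axis=0).T).sum() for c in contours)
        euler = measure.euler_number(exc)               # V2 up to normalization
        return area, length, euler
    ```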

  12. A level-set method for pathology segmentation in fluorescein angiograms and en face retinal images of patients with age-related macular degeneration

    NASA Astrophysics Data System (ADS)

    Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz

    2013-03-01

    The visibility and continuity of the inner segment/outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using classical level-set algorithms for segmentation is the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to define the contour automatically, using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement, for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied it to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome compared with the classical Chan-Vese method with a randomly placed initial contour, indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
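
    A hedged sketch of the overall idea: scikit-image's Chan-Vese with an initial level set placed from projection profiles. The thresholding rule used to turn the profiles into a box is an assumption, not the authors' rule:

    ```python
    # Sketch: Chan-Vese segmentation with a projection-profile-driven
    # initial contour instead of a random or checkerboard initialization.
    import numpy as np
    from skimage.segmentation import chan_vese

    def segment_pathology(img):
        """img: 2-D float retinal image with a bright pathology region."""
        rows, cols = img.mean(axis=1), img.mean(axis=0)   # projection profiles
        r = np.flatnonzero(rows > rows.mean() + rows.std())
        c = np.flatnonzero(cols > cols.mean() + cols.std())
        init = -np.ones(img.shape)
        if r.size and c.size:
            init[r[0]:r[-1] + 1, c[0]:c[-1] + 1] = 1.0    # contour over bright region
        return chan_vese(img, init_level_set=init)
    ```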

  13. Compliance with the guide for commissioning oral surgery: an audit and discussion.

    PubMed

    Modgill, O; Shah, A

    2017-10-13

    Introduction The Guide for commissioning oral surgery and oral medicine published by NHS England (2015) prescribes the level of complexity of oral surgery and oral medicine investigations and procedures to be carried out within NHS services. These are categorised as Level 1, Level 2, Level 3A and Level 3B. An audit was designed to ascertain the level of oral surgery procedures performed by clinicians of varying experience and qualification working in a large oral surgery department within a major teaching hospital. Materials and methods Two audit cycles were conducted on retrospective case notes and radiographic review of 100 records of patients undergoing dental extractions within the Department of Oral Surgery at King's College Dental Hospital. The set gold standard was: '100% of Level 1 procedures should be performed by dental undergraduates or discharged back to the referring general dental practitioner'. Data were collected and analysed in a Microsoft Excel spreadsheet. The results of the first audit cycle were presented to all clinicians within the department in a formal meeting, recommendations were made, and an action plan was implemented prior to undertaking a second cycle. Results The first cycle revealed that 25% of Level 1 procedures met the set gold standard, with Level 2 practitioners performing the majority of Level 1 and Level 2 procedures. The second cycle showed a marked improvement, with 66% of Level 1 procedures meeting the set gold standard. Conclusion Our audit demonstrates that, whilst we achieved an improvement against the set gold standard, several barriers remain to ensuring that patients are treated by the appropriate level of clinician in a secondary care setting. We have used this audit as a foundation upon which to discuss the challenges faced in implementing the commissioning framework within both primary and secondary dental care, and strategies to overcome these challenges, which are likely to be encountered in any NHS care setting in which oral surgery procedures are performed.

  14. Effective Multi-Query Expansions: Collaborative Deep Networks for Robust Landmark Retrieval.

    PubMed

    Wang, Yang; Lin, Xuemin; Wu, Lin; Zhang, Wenjie

    2017-03-01

    Given a query photo issued by a user (q-user), landmark retrieval returns a set of photos whose landmarks are similar to those of the query; existing studies on landmark retrieval focus on exploiting the geometries of landmarks for similarity matches between candidate photos and the query photo. We observe that the same landmark provided by different users over a social media community may convey different geometry information depending on the viewpoints and/or angles, and may subsequently yield very different retrieval results. In fact, dealing with landmarks with low-quality shapes caused by the photography of q-users is often nontrivial and has seldom been studied. In this paper, we propose a novel framework, namely multi-query expansions, to retrieve semantically robust landmarks in two steps. First, we identify the top-k photos regarding the latent topics of a query landmark to construct a multi-query set, so as to remedy its possibly low-quality shape. For this purpose, we significantly extend the techniques of Latent Dirichlet Allocation. Then, motivated by typical collaborative filtering methods, we propose to learn collaborative deep-network-based semantic, nonlinear, high-level features over the latent factors of the landmark photos, with the training set formed by matrix factorization over the collaborative user-photo matrix for the multi-query set. The learned deep network is further applied to generate features for all the other photos, meanwhile yielding a compact multi-query set within that feature space. The final ranking scores are then calculated in the high-level feature space between the multi-query set and all other photos, which are ranked to serve as the final landmark retrieval list. Extensive experiments are conducted on real-world social media data containing both landmark photos and user information, showing superior performance over existing methods, especially our recently proposed multi-query-based mid-level pattern representation method [1].

  15. Test set readings predict clinical performance to a limited extent: preliminary findings

    NASA Astrophysics Data System (ADS)

    Soh, BaoLin P.; Lee, Warwick M.; Kench, Peter L.; Reed, Warren M.; McEntee, Mark F.; Brennan, Patrick C.

    2013-03-01

    Aim: To investigate the level of agreement between test sets and actual clinical reading. Background: The performance of screen readers in detecting breast cancer is being assessed in some countries by using mammographic test sets. However, previous studies have provided little evidence that performance assessed by test sets correlates strongly with performance in clinical reading. Methods: Five clinicians from BreastScreen New South Wales participated in this study. Each clinician was asked to read 200 de-identified mammographic examinations gathered from their own case history within the BreastScreen NSW Digital Imaging Library. All test sets were designed with specific proportions of true positive, true negative, false positive and false negative examinations from the previous actual clinical reads of each reader. A prior mammogram examination for comparison (when available) was also provided for each case. Results: Preliminary analyses showed a moderate level of agreement (kappa 0.42-0.56, p < 0.001) between laboratory test sets and actual clinical reading. In addition, a mean increase of 38% in sensitivity in the laboratory test sets, as compared with the actual clinical readings, was demonstrated. Specificity was similar between the laboratory test sets and actual clinical reading. Conclusion: This study demonstrated a moderate level of agreement between actual clinical reading and test set reading, which suggests that test sets have a role in reflecting clinical performance.
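
    The agreement statistic quoted above is Cohen's kappa between test-set and clinical decisions. A minimal sketch on hypothetical recall/no-recall reads:

    ```python
    # Sketch: Cohen's kappa for one reader's test-set vs clinical decisions.
    from sklearn.metrics import cohen_kappa_score

    clinical = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # reads in the clinic
    test_set = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # same cases in the laboratory
    print(cohen_kappa_score(clinical, test_set))
    ```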

  16. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    PubMed Central

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes: (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adopt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. PMID:26673332

  17. Identifying arbitrary parameter zonation using multiple level set functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case of an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  18. Identifying arbitrary parameter zonation using multiple level set functions

    DOE PAGES

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    2018-03-14

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case of an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  19. Identifying arbitrary parameter zonation using multiple level set functions

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan

    2018-07-01

    In this paper, we extended the analytical level set method [1,2] for identifying a piecewise heterogeneous (zonation) binary system to the case of an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.
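
    The core numerical step such methods iterate is propagation of the level set function under a normal speed derived from the data misfit. A generic first-order (Godunov upwind) update of phi_t + F |grad phi| = 0 is sketched below; the grid, speed field, and time step are assumptions, and the paper's misfit-derived speed is not reproduced here:

    ```python
    # Sketch: one Godunov-upwind step of phi_t + F |grad phi| = 0 on a 2-D grid
    # (periodic boundaries via np.roll, for brevity).
    import numpy as np

    def advance(phi, F, dt=0.1, h=1.0):
        dxm = (phi - np.roll(phi, 1, 0)) / h   # backward differences
        dxp = (np.roll(phi, -1, 0) - phi) / h  # forward differences
        dym = (phi - np.roll(phi, 1, 1)) / h
        dyp = (np.roll(phi, -1, 1) - phi) / h
        gp = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                     np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        gm = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                     np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
        return phi - dt * (np.maximum(F, 0) * gp + np.minimum(F, 0) * gm)
    ```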

  20. High-level intuitive features (HLIFs) for intuitive skin lesion description.

    PubMed

    Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A

    2015-03-01

    A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in such systems. Existing feature sets for analyzing standard camera images consist of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists, such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs separated the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.

  1. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
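
    A Monte Carlo analogue of this linear propagation is easy to set up and useful as a cross-check: perturb the rate coefficients within their quoted uncertainties and observe the spread of the steady-state level populations. The 3-level rate matrix and 10% uncertainties below are toy values, not real atomic data:

    ```python
    # Sketch: Monte Carlo propagation of rate-coefficient uncertainties into
    # steady-state level populations (dn/dt = A n = 0 with sum(n) = 1).
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[-1.0, 0.2, 0.1],
                  [ 0.8, -0.5, 0.2],
                  [ 0.2, 0.3, -0.3]])     # toy rate matrix, columns sum to 0
    sigma = 0.1 * np.abs(A)               # 10% relative uncertainties

    pops = []
    for _ in range(2000):
        Ap = A + sigma * rng.standard_normal(A.shape)
        M = np.vstack([Ap[:-1], np.ones(3)])   # replace one row with sum(n) = 1
        n = np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))
        pops.append(n)
    print(np.std(pops, axis=0))           # propagated population uncertainties
    ```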

  2. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    NASA Astrophysics Data System (ADS)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the application of a newly developed free surface flow model to the practical yet challenging problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters, such as pressure head distributions, velocity distributions, and water surface profiles, are compared against experimental data as well as numerical results available in the literature. Very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  3. A method for monitoring intensity during aquatic resistance exercises.

    PubMed

    Colado, Juan C; Tella, Victor; Triplett, N Travis

    2008-11-01

    The aims of this study were (i) to check whether monitoring both the rhythm of execution and the perceived effort is a valid tool for reproducing the same intensity of effort in different sets of the same aquatic resistance exercise (ARE) and (ii) to assess whether this method allows the ARE to be set at the same intensity level as its equivalent carried out on dry land. Four healthy trained young men performed horizontal shoulder abduction and adduction (HSAb/Ad) movements in water and on dry land. Muscle activation was recorded using surface electromyography of one stabilizer and several agonist muscles. Before the final tests, the ARE movement cadence was established individually following a rhythmic digitalized sequence of beats to define the alternating HSAb/Ad movements. This cadence allowed the subject to perform 15 repetitions at a perceived exertion of 9-10 using Hydro-Tone Bells. After that, each subject performed 2 nonconsecutive ARE sets. The dry land exercises (1 set of HSAb and 1 set of HSAd) were performed using a dual adjustable pulley cable motion machine, with a prior selection of weights that allowed the same movement cadence to be maintained and the same number of repetitions to be completed in each set as with the ARE. The average normalized data were compared across the exercises in order to determine possible differences in muscle activity. The results show the validity of this method for reproducing the intensity of effort in different sets of the same ARE, but not for matching the same intensity level as kinematically similar land-based exercises.

  4. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
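
    For context, the baseline such a scheme is compared against is a simple fixed-point iteration for the stationary distribution. A minimal sketch on an illustrative 3-state chain (not the paper's multi-level algorithm):

    ```python
    # Sketch: fixed-point (power) iteration for the stationary distribution
    # of a row-stochastic transition matrix P.
    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])       # illustrative 3-state chain

    pi = np.full(3, 1 / 3)
    for _ in range(500):
        pi = pi @ P
        pi /= pi.sum()
    print(pi)                              # steady-state probabilities
    ```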

  5. Comparison of the performance of different DFT methods in the calculations of the molecular structure and vibration spectra of serotonin (5-hydroxytryptamine, 5-HT)

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Gao, Hongwei

    2012-04-01

    Serotonin (5-hydroxytryptamine, 5-HT) is a monoamine neurotransmitter which plays an important role in treating acute or clinical stress. The comparative performance of different density functional theory (DFT) methods at various basis sets in predicting the molecular structure and vibration spectra of serotonin was reported. The calculation results of different methods including mPW1PW91, HCTH, SVWN, PBEPBE, B3PW91 and B3LYP with various basis sets including LANL2DZ, SDD, LANL2MB, 6-31G, 6-311++G and 6-311+G* were compared with the experimental data. It is remarkable that the SVWN/6-311++G and SVWN/6-311+G* levels afford the best quality to predict the structure of serotonin. The results also indicate that PBEPBE/LANL2DZ level show better performance in the vibration spectra prediction of serotonin than other DFT methods.

  6. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here, an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy obtains local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
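
    A hedged sketch of the Gaussian-filtering regularization idea: after each gradient step on a fitting energy, the level set function is smoothed instead of being re-initialized. The force term below is a simplified global two-phase fitting force, not the LIF energy itself, and it assumes both phases are non-empty:

    ```python
    # Sketch: one level set update with Gaussian-filter regularization in
    # place of re-initialization (global two-phase fitting force).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fitting_step(phi, img, dt=0.1, sigma=1.0):
        """phi: level set function; img: 2-D float image; both phases non-empty."""
        inside, outside = phi > 0, phi <= 0
        c1, c2 = img[inside].mean(), img[outside].mean()   # region means
        force = (img - 0.5 * (c1 + c2)) * (c1 - c2)        # two-phase fitting force
        phi = phi + dt * force
        return gaussian_filter(phi, sigma)                 # keeps phi regular
    ```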

  7. Determination of Problematic ICD-9-CM Subcategories for Further Study of Coding Performance: Delphi Method

    PubMed Central

    Zeng, Xiaoming; Bell, Paul D

    2011-01-01

    In this study, we report on a qualitative method known as the Delphi method, used in the first part of a research study for improving the accuracy and reliability of ICD-9-CM coding. A panel of independent coding experts interacted methodically to determine that the three criteria to identify a problematic ICD-9-CM subcategory for further study were cost, volume, and level of coding confusion caused. The Medicare Provider Analysis and Review (MEDPAR) 2007 fiscal year data set as well as suggestions from the experts were used to identify coding subcategories based on cost and volume data. Next, the panelists performed two rounds of independent ranking before identifying Excisional Debridement as the subcategory that causes the most confusion among coders. As a result, they recommended it for further study aimed at improving coding accuracy and variation. This framework can be adopted at different levels for similar studies in need of a schema for determining problematic subcategories of code sets. PMID:21796264

  8. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected-component-based methods ([1], [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  9. New modified multi-level residue harmonic balance method for solving nonlinearly vibrating double-beam problem

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Saifur; Lee, Yiu-Yin

    2017-10-01

    In this study, a new modified multi-level residue harmonic balance method is presented and adopted to investigate the forced nonlinear vibrations of axially loaded double beams. Although numerous nonlinear beam or linear double-beam problems have been tackled and solved, there have been few studies of this nonlinear double-beam problem. The geometric nonlinear formulations for a double-beam model are developed. The main advantage of the proposed method is that a set of decoupled nonlinear algebraic equations is generated at each solution level. This greatly reduces the computational effort compared with solving the coupled nonlinear algebraic equations generated in the classical harmonic balance method. The proposed method can generate the higher-level nonlinear solutions that are neglected by the previous modified harmonic balance method. The results from the proposed method agree reasonably well with those from the classical harmonic balance method. The effects of damping, axial force, and excitation magnitude on the nonlinear vibrational behaviour are examined.

  10. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures, and recording the wear history over a wheel set's full service life is highly valuable. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates to space coordinates. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
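
    The abstract does not spell out the 'mapping function method'; one common realization of this kind of pixel-to-space calibration is a low-order polynomial fitted to targets of known position, sketched below (the function names and the quadratic form are our assumptions, not taken from the paper):

      import numpy as np

      def fit_mapping(pix, world):
          # Least-squares fit of a quadratic mapping from pixel (u, v)
          # to a world coordinate, using calibration points of known position.
          u, v = pix[:, 0], pix[:, 1]
          A = np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])
          coeffs, *_ = np.linalg.lstsq(A, world, rcond=None)
          return coeffs

      def apply_mapping(coeffs, u, v):
          # Evaluate the fitted mapping at a new pixel location.
          return coeffs @ np.array([1.0, u, v, u * v, u**2, v**2])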

  11. Optimization of pressure settings during adaptive servo-ventilation support using real-time heart rate variability assessment: initial case report.

    PubMed

    Imamura, Teruhiko; Nitta, Daisuke; Kinugawa, Koichiro

    2017-01-05

    Adaptive servo-ventilation (ASV) therapy is a recent non-invasive positive-pressure ventilation therapy developed for patients with heart failure (HF) refractory to optimal medical therapy. However, ASV therapy at relatively high pressure settings may worsen the prognosis of some patients compared with optimal medical therapy alone, so identification of optimal pressure settings for ASV therapy is warranted. We present the case of a 42-year-old male with HF caused by dilated cardiomyopathy who was admitted to our institution for evaluation of his eligibility for heart transplantation. To identify the optimal pressure setting, we performed an ASV support test [a positive end-expiratory pressure (PEEP) ramp test], during which the PEEP was set at levels ranging from 4 to 8 mmHg, together with a heart rate variability (HRV) analysis using the MemCalc power spectral density method. Clinical parameters varied dramatically during the PEEP ramp test. Over incremental PEEP levels, pulmonary capillary wedge pressure, cardiac index, and the high-frequency level (reflecting parasympathetic activity) decreased, whereas the low-frequency level increased along with the plasma noradrenaline concentration. An inappropriately high PEEP setting may stimulate sympathetic nerve activity accompanied by decreased cardiac output. This is the first report of a PEEP ramp test during ASV therapy. Further research is warranted to determine whether optimal pressure settings chosen using HRV analyses improve the long-term prognosis of such patients.
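
    For illustration, low-frequency (LF) and high-frequency (HF) power can be estimated from an RR-interval series as sketched below; Welch's periodogram stands in for the proprietary MemCalc maximum-entropy estimator used in the study, and the band edges and resampling rate are conventional choices, not taken from the paper:

      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def lf_hf(rr_ms, fs=4.0):
          # Resample the irregular RR series onto a uniform time grid,
          # then integrate the PSD over the LF and HF bands.
          t = np.cumsum(rr_ms) / 1000.0                 # beat times (s)
          grid = np.arange(t[0], t[-1], 1.0 / fs)
          rr_even = interp1d(t, rr_ms)(grid)
          f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                         nperseg=min(256, grid.size))
          lf_band = (f >= 0.04) & (f < 0.15)
          hf_band = (f >= 0.15) & (f < 0.40)
          lf = np.trapz(pxx[lf_band], f[lf_band])
          hf = np.trapz(pxx[hf_band], f[hf_band])
          return lf, hf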

  12. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images, in combination with the trained SVM classifier, achieves higher differentiation accuracy than the original deterministic level set method.
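
    A minimal sketch of the classification stage, assuming hypothetical boundary-derived features (mean curvature, curvature spread, boundary length); the feature values and labels below are fabricated for illustration only:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Each row holds boundary descriptors from the segmentation step;
      # labels mark apoptotic (1) versus normal (0) cells.
      X = np.array([[0.12, 0.03, 210.0],
                    [0.45, 0.11, 150.0],
                    [0.10, 0.02, 225.0],
                    [0.51, 0.14, 140.0]])
      y = np.array([0, 1, 0, 1])

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X, y)
      print(clf.predict([[0.40, 0.10, 155.0]]))  # -> [1] (apoptotic)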

  13. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    PubMed

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods of accounting for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this, while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to overly small molecular test cases.

  14. Method for stabilizing low-level mixed wastes at room temperature

    DOEpatents

    Wagh, A.S.; Singh, D.

    1997-07-08

    A method to stabilize solid and liquid waste at room temperature is provided comprising combining solid waste with a starter oxide to obtain a powder, contacting the powder with an acid solution to create a slurry, said acid solution containing the liquid waste, shaping the now-mixed slurry into a predetermined form, and allowing the now-formed slurry to set. The invention also provides for a method to encapsulate and stabilize waste containing cesium comprising combining the waste with Zr(OH)4 to create a solid-phase mixture, mixing phosphoric acid with the solid-phase mixture to create a slurry, subjecting the slurry to pressure; and allowing the now pressurized slurry to set. Lastly, the invention provides for a method to stabilize liquid waste, comprising supplying a powder containing magnesium, sodium and phosphate in predetermined proportions, mixing said powder with the liquid waste, such as tritium, and allowing the resulting slurry to set. 4 figs.

  15. Method for stabilizing low-level mixed wastes at room temperature

    DOEpatents

    Wagh, Arun S.; Singh, Dileep

    1997-01-01

    A method to stabilize solid and liquid waste at room temperature is provided comprising combining solid waste with a starter oxide to obtain a powder, contacting the powder with an acid solution to create a slurry, said acid solution containing the liquid waste, shaping the now-mixed slurry into a predetermined form, and allowing the now-formed slurry to set. The invention also provides for a method to encapsulate and stabilize waste containing cesium comprising combining the waste with Zr(OH)4 to create a solid-phase mixture, mixing phosphoric acid with the solid-phase mixture to create a slurry, subjecting the slurry to pressure; and allowing the now pressurized slurry to set. Lastly, the invention provides for a method to stabilize liquid waste, comprising supplying a powder containing magnesium, sodium and phosphate in predetermined proportions, mixing said powder with the liquid waste, such as tritium, and allowing the resulting slurry to set.

  16. Validation of Single and Pooled Manure Drag Swabs for the Detection of Salmonella Serovar Enteritidis in Commercial Poultry Houses.

    PubMed

    Kinde, Hailu; Goodluck, Helen A; Pitesky, Maurice; Friend, Tom D; Campbell, James A; Hill, Ashley E

    2015-12-01

    Single swabs (cultured individually) are currently used in the Food and Drug Administration (FDA) official method for sampling the environment of commercial laying hens for the detection of Salmonella enterica ssp. serovar Enteritidis (Salmonella Enteritidis). The FDA has also granted provisional acceptance of the National Poultry Improvement Plan's (NPIP) Salmonella isolation and identification methodology for samples taken from table-egg layer flock environments. The NPIP method, as with the FDA method, requires single-swab culturing for the environmental sampling of laying houses for Salmonella Enteritidis. The FDA culture protocol requires a multistep culture enrichment broth, and it is more labor intensive than the NPIP culture protocol, which requires a single enrichment broth. The main objective of this study was to compare the FDA single-swab culturing protocol with the NPIP culturing protocol using a four-swab pool scheme. Single- and multi-laboratory testing of replicate manure drag swab sets (n = 525 and 672, respectively) collected from a Salmonella Enteritidis-free commercial poultry flock was performed by artificially contaminating swabs with Salmonella Enteritidis phage type 4, 8, or 13a at one of two inoculation levels: low (mean 2.5 CFU, range 2.5-2.7) or medium (mean 10.0 CFU, range 7.5-12). For each replicate, testing of a single swab (inoculated), a set of two swabs (one inoculated and one uninoculated), and a set of four swabs (one inoculated and three uninoculated) was conducted using the FDA or NPIP culture method. For swabs inoculated with phage type 8, the NPIP method was more efficient (P < 0.05) than the reference method for all swab sets at both inoculation levels. Single swabs in the NPIP method were significantly (P < 0.05) better than four-swab pools in detecting Salmonella Enteritidis at the lower inoculation level. In the collaborative study (n = 13 labs) using swabs inoculated with Salmonella Enteritidis phage type 13a, there was no significant difference (P > 0.05) between the FDA method (single swabs) and the pooled NPIP method (four-swab pools). The study concludes that the pooled NPIP method is not significantly different from the FDA method for the detection of Salmonella Enteritidis in drag swabs from commercial poultry laying houses. Consequently, based on the FDA's Salmonella Enteritidis rule for equivalency of different methods, the pooled NPIP method should be considered equivalent. Furthermore, the pooled NPIP method was more efficient and cost effective.

  17. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution of an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework for reconstructing super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes, with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, they extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstruction: separate super-sampling resolution-modeling reconstruction, and reconstruction without downsampling, to further improve image quality at the cost of more computation. The authors use simulated reconstructions of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The authors also demonstrate that reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared with reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that greater spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images from super-sampled data sets simultaneously with known acquisition motion. Super-sampling PET acquisition with the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
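
    A minimal sketch of the core MLEM update the framework builds on; in the paper's model, the downsampling and shifting operators would be folded into the system matrix A, which here, together with the data y, is a generic placeholder:

      import numpy as np

      def mlem(A, y, n_iter=50):
          # Basic MLEM iteration: x <- x * A^T(y / (A x)) / (A^T 1).
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])          # sensitivity image
          for _ in range(n_iter):
              proj = A @ x                          # forward projection
              ratio = np.divide(y, proj, out=np.zeros_like(proj),
                                where=proj > 0)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x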

  18. Basis set and electron correlation effects on the polarizability and second hyperpolarizability of model open-shell π-conjugated systems

    NASA Astrophysics Data System (ADS)

    Champagne, Benoı̂t; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi

    2005-03-01

    The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations evidence that the linear and nonlinear responses of the radical cation necessitate the use of a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8 whereas diffuse functions are compulsory for C5H7, in particular, p diffuse functions. In addition to the 6-31G*+pd basis set, basis sets resulting from removing not necessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality as more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively. For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples level whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. In what concerns the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining accurate polarizability whereas for the second hyperpolarizability the gradient corrections are large.

  19. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
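
    A minimal sketch of the kernelized reconstruction idea, assuming a precomputed kernel matrix K built from anatomical-prior features (how K is constructed is not shown here): the image is parameterized as x = K a and the standard MLEM update is applied to the coefficients a, so the effective system matrix is A K.

      import numpy as np

      def kernel_mlem(A, K, y, n_iter=50):
          # MLEM on kernel coefficients a, with image x = K a.
          AK = A @ K
          a = np.ones(K.shape[1])
          sens = AK.T @ np.ones(A.shape[0])
          for _ in range(n_iter):
              proj = AK @ a
              ratio = np.divide(y, proj, out=np.zeros_like(proj),
                                where=proj > 0)
              a *= (AK.T @ ratio) / np.maximum(sens, 1e-12)
          return K @ a  # reconstructed image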

  20. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  1. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  2. Multi-Level Bitmap Indexes for Flash Memory Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Madduri, Kamesh; Canon, Shane

    2010-07-23

    Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
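
    A toy illustration of the binned-index idea described above (the actual work uses compressed bitmaps rather than NumPy boolean arrays, and inserts finer bin levels between the coarse bins and the base data to shrink the candidate checks):

      import numpy as np

      def binned_bitmap_query(values, lo, hi, edges):
          # Range query 'lo <= value < hi' with a binned index: bins fully
          # inside the range are answered from the index alone; boundary
          # bins need a candidate check against the base data.
          bins = np.digitize(values, edges) - 1
          hits = np.zeros(values.size, dtype=bool)
          for b in range(len(edges) - 1):
              b_lo, b_hi = edges[b], edges[b + 1]
              if lo <= b_lo and b_hi <= hi:          # fully covered bin
                  hits |= bins == b
              elif b_hi > lo and b_lo < hi:          # boundary bin
                  cand = bins == b                    # candidate check
                  hits[cand] = (values[cand] >= lo) & (values[cand] < hi)
          return hits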

  3. An Attribute-Treatment Interaction Study: Lexical-Set versus Semantically-Unrelated Vocabulary Instruction

    ERIC Educational Resources Information Center

    Hashemi, Mohammad Reza; Gowdasiaei, Farah

    2005-01-01

    The purpose of the current study was (a) to assess the effectiveness of the lexical-set (LS) and the semantically-unrelated (SU) vocabulary instruction, separately and relative to each other, and (b) to assess the differential effects of the two methods for students of lower and upper English proficiency levels. Two intact EFL classes were…

  4. Evaluating Different Standard-Setting Methods in an ESL Placement Testing Context

    ERIC Educational Resources Information Center

    Shin, Sun-Young; Lidster, Ryan

    2017-01-01

    In language programs, it is crucial to place incoming students into appropriate levels to ensure that course curriculum and materials are well targeted to their learning needs. Deciding how and where to set cutscores on placement tests is thus of central importance to programs, but previous studies in educational measurement disagree as to which…

  5. A Comparison of Traditional and Engaging Lecture Methods in a Large, Professional-Level Course

    ERIC Educational Resources Information Center

    Miller, Cynthia J.; McNear, Jacquee; Metz, Michael J.

    2013-01-01

    In engaging lectures, also referred to as broken or interactive lectures, students are given short periods of lecture followed by "breaks" that can consist of 1-min papers, problem sets, brainstorming sessions, or open discussion. While many studies have shown positive effects when engaging lectures are used in undergraduate settings,…

  6. Monitoring visitor use in backcountry and wilderness: a review of methods

    Treesearch

    Steven J. Hollenhorst; Steven A. Whisman; Alan W. Ewert

    1992-01-01

    Obtaining accurate and usable visitor counts in backcountry and wilderness settings continues to be problematic for resource managers because use of these areas is dispersed and costs can be prohibitively high. An overview of the available methods for obtaining reliable data on recreation use levels is provided. Monitoring methods were compared and selection criteria...

  7. Test pattern generation for ILA sequential circuits

    NASA Technical Reports Server (NTRS)

    Feng, YU; Frenzel, James F.; Maki, Gary K.

    1993-01-01

    An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.

  8. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    PubMed

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0.16-0.33; best kappa = 0.29, 0.23-0.35; child: best CCC = 0.40, 0.19-0.45; best kappa = 0.29, 0.07-0.35). Expert algorithms in a hierarchy offer an accessible, automated method for assigning VA causes of death. Overall population-level accuracy is similar to that of more complex machine learning methods, but without need for a training data set from a prior validation study.
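
    The population-level metric used above is straightforward to compute; a sketch assuming the chance-corrected definition of CSMF accuracy standard in the verbal autopsy literature:

      import numpy as np

      def csmf_accuracy(true_csmf, pred_csmf):
          # CSMF accuracy = 1 - sum|pred - true| / (2 * (1 - min(true))).
          # 1 means perfect cause-fraction recovery; 0 is the worst
          # attainable error for the given true distribution.
          true_csmf = np.asarray(true_csmf, dtype=float)
          pred_csmf = np.asarray(pred_csmf, dtype=float)
          err = np.abs(pred_csmf - true_csmf).sum()
          return 1.0 - err / (2.0 * (1.0 - true_csmf.min()))

      # e.g. csmf_accuracy([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]) -> 0.875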

  9. Ordering of the O-O stretching vibrational frequencies in ozone

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III

    1989-01-01

    The ordering of the nu1 and nu3 stretching frequencies of O3 is incorrectly predicted by most theoretical methods, including some very high level methods. The first systematic single-reference electron correlation method to resolve this ordering is the coupled cluster single and double excitation method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.

  10. Aligning corporate greenhouse-gas emissions targets with climate goals

    NASA Astrophysics Data System (ADS)

    Krabbe, Oskar; Linthorst, Giel; Blok, Kornelis; Crijns-Graus, Wina; van Vuuren, Detlef P.; Höhne, Niklas; Faria, Pedro; Aden, Nate; Pineda, Alberto Carrillo

    2015-12-01

    Corporate climate action is increasingly considered important in driving the transition towards a low-carbon economy. For this, it is critical to ensure translation of global goals to greenhouse-gas (GHG) emissions reduction targets at company level. At the moment, however, there is a lack of clear methods to derive consistent corporate target setting that keeps cumulative corporate GHG emissions within a specific carbon budget (for example, 550-1,300 GtCO2 between 2011 and 2050 for the 2 °C target). Here we propose a method for corporate emissions target setting that derives carbon intensity pathways for companies based on sectoral pathways from existing mitigation scenarios: the Sectoral Decarbonization Approach (SDA). These company targets take activity growth and initial performance into account. Next to target setting on company level, the SDA can be used by companies, policymakers, investors or other stakeholders as a benchmark for tracking corporate climate performance and actions, providing a mechanism for corporate accountability.

  11. Assessment of serum amyloid A levels in the rehabilitation setting in the Florida manatee (Trichechus manatus latirostris).

    PubMed

    Cray, Carolyn; Dickey, Meranda; Brewer, Leah Brinson; Arheart, Kristopher L

    2013-12-01

    The acute phase protein serum amyloid A (SAA) has been previously shown to have value as a biomarker of inflammation and infection in many species, including manatees (Trichechus manatus latirostris). In the current study, results from an automated assay for SAA were used in a rehabilitation setting. Reference intervals were established from clinically normal manatees using the robust method: 0-46 mg/L. More than 30-fold higher mean SAA levels were observed in manatees suffering from cold stress and boat-related trauma. Poor correlations were observed between SAA and total white blood cell count, percentage of neutrophils, albumin, and albumin/globulin ratio. A moderate correlation was observed between SAA and the presence of nucleated red blood cells. The sensitivity of SAA testing was 93% and the specificity was 98%, representing the highest combined values of all the analytes. The results indicate that the automated method for SAA quantitation can provide important clinical data for manatees in a rehabilitation setting.

  12. Relating constrained motion to force through Newton's second law

    NASA Astrophysics Data System (ADS)

    Roithmayr, Carlos M.

    When a mechanical system is subject to constraints its motion is in some way restricted. In accordance with Newton's second law, motion is a direct result of forces acting on a system; hence, constraint is inextricably linked to force. The presence of a constraint implies the application of particular forces needed to compel motion in accordance with the constraint; absence of a constraint implies the absence of such forces. The objective of this thesis is to formulate a comprehensive, consistent, and concise method for identifying a set of forces needed to constrain the behavior of a mechanical system modeled as a set of particles and rigid bodies. The goal is accomplished in large part by expressing constraint equations in vector form rather than entirely in terms of scalars. The method developed here can be applied whenever constraints can be described at the acceleration level by a set of independent equations that are linear in acceleration. Hence, the range of applicability extends to servo-constraints or program constraints described at the velocity level with relationships that are nonlinear in velocity. All configuration constraints, and an important class of classical motion constraints, can be expressed at the velocity level by using equations that are linear in velocity; therefore, the associated constraint equations are linear in acceleration when written at the acceleration level. Two new approaches are presented for deriving equations governing motion of a system subject to constraints expressed at the velocity level with equations that are nonlinear in velocity. By using partial accelerations instead of the partial velocities normally employed with Kane's method, it is possible to form dynamical equations that either do or do not contain evidence of the constraint forces, depending on the analyst's interests.
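
    In symbols, the class of constraints treated can be written schematically (our notation, not necessarily the author's) as a set of m independent equations linear in acceleration, with the associated constraint forces built from the same coefficient matrix through a vector of multipliers:

      \[
        \mathbf{A}(q,\dot{q},t)\,\ddot{q} = \mathbf{b}(q,\dot{q},t),
        \qquad
        \mathbf{F}^{c} = \mathbf{A}^{\mathsf{T}}\boldsymbol{\lambda} .
      \]

    Configuration constraints differentiated twice, and velocity-level constraints (including servo-constraints nonlinear in velocity) differentiated once, both reduce to this form, which is what makes a single unified treatment possible.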

  13. Development of a new control device for stabilizing blood level in reservoir during extracorporeal circulation.

    PubMed

    Momose, Naoki; Yamakoshi, Rie; Kokubo, Ryo; Yasuda, Toru; Iwamoto, Norio; Umeda, Chinori; Nakajima, Itsuro; Yanagisawa, Mitsunobu; Tomizawa, Yasuko

    2010-03-01

    We developed a simple device that stabilizes the blood level in the reservoir of an open extracorporeal circulation circuit by measuring the hydrostatic pressure of the reservoir to control the flow rate of the arterial pump. When the flow rate of the venous return decreases, the rotation speed of the arterial pump is automatically slowed. Consequently, the blood level in the reservoir is stabilized quickly between two arbitrarily set levels and never falls below the pre-set low level. We conducted a basic experiment to verify the operation of the device, using a mock circuit with water. Commercially available pumps and a reservoir were used without modification. The results confirmed that the control method effectively regulates the reservoir liquid level and is highly reliable. The device can also serve as a safety device.

  14. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Venkatapthy, E.

    1984-01-01

    A single level, effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.

  15. Unsupervised MRI segmentation of brain tissues using a local linear model and level set.

    PubMed

    Rivest-Hénault, David; Cheriet, Mohamed

    2011-02-01

    Real-world magnetic resonance imaging of the brain is affected by intensity nonuniformity (INU) phenomena, which make it difficult to fully automate the segmentation process. This difficult task is accomplished in this work by a new method with two original features: (1) each brain tissue class is locally modeled using a local linear region representative, which allows us to account for the INU in an implicit way and to position the region boundaries more accurately; and (2) the region models are embedded in the level set framework, so that the spatial coherence of the segmentation can be controlled in a natural way. Our new method has been tested on the ground-truthed Internet Brain Segmentation Repository (IBSR) database and gave promising results, with Tanimoto indexes ranging from 0.61 to 0.79 for the classification of white matter and from 0.72 to 0.84 for gray matter. To our knowledge, this is the first time a region-based level set model has been used to segment real-world MRI brain scans with convincing results. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Developing students’ learning achievement and experimental skills on buoyancy and the involvement of Newton’s third law through experimental sets

    NASA Astrophysics Data System (ADS)

    Chokchai, Jaewijarn; Khanti, Toedtanya; Sura, Wuttiprom

    2017-09-01

    The purposes of this research were to construct experimental packages on buoyancy and the involvement of Newton's third law, to enhance students' achievement scores and experimental skills on these topics, and to evaluate students' attitudes towards the packages when taught with an inquiry method. The sample consisted of 42 grade 11 students in the 2016 academic year at Hatyaiwittayalai School, Hatyai, Songkhla. The research used a one-group pretest-posttest design. The research tools consisted of an experimental set on buoyancy and the involvement of Newton's third law, a learning achievement test on the same topics, and student attitude questionnaires. The experimental skills and the satisfaction of most students were at a good level. The research showed that learning achievement after instruction with the experimental set was significantly higher than before instruction (at the 0.05 level), and the class-average normalized gain was in the medium-gain range.

  17. Evaluation of airborne asbestos exposure from routine handling of asbestos-containing wire gauze pads in the research laboratory.

    PubMed

    Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D

    2018-07-01

    Three independently conducted asbestos exposure evaluations were performed using wire gauze pads in accordance with standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory environment. Separate teams consisting of a laboratory technician, or a technician and an assistant, simulated common tasks involving wire gauze pads, including heating and direct manipulation of the gauze. Area and personal air samples were collected and evaluated for asbestos consistent with National Institute for Occupational Safety and Health (NIOSH) Methods 7400 and 7402 and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by polarized light microscopy and transmission electron microscopy to determine asbestos content. Among the air samples, chrysotile asbestos was the only fiber found in the first and third experiments, and tremolite asbestos the only fiber found in the second. None of the air samples contained asbestos at concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than the levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.

  18. Superconvergent second order Cartesian method for solving free boundary problem for invadopodia formation

    NASA Astrophysics Data System (ADS)

    Gallinato, Olivier; Poignard, Clair

    2017-06-01

    In this paper, we present a superconvergent second order Cartesian method to solve a free boundary problem with two harmonic phases coupled through the moving interface. The model, recently proposed by the authors and colleagues, describes the formation of cell protrusions. The moving interface is described by a level set function and is advected at the velocity given by the gradient of the inner phase. The finite difference method proposed in this paper consists of a new stabilized ghost fluid method and second order discretizations for the Laplace operator with the boundary conditions (Dirichlet, Neumann or Robin conditions). Interestingly, the method for solving the harmonic subproblems is superconvergent on two levels, in the sense that the first and second order derivatives of the numerical solutions are obtained with second order accuracy, like the solution itself. We exhibit numerical criteria on the data accuracy needed to obtain these properties, and numerical simulations corroborate these criteria. In addition to these properties, we propose an appropriate extension of the level-set velocity to avoid any loss of consistency and to obtain second order accuracy for the complete free boundary problem. Interestingly, we highlight the transmission of the superconvergent properties of the static subproblems and their preservation by the dynamical scheme. Our method is also well suited for quasistatic Hele-Shaw-like or Muskat-like problems.
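
    In the notation standard for level set methods (symbols ours; u denotes the inner phase assumed here), the interface motion described above is the transport equation

      \[
        \frac{\partial \phi}{\partial t} + \mathbf{v}\cdot\nabla\phi = 0,
        \qquad
        \mathbf{v} = \nabla u \ \text{extended off the interface},
      \]

    and part of the paper's contribution is choosing that velocity extension so that the scheme's second order accuracy is not lost.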

  19. A Comparison of the Capability of Sensitivity Level 3 and Sensitivity Level 4 Fluorescent Penetrants to Detect Fatigue Cracks in Various Metals

    NASA Technical Reports Server (NTRS)

    Parker, Bradford H.

    2011-01-01

    In April 2008, NASA-STD-5009 established a requirement that only sensitivity level 4 penetrants are acceptable for NASA Standard Level liquid penetrant inspections. Having NASA contractors change existing processes or perform demonstration tests to certify sensitivity level 3 penetrants posed a potentially huge cost to the Agency. This study was conducted to directly compare the probability of detection (POD) of sensitivity level 3 and level 4 penetrants using both Method A and Method D inspection processes. POD demonstration tests were performed on 6061-Al, Haynes 188 and Ti-6Al-4V crack panel sets. The study results strongly support the conclusion that sensitivity level 3 penetrants are acceptable for NASA Standard Level inspections.

  20. GSOD Based Daily Global Mean Surface Temperature and Mean Sea Level Air Pressure (1982-2011)

    DOE Data Explorer

    Xuan Shi, Dali Wang

    2014-05-05

    This data product contains the complete gridded data set at 1/4 degree resolution in ASCII format. Both mean temperature and mean sea-level air pressure data are available. It also contains the GSOD data (1982-2011) from the NOAA site, including station number, location, temperature, and pressures (sea level and station level). The data package also contains information on the data processing methods.

  1. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
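
    A minimal sketch of the tradeoff the abstract analyzes, assuming a toy sine target (not the paper's model): training error falls monotonically with polynomial degree, while error on points outside the training set eventually rises as the model starts fitting noise.

      import numpy as np

      rng = np.random.default_rng(0)
      x_train = np.linspace(-1, 1, 20)
      y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(20)
      x_test = np.linspace(-1, 1, 200)
      y_test = np.sin(np.pi * x_test)          # noiseless ground truth

      for degree in (1, 3, 9, 15):
          coeffs = np.polyfit(x_train, y_train, degree)
          train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
          test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
          # training error keeps falling; test error eventually rises
          print(degree, train_err, test_err)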

  2. Cost comparison of methods for preparation of neonatal red cell aliquots.

    PubMed

    Lechuga, Diana; Thompson, Christina

    2007-01-01

    The purpose of this study was to compare the preparation costs of two common methods used to prepare neonatal red blood cell transfusion aliquots. Three months of data from a Level 2 and Level 3 neonatal intensive care unit (NICU) were used to determine the comparative cost of red cell aliquot transfusions using an eight-bag aliquot/transfer system or a syringe set system. Using leuko-poor red blood cells collected in Adsol, containing approximately 320 ml of red blood cells and supernatant solution, the average cost of neonatal transfusion aliquots was determined for the Charter Medical syringe set and the Charter Medical eight-bag aliquot/transfer system. A total of 126 red blood cell transfusion aliquots were used over the three-month period. The amount transfused with each aliquot ranged from 5.0 ml to 55.0 ml, with an average of 24.0 ml per aliquot. The cost per aliquot was calculated as $36.25 for the eight-bag aliquot/transfer set and $30.71 for the syringe set. An additional benefit observed with the syringe set was decreased blood waste. When comparing the Charter Medical multiple-aliquot bag sets and the Charter Medical syringe aliquot system for providing neonatal transfusions, the syringe system decreased blood waste and proved more cost effective.

  3. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
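
    For context, a Schwenke-style two-point extrapolation has the general form below (the paper's exact parameterization may differ):

      \[
        E_{\mathrm{CBS}} \approx E_{n} + F\,\bigl(E_{n+1} - E_{n}\bigr),
      \]

    where E_n and E_{n+1} are correlation energies obtained in consecutive basis sets of the family and F is a coefficient fitted for the specific basis-set pair and method.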

  4. Basis set construction for molecular electronic structure theory: natural orbital and Gauss-Slater basis for smooth pseudopotentials.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2011-02-14

    A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.

  5. Satellite observed thermodynamics during FGGE

    NASA Technical Reports Server (NTRS)

    Smith, W. L.

    1985-01-01

    During the First Global Atmospheric Research Program (GARP) Global Experiment (FGGE), determinations of temperature and moisture were made from TIROS-N and NOAA-6 satellite infrared and microwave sounding radiance measurements. The data were processed by two methods differing principally in their horizontal resolution. At the National Earth Satellite Service (NESS) in Washington, D.C., the data were produced operationally with a horizontal resolution of 250 km for inclusion in the FGGE Level IIb data sets for application to large-scale numerical analysis and prediction models. High horizontal resolution (75 km) sounding data sets were produced using man-machine interactive methods for the special observing periods of FGGE at the NASA/Goddard Space Flight Center and archived as supplementary Level IIb. The procedures used for sounding retrieval and the characteristics and quality of these thermodynamic observations are given.

  6. Preliminary Study on Appearance-Based Detection of Anatomical Point Landmarks in Body Trunk CT Images

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni

    Anatomical point landmarks, as a most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which capture the gray-level statistical variations at point landmarks and in their surrounding areas. The models are built from the results of a Principal Component Analysis (PCA) of sample data sets. In addition, we employed a generative learning method that transforms the ROIs of the sample data. We evaluated our method on 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% across 28 landmarks.
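
    A minimal sketch of the appearance-model idea, assuming vectorized gray-level patches sampled around a landmark in training scans (scoring candidates by PCA reconstruction error is a common choice, not necessarily the authors' exact criterion):

      import numpy as np
      from sklearn.decomposition import PCA

      def train_appearance_model(patches, n_components=10):
          # Fit a PCA appearance model from flattened ROI patches
          # (rows of 'patches') sampled around a landmark.
          pca = PCA(n_components=n_components)
          pca.fit(patches)
          return pca

      def landmark_score(pca, patch):
          # Score a candidate position by PCA reconstruction error:
          # low error means the local appearance matches the model.
          recon = pca.inverse_transform(pca.transform(patch[None, :]))
          return float(np.linalg.norm(patch - recon[0]))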

  7. The Hartree-Fock calculation of the magnetic properties of molecular solutes

    NASA Astrophysics Data System (ADS)

    Cammi, R.

    1998-08-01

    In this paper we set out the formal basis for the calculation of the magnetic susceptibility and the nuclear magnetic shielding tensors of molecular solutes described within the framework of the polarizable continuum model (PCM). The theory is developed at the self-consistent field (SCF) level and adapted for use with some of the computational procedures in widest use, i.e., the gauge-invariant atomic orbital (GIAO) method and the continuous set of gauge transformations (CSGT) method. Numerical results for the magnetizabilities and chemical shieldings of acetonitrile and nitromethane in various solvents, computed with the PCM-CSGT method, are also presented.

  8. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight into the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  9. A comparative study of two hazard handling training methods for novice drivers.

    PubMed

    Wang, Y B; Zhang, W; Salvendy, G

    2010-10-01

    The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents made by other people. The two groups were exposed to equal numbers of errors for each training scenario. All the participants were tested and evaluated for hazard handling on a full-cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than for the VGET group. Furthermore, the SET group showed more metacognitive activity and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but this result did not reach significance. SET exhibited higher training effectiveness for hazard response and handling than VGET in the simulated transfer test. The superiority of SET may stem from the higher levels of metacognition and intrinsic motivation observed during training. Future research should assess whether the advantages of error training hold under real road conditions.

  10. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below the limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
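
    The log-probability regression method lends itself to a short sketch: regress the logs of the uncensored values on their normal quantiles, then extrapolate fitted values for the censored ranks before computing summary statistics. The plotting-position formula below (Blom's) is one common choice and is an assumption, not a detail taken from the paper:

      import numpy as np
      from scipy import stats

      def log_probability_regression(uncensored, n_censored):
          n = len(uncensored) + n_censored
          x = np.sort(uncensored)
          # censored values occupy the lowest ranks; uncensored take the rest
          ranks = np.arange(n_censored + 1, n + 1)
          z = stats.norm.ppf((ranks - 0.375) / (n + 0.25))       # Blom positions
          slope, intercept, *_ = stats.linregress(z, np.log(x))
          # extrapolate the censored observations from the fitted line
          z_cens = stats.norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
          full = np.concatenate([np.exp(intercept + slope * z_cens), x])
          return full.mean(), full.std(ddof=1), np.median(full)

      detected = np.array([0.8, 1.2, 1.5, 2.1, 3.0, 4.4])        # uncensored values
      print(log_probability_regression(detected, n_censored=4))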

  11. Semantic classification of business images

    NASA Astrophysics Data System (ADS)

    Erol, Berna; Hull, Jonathan J.

    2006-01-01

    Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features, with high-level OCR output analysis. Several support vector machine (SVM) classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
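
    A hedged sketch of the classifier-combination step, using a one-vs-rest ensemble of SVMs in scikit-learn; the feature matrix and labels below are random placeholders standing in for the paper's text-color, layout, handwriting, and OCR features:

      import numpy as np
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.svm import SVC

      classes = ["document", "whiteboard", "business card", "slide", "regular"]
      rng = np.random.default_rng(0)
      X_train = rng.random((100, 32))                  # placeholder feature vectors
      y_train = rng.integers(0, len(classes), 100)     # placeholder labels

      clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_train, y_train)
      print(classes[clf.predict(rng.random((1, 32)))[0]])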

  12. Characterization of background concentrations of contaminants using a mixture of normal distributions.

    PubMed

    Qian, Song S; Lyons, Regan E

    2006-10-01

    We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing methods sanctioned by the U.S. Environmental Protection Agency (USEPA) (a hypothesis-testing-based method), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Recovery Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censoring in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA-recommended method.
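
    The core idea, modeling concentrations as a mixture of normals on the log scale and reading off the lowest-mean component as background, can be sketched with an EM-fitted mixture; this simplified, non-Bayesian stand-in (scikit-learn) only illustrates the decomposition, not the paper's estimation machinery:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      log_conc = np.concatenate([rng.normal(0.0, 0.3, 200),   # background (synthetic)
                                 rng.normal(1.5, 0.4, 50)])   # impacted (synthetic)

      gm = GaussianMixture(n_components=2, random_state=0).fit(log_conc.reshape(-1, 1))
      bg = int(np.argmin(gm.means_.ravel()))
      print("background: mean", gm.means_.ravel()[bg], "weight", gm.weights_[bg])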

  13. The Explication of Quality Standards in Self-Evaluation

    ERIC Educational Resources Information Center

    Bronkhorst, Larike H.; Baartman, Liesbeth K. J.; Stokking, Karel M.

    2012-01-01

    Education aiming at students' competence development asks for new assessment methods. The quality of these methods needs to be assured using adapted quality criteria and accompanying standards. As such standards are not widely available, this study sets out to examine what level of compliance with quality criteria stakeholders consider…

  14. Statistical Process Control: Going to the Limit for Quality.

    ERIC Educational Resources Information Center

    Training, 1987

    1987-01-01

    Defines the concept of statistical process control, a quality control method used especially in manufacturing. Generally, concept users set specific standard levels that must be met. Makes the point that although employees work directly with the method, management is responsible for its success within the plant. (CH)

  15. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    PubMed

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by body bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique, regional BIA, with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.

  16. Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2014-03-01

    We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G**, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.

  17. Advancement of Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Emiley, Mark S.; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    2000-01-01

    Bi-Level Integrated System Synthesis (BLISS) is a method for optimization of an engineering system, e.g., an aerospace vehicle. BLISS consists of optimizations at the subsystem (module) and system levels to divide the overall large optimization task into sets of smaller ones that can be executed concurrently. In the initial version of BLISS that was introduced and documented in previous publications, analysis in the modules was kept at the early conceptual design level. This paper reports on the next step in the BLISS development in which the fidelity of the aerodynamic drag and structural stress and displacement analyses were upgraded while the method's satisfactory convergence rate was retained.

  18. National smokefree law in New Zealand improves air quality inside bars, pubs and restaurants

    PubMed Central

    Wilson, Nick; Edwards, Richard; Maher, Anthony; Näthe, Jenny; Jalali, Rafed

    2007-01-01

    Background: We aimed to: (i) assess compliance with a new smokefree law in a range of hospitality settings; and (ii) assess the impact of the new law by measuring air quality and making comparisons with air quality in outdoor smoking areas and with international data from hospitality settings. Methods: We included 34 pubs, restaurants and bars, 10 transportation settings, nine other indoor settings, six outdoor smoking areas of bars and restaurants, and six other outdoor settings. These were selected using a mix of random, convenience and purposeful sampling. The number of lit cigarettes among occupants at defined time points in each venue was observed and a portable real-time aerosol monitor was used to measure fine particulate levels (PM2.5). Results: No smoking was observed during the data collection periods among the more than 3785 people present in the indoor venues, nor in any of the transportation settings. The levels of fine particulates were relatively low inside the bars, pubs and restaurants in the urban and rural settings (mean 30-minute level = 16 μg/m3 for 34 venues; range of mean levels for each category: 13 μg/m3 to 22 μg/m3). The results for other smokefree indoor settings (shops, offices etc) and for smokefree transportation settings (eg, buses, trains, etc) were even lower. However, some "outdoor" smoking areas attached to bars/restaurants had high levels of fine particulates, especially those that were partly enclosed (eg, a 30-minute mean value of up to 182 μg/m3 and a peak value of 284 μg/m3). The latter are far above WHO guideline levels for 24-hour exposure (ie, 25 μg/m3). Conclusion: There was very high compliance with the new national smokefree law and this was also reflected by the relatively good indoor air quality in hospitality settings (compared to the "outdoor" smoking areas and the comparable settings in countries that permit indoor smoking). Nevertheless, adopting enhanced regulations (as used in various US and Canadian jurisdictions) may be needed to address hazardous air quality in relatively enclosed "outdoor" smoking areas. PMID:17511877

  19. Hesitant fuzzy linguistic multicriteria decision-making method based on generalized prioritized aggregation operator.

    PubMed

    Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2014-01-01

    Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is multicriteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, based on the generalized prioritized aggregation operator of HFLNs. First, new operations and a comparison method for HFLNs are provided and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an illustrative example is given to demonstrate the effectiveness and feasibility of the proposed method, and the results are compared with those of an existing approach.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive choice of a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
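
    A minimal sketch of the majority-vote idea: each selected feature (one per experimental condition) casts a vote through a simple threshold rule, and the subject's label is the majority of the votes. Thresholds, signs, and data below are placeholders, not the authors' pipeline:

      import numpy as np

      def classify_subject(features, thresholds, signs):
          # features: one session-averaged value per experimental condition;
          # each feature votes independently via a threshold rule
          votes = signs * np.sign(features - thresholds)
          return 1 if votes.sum() > 0 else 0   # 1 = patient group, 0 = control

      x = np.array([0.8, -0.2, 1.1])           # synthetic per-condition features
      print(classify_subject(x, thresholds=np.zeros(3), signs=np.ones(3)))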

  1. Detecting Intrajudge Inconsistency in Standard Setting Using Test Items with a Selected-Response Format. Research Report.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Vos, Hans J.; Chang, Lei

    In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…

  2. A Comparison of Methods of Vertical Equating.

    ERIC Educational Resources Information Center

    Loyd, Brenda H.; Hoover, H. D.

    Rasch model vertical equating procedures were applied to three mathematics computation tests for grades six, seven, and eight. Each level of the test was composed of 45 items in three sets of 15 items, arranged in such a way that tests for adjacent grades had two sets (30 items) in common, and the sixth and eighth grades had 15 items in common. In…

  3. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  4. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162

  5. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.
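
    As a sketch of one standard ingredient of GBLUP/GREML, the additive genomic relationship matrix can be built from 0/1/2-coded SNP genotypes (a VanRaden-style construction; this generic version is an assumption, not the paper's CE/QM implementation):

      import numpy as np

      def additive_grm(genotypes):
          # genotypes: (n_individuals, n_snps) counts of one allele, coded 0/1/2
          p = genotypes.mean(axis=0) / 2.0        # allele frequencies
          z = genotypes - 2.0 * p                 # center by expected genotype
          return (z @ z.T) / (2.0 * np.sum(p * (1.0 - p)))

      G = additive_grm(np.random.randint(0, 3, size=(50, 1000)).astype(float))
      print(G.shape)                              # (50, 50) relationship matrix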

  6. Discovering variable fractional orders of advection-dispersion equations from field data using multi-fidelity Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em

    2017-11-01

    The fractional advection-dispersion equation (FADE) can describe accurately the solute transport in groundwater but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.
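
    A heavily simplified two-level surrogate sketch: fit a Gaussian process to many cheap low-fidelity runs, then a second GP to the discrepancy at the few high-fidelity points, and sum the two for prediction. The fixed unit scaling between fidelities and the toy functions are assumptions; the paper's recursive multi-fidelity scheme is more elaborate:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def f_low(x):  return np.sin(8 * x)             # cheap, biased model
      def f_high(x): return np.sin(8 * x) + 0.3 * x   # expensive "truth"

      x_lo = np.linspace(0, 1, 30)[:, None]           # many cheap runs
      x_hi = np.linspace(0, 1, 6)[:, None]            # few expensive runs

      gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, f_low(x_lo).ravel())
      resid = f_high(x_hi).ravel() - gp_lo.predict(x_hi)
      gp_dl = GaussianProcessRegressor(RBF(0.3)).fit(x_hi, resid)   # discrepancy GP

      x_new = np.linspace(0, 1, 5)[:, None]
      y_mf = gp_lo.predict(x_new) + gp_dl.predict(x_new)            # fused prediction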

  7. Description and evaluation of a peracetic acid air sampling and analysis method.

    PubMed

    Nordling, John; Kinsky, Owen R; Osorio, Magdalena; Pechacek, Nathan

    2017-12-01

    Peracetic acid (PAA) is a corrosive chemical with a pungent odor, which is extensively used in occupational settings and causes various health hazards in exposed workers. Currently, there is no US government agency recommended method that could be applied universally for the sampling and analysis of PAA. Legacy methods for determining airborne PAA vapor levels frequently suffered from cross-reactivity with other chemicals, particularly hydrogen peroxide (H2O2). Therefore, to remove the confounding factor of cross-reactivity, a new viable, sensitive method was developed for assessing PAA exposure levels, based on the differential reaction kinetics of PAA with methyl p-tolylsulfide (MTS), relative to H2O2, to preferentially form methyl p-tolylsulfoxide (MTSO). By quantifying the MTSO concentration produced in the liquid capture solution from an air sampler, using an internal standard, and applying the reaction stoichiometry of PAA and MTS, the original airborne concentration of PAA is determined. After refining this liquid-trap high-performance liquid chromatography (HPLC) method in the laboratory, it was tested in five workplace settings where PAA products were used. PAA levels ranged from the detection limit of 0.013 parts per million (ppm) to 0.4 ppm. The results indicate a viable and potentially dependable method for assessing the concentrations of PAA vapors under occupational exposure scenarios, though only a small number of field measurements were taken while field testing this method. However, the low limit of detection and the precision offered by this method make it a strong candidate for further testing and validation to expand the uses of this liquid-trap HPLC method.
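
    The back-calculation from measured MTSO to airborne PAA is simple stoichiometric arithmetic, sketched below with made-up sampling parameters (trap volume, air volume, and MTSO concentration are hypothetical; 24.45 L/mol is the molar volume of an ideal gas at 25 °C and 1 atm):

      mtso_conc_m = 2.0e-6      # mol/L of MTSO measured by HPLC (hypothetical)
      solution_vol_l = 0.010    # liquid trap volume (hypothetical)
      air_sampled_l = 60.0      # air drawn through the sampler (hypothetical)

      moles_paa = mtso_conc_m * solution_vol_l          # 1:1 PAA:MTSO stoichiometry
      paa_ppm = moles_paa * 24.45 / air_sampled_l * 1e6 # ppm(v) at 25 C, 1 atm
      print(f"airborne PAA: {paa_ppm:.4f} ppm")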

  8. Evaluation of Athletic Training Students' Clinical Proficiencies

    PubMed Central

    Walker, Stacy E; Weidner, Thomas G; Armstrong, Kirk J

    2008-01-01

    Context: Appropriate methods for evaluating clinical proficiencies are essential in ensuring entry-level competence. Objective: To investigate the common methods athletic training education programs use to evaluate student performance of clinical proficiencies. Design: Cross-sectional design. Setting: Public and private institutions nationwide. Patients or Other Participants: All program directors of athletic training education programs accredited by the Commission on Accreditation of Allied Health Education Programs as of January 2006 (n = 337); 201 (59.6%) program directors responded. Data Collection and Analysis: The institutional survey consisted of 11 items regarding institutional and program demographics. The 14-item Methods of Clinical Proficiency Evaluation in Athletic Training survey consisted of respondents' demographic characteristics and Likert-scale items regarding clinical proficiency evaluation methods and barriers, educational content areas, and clinical experience settings. We used analyses of variance and independent t tests to assess differences among athletic training education program characteristics and the barriers, methods, content areas, and settings regarding clinical proficiency evaluation. Results: Of the 3 methods investigated, simulations (n = 191, 95.0%) were the most prevalent method of clinical proficiency evaluation. An independent-samples t test revealed that more opportunities existed for real-time evaluations in the college or high school athletic training room (t(189) = 2.866, P = .037) than in other settings. Orthopaedic clinical examination and diagnosis (4.37 ± 0.826) and therapeutic modalities (4.36 ± 0.738) content areas were scored the highest in sufficient opportunities for real-time clinical proficiency evaluations. An inadequate volume of injuries or conditions (3.99 ± 1.033) and injury/condition occurrence not coinciding with the clinical proficiency assessment timetable (4.06 ± 0.995) were barriers to real-time evaluation. One-way analyses of variance revealed no difference between athletic training education program characteristics and the opportunities for and barriers to real-time evaluations among the various clinical experience settings. Conclusions: No one primary barrier hindered real-time clinical proficiency evaluation. To determine athletic training students' clinical proficiency for entry-level employment, athletic training education programs must incorporate standardized patients or take a disciplined approach to using simulation for instruction and evaluation. PMID:18668172

  9. Screening Test Items for Differential Item Functioning

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2014-01-01

    A method for medical screening is adapted to differential item functioning (DIF). Its essential elements are explicit declarations of the level of DIF that is acceptable and of the loss function that quantifies the consequences of the two kinds of inappropriate classification of an item. Instead of a single level and a single function, sets of…

  10. Assessing Long-Term Outcomes of an Intervention Designed for Pregnant Incarcerated Women

    ERIC Educational Resources Information Center

    Kubiak, Sheryl Pimlott; Kasiborski, Natalie; Schmittel, Emily

    2010-01-01

    Objectives: Approximately 25% of women are pregnant or postpartum when they enter prison. This study assesses a system-level intervention that prevented the separation of mothers and infants at birth, allowing them to reside together in an alternative community setting. Method: Longitudinal analysis of several state-level administrative databases…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu; Sun, Hui; Cheng, Li-Tien

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach.

  12. Repeated 3-minute stair climbing-descending exercise after a meal over 2 weeks increases serum 1,5-anhydroglucitol levels in people with type 2 diabetes

    PubMed Central

    Honda, Hiroto; Igaki, Makoto; Hatanaka, Yuki; Komatsu, Motoaki; Tanaka, Shin-Ichiro; Miki, Tetsuo; Matsuki, Yumika; Takaishi, Tetsuo; Hayashi, Tatsuya

    2017-01-01

    [Purpose] The purpose of this study was to examine the hypoglycemic effect of a postprandial exercise program using brief stair climbing-descending exercise in people with type 2 diabetes. [Subjects and Methods] Seven males with uncomplicated type 2 diabetes (age 68.0 ± 3.7 years) performed two sets of stair climbing-descending exercise 60 and 120 min after each meal for the first 2 weeks but not for the following 2 weeks. Each set of exercise comprised 3 min of continuously repeated brisk climbing to the second floor followed by slow walking down to the first floor of their home. A rest period of 1–2 min was allowed between sets. [Results] Serum 1,5-anhydroglucitol level was significantly higher by 11.5% at the end of the 2-week exercise period than at the baseline. By contrast, the 1,5-anhydroglucitol level at the end of the following 2-week period did not differ from the baseline value. Fasting blood glucose level and insulin resistance index at the end of the exercise period did not differ from the baseline values. [Conclusion] Repeated 3-min bouts of stair climbing-descending exercise after a meal may be a promising method for improving postprandial glycemic control in people with type 2 diabetes. PMID:28210043

  13. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
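
    As a 2-D stand-in for the level-set segmentation step, the sketch below uses scikit-image's morphological geodesic active contour seeded at a user-selected point; this substitutes a readily available algorithm for the authors' fast-marching/geodesic pipeline, and the frame is a random placeholder:

      import numpy as np
      from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                        morphological_geodesic_active_contour)

      frame = np.random.rand(128, 128)            # stand-in for one EBUS frame
      gimage = inverse_gaussian_gradient(frame)   # edge-stopping image
      init = disk_level_set(frame.shape, center=(64, 64), radius=10)  # seed
      seg = morphological_geodesic_active_contour(gimage, 100,
                                                  init_level_set=init,
                                                  smoothing=2, balloon=1)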

  14. The surface elevation table: marker horizon method for measuring wetland accretion and elevation dynamics

    USGS Publications Warehouse

    Callaway, John C.; Cahoon, Donald R.; Lynch, James C.

    2014-01-01

    Tidal wetlands are highly sensitive to processes that affect their elevation relative to sea level. The surface elevation table–marker horizon (SET–MH) method has been used to successfully measure these processes, including sediment accretion, changes in relative elevation, and shallow soil processes (subsidence and expansion due to root production). The SET–MH method is capable of measuring changes at very high resolution (±millimeters) and has been used worldwide both in natural wetlands and under experimental conditions. Marker horizons are typically deployed using feldspar over 50- by 50-cm plots, with replicate plots at each sampling location. Plots are sampled using a liquid N2 cryocorer that freezes a small sample, allowing the handling and measurement of soft and easily compressed soils with minimal compaction. The SET instrument is a portable device that is attached to a permanent benchmark to make high-precision measurements of wetland surface elevation. The SET instrument has evolved substantially in recent decades, and the current rod SET (RSET) is widely used. For the RSET, a 15-mm-diameter stainless steel rod is pounded into the ground until substantial resistance is achieved to establish a benchmark. The SET instrument is attached to the benchmark and leveled such that it reoccupies the same reference plane in space, and pins lowered from the instrument repeatedly measure the same point on the soil surface. Changes in the height of the lowered pins reflect changes in the soil surface. Permanent or temporary platforms provide access to SET and MH locations without disturbing the wetland surface.

  15. Alarm characterization for continuous glucose monitors used as adjuncts to self-monitoring of blood glucose.

    PubMed

    McGarraugh, Geoffrey

    2010-01-01

    Continuous glucose monitoring (CGM) devices available in the United States are approved for use as adjuncts to self-monitoring of blood glucose (SMBG). Alarm evaluation in the Clinical and Laboratory Standards Institute (CLSI) guideline for CGM does not specifically address devices that employ both CGM and SMBG. In this report, an alarm evaluation method is proposed for these devices. The proposed method builds on the CLSI method using data from an in-clinic study of subjects with type 1 diabetes. CGM was used to detect glycemic events, and SMBG was used to determine treatment. To optimize detection of a single glucose level, such as 70 mg/dl, a range of alarm threshold settings was evaluated. The alarm characterization provides a choice of alarm settings that trade off detection and false alarms. Detection of a range of high glucose levels was similarly evaluated. Using low glucose alarms, detection of 70 mg/dl within 30 minutes increased from 64 to 97% as alarm settings increased from 70 to 100 mg/dl, and alarms that did not require treatment (SMBG >85 mg/dl) increased from 18 to 52%. Using high glucose alarms, detection of 180 mg/dl within 30 minutes increased from 87 to 96% as alarm settings decreased from 180 to 165 mg/dl, and alarms that did not require treatment (SMBG <180 mg/dl) increased from 24 to 42%. The proposed alarm evaluation method provides information for choosing appropriate alarm thresholds and reflects the clinical utility of CGM alarms. 2010 Diabetes Technology Society.
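
    The threshold-sweep evaluation can be sketched directly: for each candidate alarm setting, count how often a true low (<=70 mg/dl) is detected and how often the alarm fires when SMBG indicates no treatment is needed (>85 mg/dl). The paired readings below are synthetic, not study data:

      import numpy as np

      rng = np.random.default_rng(0)
      smbg = rng.normal(100, 30, 2000)            # "reference" glucose, mg/dl
      cgm = smbg + rng.normal(0, 12, 2000)        # CGM readings with sensor error

      for threshold in (70, 80, 90, 100):
          alarm = cgm <= threshold
          detection = (alarm & (smbg <= 70)).sum() / max((smbg <= 70).sum(), 1)
          no_treat = (alarm & (smbg > 85)).sum() / max(alarm.sum(), 1)
          print(f"setting {threshold}: detection {detection:.0%}, "
                f"no-treatment alarms {no_treat:.0%}")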

  16. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
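
    At its core a MIP is a running maximum along the viewing axis; the sketch below processes the volume slab by slab so the working set stays cache-resident, which is the flavor of memory-hierarchy optimization the paper investigates (the slab size is an illustrative tuning knob, not a value from the paper):

      import numpy as np

      def mip_slabbed(vol, axis=0, slab=16):
          out_shape = tuple(s for i, s in enumerate(vol.shape) if i != axis)
          out = np.full(out_shape, -np.inf, dtype=vol.dtype)
          for start in range(0, vol.shape[axis], slab):
              sl = [slice(None)] * vol.ndim
              sl[axis] = slice(start, start + slab)
              # fold each slab's maximum into the running projection
              np.maximum(out, vol[tuple(sl)].max(axis=axis), out=out)
          return out

      volume = np.random.rand(128, 128, 128).astype(np.float32)  # CT-like stand-in
      mip = mip_slabbed(volume)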

  17. Shared decision making within goal setting in rehabilitation settings: A systematic review.

    PubMed

    Rose, Alice; Rosewilliam, Sheeba; Soundy, Andrew

    2017-01-01

    To map out and synthesise literature that considers the extent of shared decision-making (SDM) within goal-setting in rehabilitation settings and explore participants' views of this approach within goal-setting. Four databases were systematically searched between January 2005 and September 2015. All articles addressing SDM within goal-setting involving adult rehabilitation patients were included. The literature was critically appraised, followed by a thematic synthesis. The search output identified 3129 studies and 15 articles met the inclusion criteria. Themes that emerged related to methods of SDM within goal-setting, participants' views on SDM, perceived benefits of SDM, barriers and facilitators to using SDM, and suggestions to improve the involvement of patients, resulting in a better goal-setting process. The literature showed various levels of patient involvement within goal-setting; however, few teams adopted an entirely patient-centred approach. Since the review identified clear value in considering SDM within goal-setting for rehabilitation, further research is required, and practice should consider educating both clinicians and patients about this approach. To enhance the use of SDM within goal-setting in rehabilitation, clinicians and patients will likely require further education on this approach. For clinicians this could commence during their training at undergraduate level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Structure and gas phase stability of complexes L···M, where M = Li+, Na+, Mg2+ and L is formaldehyde, formic acid, formate anion, formamide and their sila derivatives

    NASA Astrophysics Data System (ADS)

    Remko, Milan

    Ab initio SCF and DFT methods were used to characterize the gas-phase complexes of selected carbonyl and silacarbonyl bases with Li+, Na+ and Mg2+. Geometries were optimized at the Hartree-Fock ab initio and Becke 3LYP DFT levels with the 6-31G* basis set. Frequency computations were performed at the RHF/6-31G* level of theory. Interaction energies of the cation-coordinated systems were also determined with the MP2/6-31G* method. The effect of extending the basis set (to the 6-31+G* basis) on the computed properties of anion-metal cation complexes was investigated. Calculated energies of formation vary as Mg2+ > Li+ > Na+. The Becke 3LYP DFT binding energies were comparable with those obtained at the correlated MP2 level and are in good agreement with available experimental data.

  19. Transdisciplinary Research on Cancer-Healing Systems Between Biomedicine and the Maya of Guatemala: A Tool for Reciprocal Reflexivity in a Multi-Epistemological Setting.

    PubMed

    Berger-González, Mónica; Stauffacher, Michael; Zinsstag, Jakob; Edwards, Peter; Krütli, Pius

    2016-01-01

    Transdisciplinarity (TD) is a participatory research approach in which actors from science and society work closely together. It offers means for promoting knowledge integration and finding solutions to complex societal problems, and can be applied within a multiplicity of epistemic systems. We conducted a TD process from 2011 to 2014 between indigenous Mayan medical specialists from Guatemala and Western biomedical physicians and scientists to study cancer. Given the immense cultural gap between the partners, it was necessary to develop new methods to overcome biases induced by ethnocentric behaviors and power differentials. This article describes this intercultural cooperation and presents a method of reciprocal reflexivity (Bidirectional Emic-Etic tool) developed to overcome them. As a result of application, researchers observed successful knowledge integration at the epistemic level, the social-organizational level, and the communicative level throughout the study. This approach may prove beneficial to others engaged in facilitating participatory health research in complex intercultural settings. © The Author(s) 2015.

  20. Model potentials for main group elements Li through Rn

    NASA Astrophysics Data System (ADS)

    Sakai, Yoshiko; Miyoshi, Eisaku; Klobukowski, Mariusz; Huzinaga, Sigeru

    1997-05-01

    Model potential (MP) parameters and valence basis sets were systematically determined for the main group elements Li through Rn. For alkali and alkaline-earth metal atoms, the outermost core (n-1)p electrons were treated explicitly together with the ns valence electrons. For the remaining atoms, only the valence ns and np electrons were treated explicitly. The major relativistic effects at the level of Cowan and Griffin's quasi-relativistic Hartree-Fock method (QRHF) were incorporated in the MPs for all atoms heavier than Kr. The valence orbitals thus obtained have inner nodal structure. The reliability of the MP method was tested in calculations for X-, X, and X+ (X=Br, I, and At) at the SCF level and the results were compared with the corresponding values given by the numerical HF (or QRHF) calculations. Calculations that include electron correlation were done for X-, X, and X+ (X=Cl and Br) at the SDCI level and for As2 at the CASSCF and MRSDCI levels. These results were compared with those of all-electron (AE) calculations using the well-tempered basis sets. Close agreement between the MP and AE results was obtained at all levels of the treatment.

  1. Judgments of eye level in light and in darkness

    NASA Technical Reports Server (NTRS)

    Stoper, Arnold E.; Cohen, Malcolm M.

    1986-01-01

    Subjects judged eye level in the light and in the dark by raising and lowering themselves in a dental chair until a stationary target appeared to be at the level of their eyes. This method reduced the possibility of subjects' using visible landmarks as reference points for setting eye level during lighted trials, which may have contributed to artificially low estimates of the variability of this judgment in previous studies. Chair settings were 2.5 deg higher in the dark than in the light, and variability was approximately 66 percent greater in the dark than in the light. These results are discussed in terms of possible interactions of two separate systems, one sensitive to the orientations of visible surfaces and the other sensitive to bodily and gravitational information.

  2. Ethics and equity in research priority-setting: stakeholder engagement and the needs of disadvantaged groups.

    PubMed

    Bhaumik, Soumyadeep; Rana, Sangeeta; Karimkhani, Chante; Welch, Vivian; Armstrong, Rebecca; Pottie, Kevin; Dellavalle, Robert; Dhakal, Purushottam; Oliver, Sandy; Francis, Damian K; Nasser, Mona; Crowe, Sally; Aksut, Baran; Amico, Roberto D

    2015-01-01

    A transparent and evidence-based priority-setting process promotes the optimal use of resources to improve health outcomes. Decision-makers and funders have begun to increasingly engage representatives of patients and healthcare consumers to ensure that research becomes more relevant. However, disadvantaged groups and their needs may not be integrated into the priority-setting process since they do not have a "political voice" or are unable to organise into interest groups. Equitable priority-setting methods need to balance patient needs, values, experiences with population-level issues and issues related to the health system.

  3. An assessment of priority setting process and its implication on availability of emergency obstetric care services in Malindi District, Kenya.

    PubMed

    Nyandieka, Lilian Nyamusi; Kombe, Yeri; Ng'ang'a, Zipporah; Byskov, Jens; Njeru, Mercy Karimi

    2015-01-01

    In spite of the critical role of Emergency Obstetric Care (EmOC) in treating complications arising from pregnancy and childbirth, very few facilities in Kenya are equipped to offer this service. In Malindi, availability of EmOC services does not meet the UN-recommended level of at least one comprehensive and four basic EmOC facilities per 500,000 population. This study was conducted to assess the priority-setting process and its implication for the availability, access and use of EmOC services at the district level. A qualitative study was conducted at both health facility and community levels. Triangulation of data sources and methods was employed: document reviews, in-depth interviews and focus group discussions were conducted with health personnel, facility committee members, stakeholders who offer and/or support maternal health services and programmes, and community members as end users. Data were thematically analysed. Limitations were observed in the extent to which priorities for maternal health services can be set at the district level. The priority-setting process was greatly restricted by guidelines and limited resources from the national level. Relevant stakeholders, including community members, are not involved in the priority-setting process, thereby denying them the opportunity to contribute to it. The findings indicate that considering all local plans in national planning and budgeting, and involving all relevant stakeholders in the priority-setting exercise, is essential to achieving consensus on the provision of emergency obstetric care services among other health service priorities.

  4. Incorporating molecular and functional context into the analysis and prioritization of human variants associated with cancer

    PubMed Central

    Peterson, Thomas A; Nehrt, Nathan L; Park, DoHwan

    2012-01-01

    Background and objective With recent breakthroughs in high-throughput sequencing, identifying deleterious mutations is one of the key challenges for personalized medicine. At the gene and protein level, it has proven difficult to determine the impact of previously unknown variants. A statistical method has been developed to assess the significance of disease mutation clusters on protein domains by incorporating domain functional annotations to assist in the functional characterization of novel variants. Methods Disease mutations aggregated from multiple databases were mapped to domains, and were classified as either cancer- or non-cancer-related. The statistical method for identifying significantly disease-associated domain positions was applied to both sets of mutations and to randomly generated mutation sets for comparison. To leverage the known function of protein domain regions, the method optionally distributes significant scores to associated functional feature positions. Results Most disease mutations are localized within protein domains and display a tendency to cluster at individual domain positions. The method identified significant disease mutation hotspots in both the cancer and non-cancer datasets. The domain significance scores (DS-scores) for cancer form a bimodal distribution with hotspots in oncogenes forming a second peak at higher DS-scores than non-cancer, and hotspots in tumor suppressors have scores more similar to non-cancers. In addition, on an independent mutation benchmarking set, the DS-score method identified mutations known to alter protein function with very high precision. Conclusion By aggregating mutations with known disease association at the domain level, the method was able to discover domain positions enriched with multiple occurrences of deleterious mutations while incorporating relevant functional annotations. The method can be incorporated into translational bioinformatics tools to characterize rare and novel variants within large-scale sequencing studies. PMID:22319177

  5. Snow route optimization.

    DOT National Transportation Integrated Search

    2016-01-01

    Route optimization is a method of creating a set of winter highway treatment routes to meet a range of targets, including service level improvements, resource reallocation and changes to overriding constraints. These routes will allow the operato...

  6. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
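
    The two-level search can be sketched schematically: a GA population of initial-guess parameters is evolved, and each fitness evaluation runs a level-set inversion to a (possibly local) residual minimum. Below, `level_set_inversion` is a cheap placeholder for that expensive forward-model-driven step, not an implementation of it:

      import numpy as np

      rng = np.random.default_rng(1)

      def level_set_inversion(init_params):
          # placeholder: pretend the final head residual is a bumpy function
          # of the parameters defining the initial zonation guess
          return np.sum((init_params - 0.3) ** 2) + 0.05 * np.sin(10 * init_params).sum()

      pop = rng.random((20, 4))        # 20 candidate initial-guess fields
      for generation in range(50):
          fitness = np.array([level_set_inversion(p) for p in pop])  # parallelizable
          parents = pop[np.argsort(fitness)[:10]]                    # selection
          children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.05, (10, 4))
          pop = np.vstack([parents, np.clip(children, 0, 1)])        # mutation only

      best = pop[np.argmin([level_set_inversion(p) for p in pop])]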

  7. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    NASA Astrophysics Data System (ADS)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.
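
    The seed-then-refine idea can be sketched with the scikit-fmm package (an assumed third-party dependency): fast marching propagates a front from the seed, and thresholding the arrival time gives the initial lesion estimate that a level-set stage would then refine. The image, speed map, and threshold are placeholders:

      import numpy as np
      import skfmm  # scikit-fmm, assumed available for the fast-marching step

      image = np.random.rand(128, 128)          # stand-in elasticity image
      gy, gx = np.gradient(image)
      speed = 1.0 / (1.0 + np.hypot(gx, gy))    # front slows down at edges

      phi = np.ones_like(image)
      phi[64, 64] = -1.0                        # seed at the perceived lesion center
      arrival = skfmm.travel_time(phi, speed)
      initial_estimate = arrival < 5.0          # early arrivals = initial lesion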

  8. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations simultaneously determine an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.

  9. DSA Image Blood Vessel Skeleton Extraction Based on Anti-concentration Diffusion and Level Set Method

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming

    Serious types of vascular diseases such as carotid stenosis, aneurysm and vascular malformation may lead to brain stroke, the third leading cause of death and the number one cause of disability. In the clinical practice of diagnosing and treating cerebral vascular diseases, effective detection and description of the vascular structure in two-dimensional angiography sequence images, that is, blood vessel skeleton extraction, has long been a difficult problem. This paper discusses blood vessel skeleton extraction from two-dimensional images based on the level set method. The DSA image is first preprocessed: an anti-concentration diffusion model is applied for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for binarization. Vascular skeleton extraction is then performed using the group marching method (GMM) with fast sweeping. Experiments show that our approach not only reduces computation time but also produces good extraction results.
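
    The regional-division binarization step can be approximated by applying Otsu's threshold block by block, a crude stand-in for the paper's improved local Otsu technique; the block size and input are illustrative:

      import numpy as np
      from skimage.filters import threshold_otsu

      def blockwise_otsu(img, block=64):
          # threshold each region independently, a simple "regional division"
          out = np.zeros_like(img, dtype=bool)
          for i in range(0, img.shape[0], block):
              for j in range(0, img.shape[1], block):
                  tile = img[i:i + block, j:j + block]
                  out[i:i + block, j:j + block] = tile > threshold_otsu(tile)
          return out

      binary = blockwise_otsu(np.random.rand(256, 256))  # stand-in enhanced DSA frame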

  10. Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods

    NASA Astrophysics Data System (ADS)

    Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.

    2008-12-01

    Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, along with an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to, or greater than, those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.

  11. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) in an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the identity-by-state (IBS) kernel. The proposed approach provides a practically useful means of prioritizing SNPs and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real-data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
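
    For readers unfamiliar with the kernels named above, the sketch below shows one common way to build them from an additively coded genotype matrix. The matrix is random stand-in data, and the exact kernel definitions used by SKAT/KNIFE may differ in weighting.

```python
# A minimal sketch of linear and IBS kernels, assuming a genotype matrix
# G (n samples x m SNPs) with additive coding {0, 1, 2}. Invented data.
import numpy as np

def linear_kernel(G):
    """K_ij = <g_i, g_j>: the kernel behind the linear SKAT model."""
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state kernel: average allele sharing across SNPs.
    For genotypes coded 0/1/2, a pair shares 2 - |a - b| alleles."""
    n, m = G.shape
    diff = np.abs(G[:, None, :] - G[None, :, :])   # |g_is - g_js| per SNP
    return (2.0 * m - diff.sum(axis=2)) / (2.0 * m)

G = np.random.default_rng(0).integers(0, 3, size=(50, 20)).astype(float)
K_lin, K_ibs = linear_kernel(G), ibs_kernel(G)
```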

  12. Positive-unlabeled learning for disease gene identification

    PubMed Central

    Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong

    2012-01-01

    Background: Identifying disease genes from the human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (a non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such classifiers are actually built from a noisy negative set N, as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could. Results: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, a reliable negative set RN, a likely positive set LP, a likely negative set LN, and a weak negative set WN. Weighted support vector machines are then used to build a multi-level classifier based on the four training sets and the positive training set P to identify disease genes. Our experimental results demonstrate that the proposed PUDI algorithm outperformed the existing methods significantly. Conclusion: The proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as an unlabeled set U instead of a negative set N. Given that many machine learning problems in biomedical research involve positive and unlabeled data rather than negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification. Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:22923290
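
    The sketch below illustrates the PU idea in miniature, assuming numpy/scikit-learn and invented feature vectors. It collapses PUDI's four-way partition of U into a single reliable-negative set chosen by distance from the positive centroid, so it is a conceptual sketch rather than the published algorithm.

```python
# Hypothetical two-set simplification of PU learning; not PUDI itself.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
P = rng.normal(1.0, 1.0, size=(40, 10))    # positives: known disease genes
U = rng.normal(0.0, 1.0, size=(200, 10))   # unlabeled genes

centroid = P.mean(axis=0)
d = np.linalg.norm(U - centroid, axis=1)
rn = d > np.quantile(d, 0.5)               # farthest half: reliable negatives

X = np.vstack([P, U[rn]])
y = np.r_[np.ones(len(P)), np.zeros(rn.sum())]
w = np.r_[np.full(len(P), 1.0),            # trust positives fully,
          np.full(rn.sum(), 0.5)]          # reliable negatives only partially
clf = SVC(kernel="linear").fit(X, y, sample_weight=w)
scores = clf.decision_function(U)          # rank unlabeled genes by score
```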

  13. Micropores and methods of making and using thereof

    DOEpatents

    Perroud, Thomas D.; Patel, Kamlesh D.; Meagher, Robert J.

    2016-08-02

    Disclosed herein are methods of making micropores of a desired height and/or width between two isotropic wet etched features in a substrate, which comprise single-level isotropic wet etching of the two features using an etchant and a mask distance that is less than 2× a set etch depth. Also disclosed herein are methods using the micropores and microfluidic devices comprising the micropores.

  14. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    A new fuzzy-set-based technique developed for decision making is discussed: a method for automatically generating fuzzy decision rules for image analysis. This paper proposes generating rule-based approaches to problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering irrelevant features and criteria out of the rules.

  15. Changes in Mobility of Children with Cerebral Palsy over Time and across Environmental Settings

    ERIC Educational Resources Information Center

    Tieman, Beth L.; Palisano, Robert J.; Gracely, Edward J.; Rosenbaum, Peter L.; Chiarello, Lisa A.; O'Neil, Margaret E.

    2004-01-01

    This study examined changes in mobility methods of children with cerebral palsy (CP) over time and across environmental settings. Sixty-two children with CP, ages 6-14 years and classified as levels II-IV on the Gross Motor Function Classification System, were randomly selected from a larger data base and followed for three to four years. On each…

  16. Using lot quality assurance sampling to assess access to water, sanitation and hygiene services in a refugee camp setting in South Sudan: a feasibility study.

    PubMed

    Harding, Elizabeth; Beckworth, Colin; Fesselet, Jean-Francois; Lenglet, Annick; Lako, Richard; Valadez, Joseph J

    2017-08-08

    Humanitarian agencies working in refugee camp settings require rapid assessment methods to measure the needs of the populations they serve. Because refugee populations are highly dependent on the services provided, agencies need to carry out these assessments themselves. Lot Quality Assurance Sampling (LQAS) is a method commonly used in development settings to assess populations living in a project catchment area and identify their greatest needs. LQAS could be well suited to serve the needs of refugee populations, but it has rarely been used in humanitarian settings. We adapted and implemented an LQAS survey design in Batil refugee camp, South Sudan, in May 2013 to measure the added value of using it for sub-camp-level assessment. Using pre-existing divisions within the camp, we divided the Batil catchment area into six contiguous segments, called 'supervision areas' (SA). Six teams of two data collectors randomly selected 19 respondents in each SA, whom they interviewed to collect information on water, sanitation, hygiene, and diarrhoea prevalence. These findings were aggregated into a stratified random sample of 114 respondents, and the results were analysed to produce a coverage estimate with a 95% confidence interval for the camp and to prioritize SAs within the camp. The survey provided coverage estimates on WASH indicators as well as evidence that areas of the camp closer to the main road, clinics, and the market were better served than areas at the periphery. This did not hold for all services, however, as sanitation coverage was uniformly high regardless of location. While it was necessary to adapt the standard LQAS protocol used in low-resource communities, the LQAS model proved feasible in a refugee camp setting, and program managers found the results useful at both the catchment-area and SA levels. This study, one of the few adaptations of LQAS for a camp setting, shows that it is a feasible method for regular monitoring, with the added value of enabling camp managers to identify and advocate for the least served areas within the camp. Feedback on the results from stakeholders was overwhelmingly positive.
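
    The camp-level aggregation step can be summarized numerically. The sketch below assumes the 6 SA x 19 respondent design described above, with invented counts, and computes an equal-weight stratified coverage estimate with a normal-approximation 95% confidence interval.

```python
# Minimal sketch of aggregating LQAS supervision-area (SA) samples into a
# camp-level coverage estimate. Counts are invented, not the Batil data.
import numpy as np

yes = np.array([15, 12, 17, 9, 14, 11])   # respondents with access, per SA
n_per_sa = 19
p_sa = yes / n_per_sa                      # SA-level coverage, used to
                                           # prioritize areas within the camp

p = p_sa.mean()                            # camp estimate (equal-size strata)
# stratified variance: (1/H^2) * sum_h p_h (1 - p_h) / n_h
se = np.sqrt(np.sum(p_sa * (1 - p_sa) / n_per_sa) / len(p_sa) ** 2)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"coverage = {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```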

  17. Infusion volume control and calculation using metronome and drop counter based intravenous infusion therapy helper.

    PubMed

    Park, Kyungnam; Lee, Jangyoung; Kim, Soo-Young; Kim, Jinwoo; Kim, Insoo; Choi, Seung Pill; Jeong, Sikyung; Hong, Sungyoup

    2013-06-01

    This study assessed methods of fluid infusion control using an IntraVenous Infusion Controller (IVIC). Four methods of infusion control (dial flow controller, IV set without correction, IV set with correction, and IVIC correction) were compared by measuring the volume delivered by each technique at two infusion rates. The infused fluid volume with the dial flow controller was significantly larger than with the other methods, and the infused fluid volume with the IV set without correction became significantly smaller over time. Regarding the concordance correlation coefficient (CCC) of infused fluid volume in relation to the target volume, IVIC correction showed the highest level of agreement. The flow rate measured in check mode showed good agreement with the volume of fluid collected after passing through the IV system. Thus, an IVIC could assist in providing accurate infusion control. © 2013 Wiley Publishing Asia Pty Ltd.

  18. Risk assessment of water pollution sources based on an integrated k-means clustering and set pair analysis method in the region of Shiyan, China.

    PubMed

    Li, Chunhui; Sun, Lian; Jia, Junxiang; Cai, Yanpeng; Wang, Xuan

    2016-07-01

    Source water areas face many potential water pollution risks, and risk assessment is an effective method to evaluate such risks. In this paper, an integrated model based on k-means clustering analysis and set pair analysis was established for evaluating the risks associated with water pollution in source water areas, in which the weights of indicators were determined through the entropy weight method. The proposed model was then applied to assess water pollution risks in the region of Shiyan, where Danjiangkou Reservoir, China's key source water area for the middle route of the South-to-North Water Diversion Project, is located. Eleven sources with relatively high risk values were identified. At the regional scale, Shiyan City and Danjiangkou City had high risk values in terms of industrial discharge, while Danjiangkou City and Yunxian County had high risk values in terms of agricultural pollution. Overall, the risk values of northern regions close to the main stream and reservoir were higher than those in the south. The risk levels indicated that five sources were at the lower risk level (level II), two at the moderate level (level III), one at the higher level (level IV), and three at the highest level (level V). Risks from industrial discharge were also higher than those from the agricultural sector. It is thus essential to manage the pillar industries of the region of Shiyan and certain agricultural companies in the vicinity of the reservoir to reduce water pollution risks in source water areas. Copyright © 2016 Elsevier B.V. All rights reserved.
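
    The entropy weight step mentioned above follows a standard formula: indicators whose values vary more across sources carry more information and receive larger weights. A minimal sketch, with an invented source-by-indicator matrix in place of the Shiyan data:

```python
# Entropy weight method sketch: X is m pollution sources x n indicators,
# larger values meaning riskier. Data are invented stand-ins.
import numpy as np

X = np.random.default_rng(2).random((11, 5))   # 11 sources, 5 indicators
P = X / X.sum(axis=0)                          # column-normalize to proportions
k = 1.0 / np.log(X.shape[0])
# entropy per indicator; log(1) = 0 guards the P == 0 cells
E = -k * np.sum(P * np.log(np.where(P > 0, P, 1.0)), axis=0)
w = (1 - E) / (1 - E).sum()                    # entropy weights, sum to 1
risk = (X / X.max(axis=0)) @ w                 # weighted risk score per source
print(np.argsort(risk)[::-1])                  # sources ranked by risk
```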

  19. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  20. Incorporating current research into formal higher education settings using Astrobites

    NASA Astrophysics Data System (ADS)

    Sanders, Nathan E.; Kohler, Susanna; Faesi, Chris; Villar, Ashley; Zevin, Michael

    2017-10-01

    A primary goal of many undergraduate- and graduate-level courses in the physical sciences is to prepare students to engage in scientific research or to prepare students for careers that leverage skillsets similar to those used by research scientists. Even for students who may not intend to pursue a career with these characteristics, exposure to the context of applications in modern research can be a valuable tool for teaching and learning. However, a persistent barrier to student participation in research is familiarity with the technical language, format, and context that academic researchers use to communicate research methods and findings with each other: the literature of the field. Astrobites, an online web resource authored by graduate students, has published brief and accessible summaries of more than 1300 articles from the astrophysical literature since its founding in 2010. This article presents three methods for introducing students at all levels within the formal higher education setting to approaches and results from modern research. For each method, we provide a sample lesson plan that integrates content and principles from Astrobites, including step-by-step instructions for instructors, suggestions for adapting the lesson to different class levels across the undergraduate and graduate spectrum, sample student handouts, and a grading rubric.

  1. Computer-aided detection of bladder wall thickening in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon Z.; Gordon, Marshall N.; Samala, Ravi K.

    2018-02-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). Bladder wall thickening is a manifestation of bladder cancer, and its detection is more challenging than the detection of bladder masses. We first segmented the inner and outer bladder walls using our method that combines a deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed, and a gray-level threshold was applied to the contrast and non-contrast regions separately to extract the bladder wall and potential lesions. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify wall-thickening candidates. Volume-based features of the wall-thickening candidates were analyzed with linear discriminant analysis (LDA) to differentiate bladder wall thickenings from false positives. A data set of 112 patients, 87 with wall thickening and 25 with normal bladders, was collected retrospectively with IRB approval and split into independent training and test sets. Of the 57 training cases, 44 had bladder wall thickening and 13 were normal. Of the 55 test cases, 43 had wall thickening and 12 were normal. The LDA classifier was trained with the training set and evaluated with the test set. FROC analysis showed that the system achieved sensitivities of 93.2% and 88.4% for the training and test sets, respectively, at 0.5 FPs/case.

  2. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy, and evaluation of final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variability of natural head position obtained using two methods: the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, with a time interval of 2 months.
    Inclusion criteria:
    • Subjects were randomly selected, aged between 18 and 26 years.
    Exclusion criteria:
    • History of orthodontic treatment
    • Any history of respiratory tract problems or chronic mouth breathing
    • Any congenital deformity
    • History of traumatically induced deformity
    • History of myofascial pain syndrome
    • Any previous history of head and neck surgery.
    The results showed that the two methods were comparable, but reproducibility was greater with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was smaller with the fluid level device method, as shown by precision and the Pearson correlation. The two methods were thus comparable without any significant difference, with the fluid level device method being more reproducible and less variable than the mirror method for obtaining natural head position.

  3. Computer-aided diagnostic method for classification of Alzheimer's disease with atrophic image features on MR images

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Tanaka, Kazuhiro; Koga, Hiroshi; Mihara, Futoshi; Honda, Hiroshi; Sakai, Shuji; Toyofuku, Fukai; Higashida, Yoshiharu

    2008-03-01

    Our goal in this study was to develop a computer-aided diagnostic (CAD) method for classification of Alzheimer's disease (AD) using atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. In this study, the specific regions related to the cerebral atrophy of AD were white matter, gray matter, and CSF regions. Cerebral cortical gray matter regions were determined by extracting the brain and white matter regions based on a level set method whose speed function depended on gradient vectors in the original image and pixel values in grown regions. The CSF regions in cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. Volumes of the specific regions and the cortical thickness were determined as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified by using a support vector machine, which was trained on the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. The area under the receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901 based on a leave-one-out test in identification of AD cases among the 54 cases, including 8 AD patients at early stages. The accuracy for discrimination between the 29 AD patients and 25 non-AD subjects was 0.840, determined at the point where the sensitivity equals the specificity on the ROC curve. This result shows that our CAD method based on atrophic image features may be promising for detecting AD patients using 3-D MR images.

  4. Improving record linkage performance in the presence of missing linkage data.

    PubMed

    Ong, Toan C; Mannino, Michael V; Schilling, Lisa M; Kahn, Michael G

    2014-12-01

    Existing record linkage methods do not handle missing linking field values in an efficient and effective manner. The objective of this study is to investigate three novel methods for improving the accuracy and efficiency of record linkage when linkage fields have missing values. By extending the Fellegi-Sunter scoring implementations available in the open-source Fine-grained Record Linkage (FRIL) software system, we developed three novel methods to solve the missing data problem in record linkage, which we refer to as Weight Redistribution, Distance Imputation, and Linkage Expansion. Weight Redistribution removes fields with missing data from the set of quasi-identifiers and redistributes the weight from the missing attribute across the remaining available linkage fields in proportion to their weights. Distance Imputation imputes the distance between the missing data fields rather than imputing the missing data value. Linkage Expansion adds fields previously considered non-linkage fields to the linkage field set to compensate for the missing information in a linkage field. We tested the linkage methods using simulated data sets with varying field value corruption rates. The methods developed had sensitivity ranging from .895 to .992 and positive predictive values (PPV) ranging from .865 to 1 in data sets with low corruption rates. Increased corruption rates led to decreased sensitivity for all methods. These new record linkage algorithms show promise in terms of accuracy and efficiency and may be valuable for combining large data sets at the patient level to support biomedical and clinical research. Copyright © 2014 Elsevier Inc. All rights reserved.
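
    Weight Redistribution is straightforward to express in code. A minimal sketch, assuming per-field Fellegi-Sunter agreement weights; the field names and weights are invented, and FRIL's actual scoring is more elaborate:

```python
# Hypothetical Weight Redistribution sketch for one record pair.
def pair_score(weights, agreements):
    """weights: field -> match weight. agreements: field -> 1.0 (agree),
    0.0 (disagree), or None (value missing in either record)."""
    present = {f: w for f, w in weights.items() if agreements[f] is not None}
    missing_w = sum(w for f, w in weights.items() if agreements[f] is None)
    total_present = sum(present.values())
    score = 0.0
    for f, w in present.items():
        # redistribute the missing fields' weight proportionally
        w_adj = w + missing_w * (w / total_present)
        score += w_adj * agreements[f]
    return score

weights = {"last_name": 8.0, "dob": 6.0, "zip": 3.0}   # invented weights
# dob is missing here; its weight is spread over last_name and zip
print(pair_score(weights, {"last_name": 1.0, "dob": None, "zip": 1.0}))
```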

  5. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the ³P lines of OI.
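
    The escape-probability closure that CEP builds on can be shown for a single uniform zone. The sketch below iterates the two-level statistical equilibrium with an invented set of rates; CEP itself couples many such zones through zone-coupling terms, which are omitted here.

```python
# Single-zone two-level escape probability sketch; rates are invented.
import numpy as np

A_ul, C_ul, C_lu = 1e-4, 1e-6, 3e-7   # Einstein A and collision rates (s^-1)
N, tau0 = 1e15, 50.0                  # total column density, opacity scale

def beta(tau):
    """Escape probability for a uniform slab-like zone."""
    return (1.0 - np.exp(-tau)) / tau if tau > 1e-8 else 1.0

f_u = 0.1                             # initial guess: upper-level fraction
for _ in range(100):                  # fixed-point iteration on populations
    tau = tau0 * (1.0 - f_u)          # opacity tracks the lower-level column
    # statistical equilibrium: n_l C_lu = n_u (beta A_ul + C_ul)
    f_u_new = C_lu / (C_lu + beta(tau) * A_ul + C_ul)
    if abs(f_u_new - f_u) < 1e-10:
        break
    f_u = f_u_new

cooling = f_u * N * A_ul * beta(tau)  # escaping line photons per cm^2 per s
```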

  6. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography reconstructed off-line and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion, and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate the reconstructed volume and to display any section of it.

  7. Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.

    PubMed

    Koutny, Tomas

    2016-09-01

    We developed a new model of glucose dynamics that calculates the blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments, using an analytical method to determine model parameters. In this study, we validate the model with subjects with type 1 diabetes and combine the analytical method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from the continuously measured interstitial fluid glucose level, using 6 different scenarios to ensure robust validation of the calculation. Over 96% of calculated blood glucose levels fit the A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measurement. We successfully verified the possibility of calculating a continuous blood glucose level for subjects with type 1 diabetes. This study signals a successful transition of our research from animal experiments to human patients. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
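
    The evolutionary fitting step can be illustrated with SciPy's differential evolution standing in for the paper's meta-differential evolution. The one-parameter lag model and the glucose traces below are invented stand-ins, not the study's transport model or data.

```python
# Hypothetical parameter fit by differential evolution; toy model and data.
import numpy as np
from scipy.optimize import differential_evolution

t = np.arange(0, 60, 5.0)                      # minutes
ist = 5.5 + 1.2 * np.sin(t / 20.0)             # interstitial glucose (mmol/L)
bg_ref = 5.5 + 1.2 * np.sin((t + 8) / 20.0)    # reference blood glucose

def predict_bg(tau):
    # toy transport model: blood glucose leads interstitial glucose by tau
    return np.interp(t + tau, t, ist)

def sse(params):
    return np.sum((predict_bg(params[0]) - bg_ref) ** 2)

res = differential_evolution(sse, bounds=[(0.0, 20.0)], seed=0)
print("estimated lag (min):", res.x[0])        # should recover roughly 8
```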

  8. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images

    PubMed Central

    Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.

    2016-01-01

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512×512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600-tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7×10⁶-tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using the automated system) and reported biological region prevalences, with p < 0.001 for eight of the nine cases considered. PMID:27532012

  9. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images.

    PubMed

    Hoffman, R A; Kothari, S; Phan, J H; Wang, M D

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512×512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600-tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7×10⁶-tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using the automated system) and reported biological region prevalences, with p < 0.001 for eight of the nine cases considered.

  10. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the medical and environmental sciences. This work presents a case study in which thiamethoxam residue data on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. The MLE was found to be the most appropriate statistical method for analyzing this residue data set. Using the MLE technique, the analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
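
    The censored MLE is compact to write down: detected residues contribute the density, and non-detects contribute the cumulative probability below their limit of detection. A minimal lognormal sketch with invented values (not PDP data):

```python
# Left-censored lognormal MLE sketch; residue values are invented.
import numpy as np
from scipy import optimize, stats

detects = np.array([0.021, 0.013, 0.044, 0.009, 0.030])  # measured (ppm)
lods    = np.array([0.010, 0.010, 0.005, 0.010])          # non-detect limits

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # detects: lognormal density (constant Jacobian term dropped, as it
    # does not affect the maximizer); non-detects: P(X < LOD)
    ll  = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
    ll += stats.norm.logcdf(np.log(lods), mu, sigma).sum()
    return -ll

res = optimize.minimize(neg_loglik, x0=[np.log(0.02), 0.0],
                        method="Nelder-Mead")
mu, sigma = res.x[0], np.exp(res.x[1])
print("estimated lognormal mean:", np.exp(mu + sigma ** 2 / 2))
```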

  11. Performance of Frozen Density Embedding for Modeling Hole Transfer Reactions.

    PubMed

    Ramos, Pablo; Papadakis, Markos; Pavanello, Michele

    2015-06-18

    We have carried out a thorough benchmark of the frozen density-embedding (FDE) method for calculating hole transfer couplings. We have considered 10 exchange-correlation functionals, 3 nonadditive kinetic energy functionals, and 3 basis sets. Overall, we conclude that with a 7% mean relative unsigned error, the PBE and PW91 functionals coupled with the PW91k nonadditive kinetic energy functional and a TZP basis set constitute the most stable and accurate levels of theory for hole transfer coupling calculations. The FDE-ET method is found to be an excellent tool for computing diabatic couplings for hole transfer reactions.

  12. Mental Health Practitioners' Perceived Levels of Preparedness, Levels of Confidence and Methods Used in the Assessment of Youth Suicide Risk

    ERIC Educational Resources Information Center

    Schmidt, Robert C.

    2016-01-01

    Mental health practitioners working within school or community settings may at any time find themselves working with youth presenting with suicidal thoughts or behaviors. Although always well intended, practitioners are making significant clinical decisions that have high potential for influencing a range of outcomes, including very negative…

  13. The Use of Blended Learning to Facilitate Critical Thinking in Entry Level Occupational Therapy Students

    ERIC Educational Resources Information Center

    Rodriguez, Eva L.

    2009-01-01

    The popularity of using online instruction (both in blended and complete distance learning) in higher education settings is increasing (Appana, 2008; Newton, 2006; Oh, 2006). Occupational therapy educators are using blended learning methods under the assumption that this learning platform will facilitate in their students the required level of…

  14. Comparing and refining karst disturbance index methods through application in an island karst setting

    NASA Astrophysics Data System (ADS)

    Porter, Brandon L.; North, Leslie A.; Polk, Jason S.

    2016-12-01

    The interconnected nature of surface and subsurface karst environments leaves their aquifers and specialized ecosystems easily disturbed by anthropogenic impacts. The karst disturbance index is a holistic tool for measuring disturbance to karst environments and has been applied and refined through studies in Florida and Italy, among others. Through these applications, the karst disturbance index has evolved into two commonly used methods of application, yet it remains open to evaluation and modification for application in other areas around the world. The geographically isolated and highly vulnerable karst area of the municipality of Arecibo, Puerto Rico, provides an opportunity to test the usefulness and validity of the karst disturbance index in an island setting and to compare and further refine the original and modified methods. This study found that both methods of karst disturbance index application resulted in high disturbance scores (original method 0.54, modified method 0.69) and uncovered multiple considerations for the improvement of the index. Evaluating the methods together in an island setting also revealed the need for additional indicators, including Mogote Removal and Coastal Karst. Collectively, the results provide a holistic approach to using the karst disturbance index in an island karst setting and suggest a modified method in which scaling and weighting may compensate for the difference between the original and modified method scores, allowing interested stakeholders to evaluate disturbance regardless of their level of expertise.

  15. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-06-01

    A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired, followed by a series of 7 mobile radiographs with an increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using the projection distance error (PDE) and failure rate (PDE > 20 mm, i.e. a label registered outside the vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends its capability to cases exhibiting strong changes in spinal curvature.

  16. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    PubMed

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology have been proposed for therapy planning, but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model at near real-time rates, especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level set. Experiments on synthetic cases, on a data set from CESC'10, and on one patient with myocardial scar showed that LBM-EP provides results comparable to an FEM implementation while being 10-45 times faster. Fast, accurate, scalable, and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.
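
    The diffusion half of such a solver can be sketched with a textbook lattice-Boltzmann scheme. The code below is a generic D2Q5 diffusion step on a periodic grid, not the LBM-EP implementation: the cell model, conductivity anisotropy, and level-set domain handling are all omitted.

```python
# Generic D2Q5 lattice-Boltzmann diffusion sketch (BGK collision).
import numpy as np

nx = ny = 64
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])            # D2Q5 lattice weights
cx = np.array([0, 1, -1, 0, 0])                     # lattice velocities
cy = np.array([0, 0, 0, 1, -1])
tau = 0.8                                           # relaxation time

u = np.zeros((ny, nx)); u[28:36, 28:36] = 1.0       # initial potential patch
f = w[:, None, None] * u                            # equilibrium initialization

for step in range(200):
    u = f.sum(axis=0)                               # macroscopic potential
    feq = w[:, None, None] * u                      # local equilibrium
    f += (feq - f) / tau                            # BGK collision step
    for k in range(5):                              # streaming (periodic wrap)
        f[k] = np.roll(np.roll(f[k], cx[k], axis=1), cy[k], axis=0)
```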

  17. Active browsing using similarity pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.

    1998-12-01

    In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.

  18. Integration of SAR and DEM data: Geometrical considerations

    NASA Technical Reports Server (NTRS)

    Kropatsch, Walter G.

    1991-01-01

    General principles for integrating data from different sources are derived from the experience of registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally needs the parameters of the flight path of the SAR sensor and the range projection model. The generation of the SAR layover and shadow maps is summarized, and new extensions to this method are proposed.

  19. Low cost estimation of the contribution of post-CCSD excitations to the total atomization energy using density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Sánchez, H. R.; Pis Diez, R.

    2016-04-01

    Based on the Aλ diagnostic for multireference effects recently proposed [U.R. Fogueri, S. Kozuch, A. Karton, J.M. Martin, Theor. Chem. Acc. 132 (2013) 1], a simple method for improving total atomization energies and reaction energies calculated at the CCSD level of theory is proposed. The method requires a CCSD calculation and two additional density functional theory calculations for the molecule. Two sets containing 139 and 51 molecules are used as training and validation sets, respectively, for total atomization energies. An appreciable decrease in the mean absolute error, from 7-10 kcal mol⁻¹ for CCSD to about 2 kcal mol⁻¹ for the present method, is observed. The present method provides atomization energies and reaction energies that compare favorably with relatively recent scaled CCSD methods.

  20. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The method was evaluated on both phantom and human subject data in two sets of complementary experiments. In the first set, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these configurations and quantitatively evaluated the reconstructed surfaces against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set, the authors applied the method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the reconstructed surfaces were qualitatively validated by comparing their local geometry, specifically the mean curvature distributions, against that of a surface extracted from a high-quality CT of the same patient. On phantom point clouds, the method achieved submillimeter reconstruction RMSE under all configurations, demonstrating quantitatively that the proposed method preserves local structural properties of the underlying surface in the presence of noise and missing measurements and is robust to variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = -2.7×10⁻³ mm⁻¹, σ_recon = 7.0×10⁻³ mm⁻¹) and (μ_CT = -2.5×10⁻³ mm⁻¹, σ_CT = 5.3×10⁻³ mm⁻¹), respectively. The agreement of local geometry between the reconstructed surfaces and the CT surface demonstrates that the proposed method faithfully represents the underlying patient surface. The authors have developed an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The method generates a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements, and it serves as an important first step for further development of motion tracking methods during radiotherapy.

  1. Computer-aided detection of bladder mass within non-contrast-enhanced region of CT Urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Zhou, Chuan

    2016-03-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced region of the bladder. In this study, we investigated methods for detection of bladder masses within the non-contrast-enhanced region. The bladder was first segmented using a newly developed deep-learning convolutional neural network in combination with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and a gray-level threshold was employed to segment the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates as a prescreening step. The lesion candidates were segmented using our auto-initialized cascaded level set (AI-CALS) segmentation method, and 27 morphological features were extracted for each candidate. Stepwise feature selection with simplex optimization and leave-one-case-out resampling were used for training and validation of a false positive (FP) classifier. In each leave-one-case-out cycle, features were selected from the training cases and a linear discriminant analysis (LDA) classifier was designed to merge the selected features into a single score for classification of the left-out test case. A data set of 33 cases with 42 biopsy-proven lesions in the non-contrast-enhanced region was collected. During prescreening, the system obtained 83.3% sensitivity at an average of 2.4 FPs/case. After feature extraction and FP reduction by LDA, the system achieved 81.0% sensitivity at 2.0 FPs/case, and 73.8% sensitivity at 1.5 FPs/case.

  2. A goal attainment pain management program for older adults with arthritis.

    PubMed

    Davis, Gail C; White, Terri L

    2008-12-01

    The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age ≥ 65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.

  3. New reporting procedures based on long-term method detection levels and some considerations for interpretations of water-quality data provided by the U.S. Geological Survey National Water Quality Laboratory

    USGS Publications Warehouse

    Childress, Carolyn J. Oblinger; Foreman, William T.; Connor, Brooke F.; Maloney, Thomas J.

    1999-01-01

    This report describes the U.S. Geological Survey National Water Quality Laboratory's approach for determining long-term method detection levels and establishing reporting levels, details relevant new reporting conventions, and provides preliminary guidance on interpreting data reported with the new conventions. At the long-term method detection level concentration, the risk of a false positive detection (analyte reported present at the long-term method detection level when not in sample) is no more than 1 percent. However, at the long-term method detection level, the risk of a false negative occurrence (analyte reported not present when present at the long-term method detection level concentration) is up to 50 percent. Because this false negative rate is too high for use as a default 'less than' reporting level, a more reliable laboratory reporting level is set at twice the determined long-term method detection level. For all methods, concentrations measured between the laboratory reporting level and the long-term method detection level will be reported as estimated concentrations. Non-detections will be censored to the laboratory reporting level. Adoption of the new reporting conventions requires a full understanding of how low-concentration data can be used and interpreted and places responsibility for using and presenting final data with the user rather than with the laboratory. Users must consider that (1) new laboratory reporting levels may differ from previously established minimum reporting levels, (2) long-term method detection levels and laboratory reporting levels may change over time, and (3) estimated concentrations are less certain than concentrations reported above the laboratory reporting level. The availability of uncensored but qualified low-concentration data for interpretation and statistical analysis is a substantial benefit to the user. A decision to censor data after they are reported from the laboratory may still be made by the user, if merited, on the basis of the intended use of the data.

  4. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. This high computational cost limits their direct application to investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. In contrast, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of multi-body dynamics simulations of train set crashes without increasing the computational burden. This is achieved by a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves on the accuracy of traditional multi-body models in train crash simulation while running at the same level of efficiency.
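
    The surrogate idea is easy to prototype. The sketch below uses scikit-learn's RandomForestRegressor in place of the paper's parallel random forest and invents a toy force-displacement law standing in for the offline FE runs:

```python
# Hypothetical surrogate: learn force(velocity, displacement) from "FE" data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
rows = []
for v in [10.0, 15.0, 20.0, 25.0]:                 # simulated crash velocities
    d = np.linspace(0, 0.5, 50)                    # crush displacement (m)
    force = 1e3 * v * d * (1 - d) + rng.normal(0, 5, d.size)  # toy FE output
    rows += [(v, di, fi) for di, fi in zip(d, force)]
data = np.array(rows)                              # columns: v, displ, force

rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(data[:, :2], data[:, 2])                    # (velocity, displ) -> force

d = np.linspace(0, 0.5, 50)
pred = rf.predict(np.c_[np.full(d.size, 17.5), d]) # curve for an unseen 17.5 m/s
```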

  5. Propellant Readiness Level: A Methodological Approach to Propellant Characterization

    NASA Technical Reports Server (NTRS)

    Bossard, John A.; Rhys, Noah O.

    2010-01-01

    A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.

  6. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG

    PubMed Central

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One important hurdle in applying DWT is choosing its settings, which previous works have done empirically or arbitrarily. This study aimed to develop a framework for automatically searching the optimal DWT settings to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data with 7 commonly used wavelet families, to the maximum theoretical level of each mother wavelet. The wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched through an exhaustive selection of frequency bands, yielding optimal accuracy at low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
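
    To make the search space concrete, the PyWavelets sketch below decomposes a stand-in EEG channel with a few candidate mother wavelets to the maximum theoretical level of each; the wavelet list and signal are placeholders, not the study's data or final settings:

    import numpy as np
    import pywt

    signal = np.random.default_rng(0).normal(size=4096)   # stand-in EEG channel

    for name in ["db4", "sym5", "coif3", "haar"]:         # subset of candidates
        wavelet = pywt.Wavelet(name)
        max_level = pywt.dwt_max_level(len(signal), wavelet.dec_len)
        coeffs = pywt.wavedec(signal, wavelet, level=max_level)
        # coeffs[0] is the final approximation; coeffs[1:] are detail bands,
        # each covering roughly half the frequency range of the previous one.
        print(name, "max level:", max_level, "bands:", len(coeffs))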

  7. A novel earth observation based ecological indicator for cyanobacterial blooms

    NASA Astrophysics Data System (ADS)

    Anttila, Saku; Fleming-Lehtinen, Vivi; Attila, Jenni; Junttila, Sofia; Alasalmi, Hanna; Hällfors, Heidi; Kervinen, Mikko; Koponen, Sampsa

    2018-02-01

    Cyanobacteria form spectacular mass occurrences almost annually in the Baltic Sea. These harmful algal blooms are the most visible consequences of marine eutrophication, driven by a surplus of nutrients from anthropogenic sources and internal processes of the ecosystem. We present a novel Cyanobacterial Bloom Indicator (CyaBI) targeted for the ecosystem assessment of eutrophication in marine areas. The method measures the current cyanobacterial bloom situation (the average condition over the most recent 5 years) and compares this to the estimated target level for 'good environmental status' (GES). The current status is derived with an index combining indicative bloom event variables. We used seasonal information on the duration, volume, and severity of algal blooms derived from earth observation (EO) data. The target level for GES was set by using a remote sensing based data set named Fraction with Cyanobacterial Accumulations (FCA; Kahru & Elmgren, 2014) covering the years 1979-2014. Here a shift-detection algorithm for time series was applied to detect time periods in the FCA data where the level of blooms remained low for several consecutive years. The average conditions from these time periods were transformed into respective CyaBI target values to represent the target level for GES. The indicator is shown to pass the three critical factors set for marine indicator development, namely: it measures the current status accurately, the target setting can be scientifically supported, and it can be connected to the ecosystem management goal. An advantage of the CyaBI method is that it is not restricted to the data used in the development work, but can be complemented, or fully applied, by using different types of data sources providing information on cyanobacterial accumulations.

  8. An extended sequential goodness-of-fit multiple testing method for discrete data.

    PubMed

    Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo

    2017-10-01

    The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
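
    For context, the original (continuous) SGoF idea can be sketched in a few lines: count the p-values falling below a threshold gamma, compare that count with a binomial upper bound under the null, and reject the excess number of smallest p-values. The discrete extension proposed in the paper refines this null; the code below shows only the simplified baseline idea:

    import numpy as np
    from scipy.stats import binom

    def sgof(pvalues, gamma=0.05, alpha=0.05):
        p = np.sort(np.asarray(pvalues))
        n = p.size
        observed = int(np.sum(p <= gamma))                 # hits below gamma
        allowed = int(binom.ppf(1.0 - alpha, n, gamma))    # null upper bound
        n_reject = max(observed - allowed, 0)              # excess = rejections
        return p[:n_reject]                                # smallest p-values

    rng = np.random.default_rng(1)
    pvals = np.concatenate([rng.uniform(0, 0.01, 20),      # 20 true signals
                            rng.uniform(0, 1, 980)])       # 980 nulls
    print(len(sgof(pvals)), "hypotheses rejected")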

  9. An ab initio benchmark study of the H + CO --> HCO reaction

    NASA Technical Reports Server (NTRS)

    Woon, D. E.

    1996-01-01

    The H + CO --> HCO reaction has been characterized with correlation consistent basis sets at five levels of theory in order to benchmark the sensitivities of the barrier height and reaction ergicity to the one-electron and n-electron expansions of the electronic wave function. Single and multireference methods are compared and contrasted. The coupled cluster method RCCSD(T) was found to be in very good agreement with Davidson-corrected internally-contracted multireference configuration interaction (MRCI+Q). Second-order Moller-Plesset perturbation theory (MP2) was also employed. The estimated complete basis set (CBS) limits for the barrier height (in kcal/mol) for the five methods, including harmonic zero-point energy corrections, are MP2, 4.66; RCCSD, 4.78; RCCSD(T), 4.15; MRCI, 5.10; and MRCI+Q, 4.07. Similarly, the estimated CBS limits for the ergicity of the reaction are: MP2, -17.99; RCCSD, -13.34; RCCSD(T), -13.79; MRCI, -11.46; and MRCI+Q, -13.70. Additional basis set explorations for the RCCSD(T) method demonstrate that aug-cc-pVTZ sets, even with some functions removed, are sufficient to reproduce the CBS limits to within 0.1-0.3 kcal/mol.

  10. A fast, automated, polynomial-based cosmic ray spike-removal method for the high-throughput processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2013-04-01

    Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogeneous spectra. Although some inhomogeneous spectral data sets can be accommodated (by replacing excessively modified spectra with the originals and removing their spikes with a median filter instead), caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
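
    The core of the procedure is compact enough to sketch. Assuming a baseline-flattened array of similar spectra, the toy implementation below fits a cubic through the set at every wavenumber and clips pixels exceeding the fit by more than 3x a crudely estimated noise standard deviation; it is illustrative, not the published code, and in practice the fit would be iterated after clipping:

    import numpy as np

    def despike(spectra, order=3, k=3.0):
        """spectra: (n_spectra, n_wavenumbers), baseline-flattened."""
        x = np.arange(spectra.shape[0], dtype=float)
        coeffs = np.polyfit(x, spectra, order)       # one cubic per wavenumber
        fit = sum(c * x[:, None] ** (order - i) for i, c in enumerate(coeffs))
        sigma = (spectra - fit).std()                # crude global noise estimate
        return np.minimum(spectra, fit + k * sigma)  # clip spikes to threshold

    # Toy example: 20 similar spectra, one hit by a cosmic ray spike.
    rng = np.random.default_rng(0)
    spectra = np.sin(np.linspace(0, 6, 500)) + rng.normal(0, 0.02, (20, 500))
    spectra[7, 250] += 15.0                          # synthetic spike
    print(spectra[7, 250], "->", round(despike(spectra)[7, 250], 3))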

  11. HIV coreceptor phenotyping in the clinical setting.

    PubMed

    Low, Andrew J; Swenson, Luke C; Harrigan, P Richard

    2008-01-01

    The introduction of CCR5 antagonists increases the options available for constructing antiretroviral regimens. However, this option is coupled with the caveat that patients should be tested for HIV coreceptor tropism prior to initiating CCR5 antagonist-based therapy. Failure to screen for CXCR4 usage increases the risk of using an ineffective drug, thus reducing the likelihood of viral suppression and increasing their risk for developing antiretroviral resistance. This review discusses current and future methods of determining HIV tropism, with a focus on their utility in the clinical setting for screening purposes. Some of these methods include recombinant phenotypic tests, such as the Monogram Trofile assay, as well as genotype-based predictors, heteroduplex tracking assays, and flow cytometry based methods. Currently, the best evidence supports the use of phenotypic methods, although other methods of screening for HIV coreceptor usage prior to the administration of CCR5 antagonists may reduce costs and increase turnaround time over phenotypic methods. The presence of low levels of X4 virus is a challenge to all assay methods, resulting in reduced sensitivity in clinical, patient-derived samples when compared to clonally derived samples. Gaining a better understanding of the output of these assays and correlating them with clinical progression and therapy response will provide some indication on how both genotype-based, and phenotypic assays for determining HIV coreceptor usage can be improved. In addition, leveraging new technologies capable of detecting low-level minority species may provide the most significant advances in ensuring that individuals with low levels of dual/mixed tropic virus are not inadvertently prescribed CCR5 antagonists.

  12. Using the gene ontology to scan multilevel gene sets for associations in genome wide association studies.

    PubMed

    Schaid, Daniel J; Sinnwell, Jason P; Jenkins, Gregory D; McDonnell, Shannon K; Ingle, James N; Kubo, Michiaki; Goss, Paul E; Costantino, Joseph P; Wickerham, D Lawrence; Weinshilboum, Richard M

    2012-01-01

    Gene-set analyses have been widely used in gene expression studies, and some of the developed methods have been extended to genome wide association studies (GWAS). Yet, complications due to linkage disequilibrium (LD) among single nucleotide polymorphisms (SNPs), and variable numbers of SNPs per gene and genes per gene-set, have plagued current approaches, often leading to ad hoc "fixes." To overcome some of the current limitations, we developed a general approach to scan GWAS SNP data for both gene-level and gene-set analyses, building on score statistics for generalized linear models, and taking advantage of the directed acyclic graph structure of the gene ontology when creating gene-sets. However, other types of gene-set structures can be used, such as the popular Kyoto Encyclopedia of Genes and Genomes (KEGG). Our approach combines SNPs into genes, and genes into gene-sets, but assures that positive and negative effects of genes on a trait do not cancel. To control for multiple testing of many gene-sets, we use an efficient computational strategy that accounts for LD and provides accurate step-down adjusted P-values for each gene-set. Application of our methods to two different GWAS provide guidance on the potential strengths and weaknesses of our proposed gene-set analyses. © 2011 Wiley Periodicals, Inc.

  13. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information by applying it to large single and paired data sets, taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as those for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class-count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about the adverse effects of rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and the semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.

  14. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving-bed dynamics. For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.

  15. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
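
    To make the indexing idea concrete, the sketch below uses SciPy's KD-tree as a stand-in for the paper's feature index: descriptors are indexed once offline, and nearest-neighbor queries are then answered without a pair-wise comparison against every image. (A KD-tree loses its logarithmic behavior in high dimensions, so this is a conceptual illustration only.)

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    descriptors = rng.normal(size=(100_000, 64))   # stand-in feature descriptors
    tree = cKDTree(descriptors)                    # built once, offline

    query = rng.normal(size=(1, 64))               # feature from a new image
    dist, idx = tree.query(query, k=5)             # 5 nearest neighbors
    print(idx[0], dist[0])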

  16. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.

  17. Application of gray level mapping in computed tomographic colonography: a pilot study to compare with traditional surface rendering method for identification and differentiation of endoluminal lesions

    PubMed Central

    Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju

    2017-01-01

    Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, the volume rendering technique is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique based on three-dimensional gray level mapping (GM) to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal cases from 30 patients were evaluated in this clinical study. These cases were segmented using a gray-level threshold. The marching cubes algorithm was used to detect isosurfaces in volumetric data sets. GM is applied using the surface gray level of CTC. Radiologists conducted the clinical evaluation of the SR and GM images. The Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and improves the confidence of identification and clinical classification of endoluminal lesions significantly (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM increases identification and differentiation of endoluminal lesions, and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful in identification and differentiation of endoluminal lesions according to their shape and density. PMID:27925483
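
    A rough sketch of the GM idea, assuming a CT volume is available as a NumPy array: extract an isosurface with marching cubes, then sample the volume's gray values at the surface vertices to drive the coloring. scikit-image and SciPy stand in for the authors' implementation here:

    import numpy as np
    from scipy.ndimage import map_coordinates
    from skimage import measure

    volume = np.random.default_rng(0).normal(size=(64, 64, 64))  # stand-in CT block
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)

    # Interpolate the volume's gray level at each surface vertex; these
    # per-vertex intensities would drive the surface coloring.
    gray = map_coordinates(volume, verts.T, order=1)
    print(verts.shape, faces.shape, float(gray.min()), float(gray.max()))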

  18. Attitudes toward Using Social Networking Sites in Educational Settings with Underperforming Latino Youth: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Howard, Keith E.; Curwen, Margie Sauceda; Howard, Nicol R.; Colón-Muñiz, Anaida

    2015-01-01

    The researchers examined the online social networking attitudes of underperforming Latino high school students in an alternative education program that uses technology as the prime venue for learning. A sequential explanatory mixed methods study was used to cross-check multiple sources of data explaining students' levels of comfort with utilizing…

  19. A Comparison of the Incidence of Cricothyrotomy in the Deployed Setting to the Emergency Department at a Level 1 Military Trauma Center: A Descriptive Analysis

    DTIC Science & Technology

    2015-03-01

    the providers in the deployed setting and include the Tactical Combat Casualty Care casualty card. Data are then coded for query and analysis. All...intubate, can’t ventilate” and disruption of head/neck anatomy. Of the four procedures performed in the ED setting, three patients survived to hospital...data from SAMMC are limited by the search methods and data extraction. We searched by Current Procedural Terminology code, which requires that the

  20. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods

    NASA Astrophysics Data System (ADS)

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-01

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  1. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods.

    PubMed

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-14

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  2. Healthy Settings in Hospital - How to Prevent Burnout Syndrome in Nurses: Literature Review.

    PubMed

    Friganović, Adriano; Kovačević, Irena; Ilić, Boris; Žulec, Mirna; Krikšić, Valentina; Grgas Bile, Cecilija

    2017-06-01

    Healthy settings involve a holistic and multidisciplinary method that integrates actions towards risk factors. In hospital settings, a high level of stress can lead to depression, anxiety, decreased job satisfaction and lower loyalty to the organization. Burnout syndrome can be defined as physical, psychological and emotional exhaustion, depersonalization, and a low sense of personal accomplishment. The aim of this literature review was to perform a systematic literature analysis providing scientific evidence on the consequences of constant exposure to high levels of stress and on methods for preventing burnout syndrome among health care workers. The Medline database was searched to identify relevant studies and articles published during the last 15 years. The key words used in this survey were burnout syndrome, prevention, nurses, and healthy settings. Six eligible studies were included in the literature review. Evidence showed nurses to be exposed to stress and to have symptoms of burnout syndrome. As a result of burnout syndrome, chronic fatigue and reduced working capacity occur, thus raising the risk of adverse events. In conclusion, the occurrence of burnout syndrome is a major problem for hospitals and the healthcare system. An action plan for hospital burnout syndrome prevention would greatly reduce its incidence and improve the quality of health care.

  3. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

    The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and unknown parameters of the system were then determined from simulation test scenarios. This was done so as to isolate as many unknown variables as possible and then build on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.

  4. Improving automatic peptide mass fingerprint protein identification by combining many peak sets.

    PubMed

    Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim

    2004-08-05

    An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set contains spectra ranging from strong to weak, and the proposed multiple-scale method is shown to be much better on weak spectra than both the industry standard method and a human operator, and equal in performance to these on strong and medium-strong spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.

  5. Determining the mean hydraulic gradient of ground water affected by tidal fluctuations

    USGS Publications Warehouse

    Serfes, Michael E.

    1991-01-01

    Tidal fluctuations in surface-water bodies produce progressive pressure waves in adjacent aquifers. As these pressure waves propagate inland, ground-water levels and hydraulic gradients continuously fluctuate, creating a situation where a single set of water-level measurements cannot be used to accurately characterize ground-water flow. For example, a time series of water levels measured in a confined aquifer in Atlantic City, New Jersey, showed that the hydraulic gradient ranged from .01 to .001 with a 22-degree change in direction during a tidal day of approximately 25 hours. At any point where ground water tidally fluctuates, the magnitude and direction of the hydraulic gradient fluctuates about the mean or regional hydraulic gradient. The net effect of these fluctuations on ground-water flow can be determined using the mean hydraulic gradient, which can be calculated by comparing mean ground- and surface-water elevations. Filtering methods traditionally used to determine daily mean sea level can be similarly applied to ground water to determine mean levels. Method (1) uses 71 consecutive hourly water-level observations to accurately determine the mean level. Method (2) approximates the mean level using only 25 consecutive hourly observations; however, there is a small error associated with this method.
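
    Method (1) amounts to a centered running mean over 71 hourly values, which suppresses the dominant tidal constituents while preserving the slow regional signal. A small synthetic illustration (not USGS code):

    import numpy as np

    hours = np.arange(24 * 7)                               # one week, hourly
    tide = 0.5 * np.sin(2 * np.pi * hours / 12.42)          # semidiurnal signal
    levels = 10.0 + 0.001 * hours + tide                    # synthetic heads (m)

    window = 71
    mean_levels = np.convolve(levels, np.ones(window) / window, mode="valid")
    print(mean_levels[:3])                                  # tide largely removed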

  6. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).

  7. Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods

    NASA Astrophysics Data System (ADS)

    Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.

    2017-09-01

    Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes and most importantly the implementations have been fine tuned to tackle different applications. Thus, it remains unclear what are the remaining advantages and drawbacks of each method relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) and implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.

  8. Detecting concerted demographic response across community assemblages using hierarchical approximate Bayesian computation.

    PubMed

    Chan, Yvonne L; Schanzenbach, David; Hickerson, Michael J

    2014-09-01

    Methods that integrate population-level sampling from multiple taxa into a single community-level analysis are an essential addition to the comparative phylogeographic toolkit. Detecting how species within communities have demographically tracked each other in space and time is important for understanding the effects of future climate and landscape changes and the resulting acceleration of extinctions, biological invasions, and potential surges in adaptive evolution. Here, we present a statistical framework for such an analysis based on hierarchical approximate Bayesian computation (hABC) with the goal of detecting concerted demographic histories across an ecological assemblage. Our method combines population genetic data sets from multiple taxa into a single analysis to estimate: 1) the proportion of a community sample that demographically expanded in a temporally clustered pulse and 2) when the pulse occurred. To validate the accuracy and utility of this new approach, we use simulation cross-validation experiments and subsequently analyze an empirical data set of 32 avian populations from Australia that are hypothesized to have expanded from smaller refugia populations in the late Pleistocene. The method can accommodate data set heterogeneity such as variability in effective population size, mutation rates, and sample sizes across species and exploits the statistical strength from the simultaneous analysis of multiple species. This hABC framework used in a multitaxa demographic context can increase our understanding of the impact of historical climate change by determining what proportion of the community responded in concert or independently and can be used with a wide variety of comparative phylogeographic data sets as biota-wide DNA barcoding data sets accumulate. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  9. Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

    PubMed

    Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

    2009-05-01

    The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used a methodology that was recently applied successfully to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed the EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, which had not been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for the charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. © 2008 Wiley Periodicals, Inc.
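
    The EEM working equations reduce to a single linear system: an equalized electronegativity chi satisfies chi = A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij for every atom i, together with conservation of the total molecular charge. The sketch below solves that system for a toy diatomic; the A, B, and kappa values are placeholders, not the fitted parameters reported in the paper:

    import numpy as np

    def eem_charges(A, B, R, kappa, total_charge=0.0):
        n = len(A)
        M = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        for i in range(n):
            M[i, i] = B[i]
            for j in range(n):
                if j != i:
                    M[i, j] = kappa / R[i, j]   # Coulomb coupling term
            M[i, n] = -1.0                      # common electronegativity chi
            rhs[i] = -A[i]
        M[n, :n] = 1.0                          # charges sum to total charge
        rhs[n] = total_charge
        sol = np.linalg.solve(M, rhs)
        return sol[:n], sol[n]                  # charges, equalized chi

    # Toy diatomic with placeholder parameters and a 1.1 Angstrom bond.
    q, chi = eem_charges(A=[2.0, 3.5], B=[6.0, 8.0],
                         R=np.array([[0.0, 1.1], [1.1, 0.0]]), kappa=0.5)
    print(q, chi)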

  10. The diagnosis of luteal phase deficiency: a critical review.

    PubMed

    McNeely, M J; Soules, M R

    1988-07-01

    Luteal phase deficiency is an ovulatory dysfunction problem that is subtle but real. It may be the most common ovulatory problem in women. Luteal phase deficiency has been clearly demonstrated in the research setting (1) in spontaneous cycles, (2) when follicular maturation has been impeded, and (3) when luteotrophic influences have been suppressed. The diagnosis of LPD in the clinical setting remains problematic and controversial primarily because there is no practical diagnostic method that has been validated. This article has reviewed the methods that have been used to diagnose LPD. BBT charts are insensitive; these charts reliably diagnose LPD only when there are persistent short luteal phases. There is disagreement whether ovarian follicular size, as determined by ultrasonography, is decreased in LPD; however, ultrasonographic diagnosis of LPD would require daily scans through ovulation, which makes this approach impractical. Mild hyperprolactinemia is a probable cause of LPD in a minority of patients; a physician should obtain a PRL level in LPD women with the realization that there is considerable sampling variability. Determination of serum gonadotropin levels (LH or FSH or both) is not practical for the clinical diagnosis of LPD. Random serum P levels, whether single or multiple, are not helpful in the diagnosis of LPD in individual patients. The secretory pattern of P results in such wide confidence limits that P samples from individuals cannot be compared to normal in a useful manner. Most of the controversy about the diagnosis of LPD has centered around the use of individual serum P levels. The timed endometrial biopsy relies on the endometrium as a bioassay of P over time. The endometrial biopsy has not been carefully validated in terms of its sensitivity or accuracy for the diagnosis of LPD. However, it remains the best current method for the diagnosis of LPD when the standard guidelines for its use are followed. As opposed to the other tests for LPD, awareness of the usefulness of the biopsy has increased as we have learned more about CL physiology. No current research method for the diagnosis of LPD appears to be a practical method that could be applied in the clinical setting. Specific secretory proteins from the endometrium and methods to measure hormone secretion that circumvent the secretory pattern hold promise for improved methods to diagnose LPD in the future.

  11. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. Then, we define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previous published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also all three data sets. Both results showed that many gene sets could be identified with low false discovery rates. A consistency between both results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.

  12. A Consistent Set of Oxidation Number Rules for Intelligent Computer Tutoring

    NASA Astrophysics Data System (ADS)

    Holder, Dale A.; Johnson, Benny G.; Karol, Paul J.

    2002-04-01

    We have developed a method for assigning oxidation numbers that eliminates the inconsistencies and ambiguities found in most conventional textbook rules, yet remains simple enough for beginning students to use. It involves imposition of a two-level hierarchy on a set of rules similar to those already being taught. We recommend emphasizing that the oxidation number method is an approximate model and cannot always be successfully applied. This proper perspective will lead students to apply the rules more carefully in all problems. Whenever failure does occur, it will indicate the limitations of the oxidation number concept itself, rather than merely the failure of a poorly constructed set of rules. We have used these improved rules as the basis for an intelligent tutoring program on oxidation numbers.

  13. The virtual people set: developing computer-generated stimuli for the assessment of pedophilic sexual interest.

    PubMed

    Dombert, Beate; Mokros, Andreas; Brückner, Eva; Schlegl, Verena; Antfolk, Jan; Bäckström, Anna; Zappalà, Angelo; Osterheider, Michael; Santtila, Pekka

    2013-12-01

    The implicit assessment of pedophilic sexual interest through viewing-time methods necessitates visual stimuli. There are grave ethical and legal concerns against using pictures of real children, however. The present report is a summary of findings on a new set of 108 computer-generated stimuli. The images vary in terms of gender (female/male), explicitness (naked/clothed), and physical maturity (prepubescent, pubescent, and adult) of the persons depicted. A series of three studies tested the internal and external validity of the picture set. Studies 1 and 2 yielded good-to-high estimates of observer agreement with regard to stimulus maturity levels by two methods (categorization and paired comparison). Study 3 extended these findings with regard to judgments made by convicted child sexual offenders.

  14. Increasing energy efficiency level of building production based on applying modern mechanization facilities

    NASA Astrophysics Data System (ADS)

    Prokhorov, Sergey

    2017-10-01

    The building industry is currently going through hard times. The cost of operating machines and mechanisms in construction and installation works accounts for a substantial part of total building construction expenses. A highly efficient method is therefore needed that not only increases production but also reduces the direct costs of operating the machine fleet and increases its energy efficiency. To achieve this goal, we plan to use modern methods of work production, high-tech and energy-saving machine tools and technologies, and optimal mechanization sets. The optimization criteria are the operating prime cost and the efficiency of the set. In solving this task we concluded that mechanization of works and energy audits, combined with juxtaposition of production figures, prime costs, and energy resource costs, allow a complex machine fleet supply, improve the ecological level, and increase the quality of construction and installation work.

  15. Decentralized control of large flexible structures by joint decoupling

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Juang, Jer-Nan

    1994-01-01

    This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerting on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only control design but also control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.

  16. Method and system for assigning a confidence metric for automated determination of optic disc location

    DOEpatents

    Karnowski, Thomas P [Knoxville, TN]; Tobin, Kenneth W., Jr.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN]; Chaum, Edward [Memphis, TN]

    2012-07-10

    A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value being selected to represent an acceptable risk of misdiagnosis of a disease having retinal manifestations by the automated technique.
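
    A minimal sketch of the described logic, with one plausible choice of accuracy parameter (the disagreement distance between the two coordinate sets); the patent does not necessarily define the parameter this way, so the names and cut-off value below are illustrative:

    import math

    def confidence_level(coords_a, coords_b, primary_risk_cutoff=25.0):
        """coords_*: (x, y) optic-disc estimates from two different analyses."""
        accuracy = math.dist(coords_a, coords_b)   # disagreement in pixels
        return "high" if accuracy < primary_risk_cutoff else "low"

    print(confidence_level((120, 98), (126, 101)))   # 'high': estimates agree
    print(confidence_level((120, 98), (260, 40)))    # 'low': estimates disagree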

  17. Isolation with Migration Models for More Than Two Populations

    PubMed Central

    Hey, Jody

    2010-01-01

    A method for studying the divergence of multiple closely related populations is described and assessed. The approach of Hey and Nielsen (2007, Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics. Proc Natl Acad Sci USA. 104:2785–2790) for fitting an isolation-with-migration model was extended to the case of multiple populations with a known phylogeny. Analysis of simulated data sets reveals the kinds of history that are accessible with a multipopulation analysis. Necessarily, processes associated with older time periods in a phylogeny are more difficult to estimate; and histories with high levels of gene flow are particularly difficult with more than two populations. However, for histories with modest levels of gene flow, or for very large data sets, it is possible to study large complex divergence problems that involve multiple closely related populations or species. PMID:19955477

  18. Isolation with migration models for more than two populations.

    PubMed

    Hey, Jody

    2010-04-01

    A method for studying the divergence of multiple closely related populations is described and assessed. The approach of Hey and Nielsen (2007, Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics. Proc Natl Acad Sci USA. 104:2785-2790) for fitting an isolation-with-migration model was extended to the case of multiple populations with a known phylogeny. Analysis of simulated data sets reveals the kinds of history that are accessible with a multipopulation analysis. Necessarily, processes associated with older time periods in a phylogeny are more difficult to estimate; and histories with high levels of gene flow are particularly difficult with more than two populations. However, for histories with modest levels of gene flow, or for very large data sets, it is possible to study large complex divergence problems that involve multiple closely related populations or species.

  19. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.

  20. Energetics using the single point IMOMO (integrated molecular orbital+molecular orbital) calculations: Choices of computational levels and model system

    NASA Astrophysics Data System (ADS)

    Svensson, Mats; Humbel, Stéphane; Morokuma, Keiji

    1996-09-01

    The integrated MO+MO (IMOMO) method, recently proposed for geometry optimization, is tested for accurate single point calculations. The principal idea of the IMOMO method is to reproduce results of a high level MO calculation for a large "real" system by dividing it into a small "model" system and the rest and applying different levels of MO theory for the two parts. Test examples are the activation barrier of the SN2 reaction of Cl- + alkyl chlorides, the C=C double bond dissociation of olefins and the energy of reaction for epoxidation of benzene. The effects of basis set and method in the lower level calculation as well as the effects of the choice of model system are investigated in detail. The IMOMO method gives an approximation to the high level MO energetics on the real system, in most cases with very small errors, with a small additional cost over the low level calculation. For instance, when the MP2 (Møller-Plesset second-order perturbation) method is used as the lower level method, the IMOMO method reproduces the results of the very high level MO method within 2 kcal/mol, with less than 50% of additional computer time, for the first two test examples. When the HF (Hartree-Fock) method is used as the lower level method, it is less accurate and depends more on the choice of model system, though the improvement over the HF energy is still very significant. Thus the IMOMO single point calculation provides a method for obtaining reliable local energetics such as bond energies and activation barriers for a large molecular system.
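
    The IMOMO single-point energy is simple bookkeeping over three calculations: E(IMOMO, real) = E(low, real) + E(high, model) - E(low, model). The trivial sketch below shows the arithmetic with placeholder energies; in practice the three values would come from an electronic-structure code:

    def imomo_energy(e_low_real, e_high_model, e_low_model):
        """Combine two levels of theory into one integrated energy (hartree)."""
        return e_low_real + e_high_model - e_low_model

    # Placeholder numbers only, to illustrate the combination rule.
    print(imomo_energy(-539.42, -39.73, -39.55))   # approximates E(high, real)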

  1. Simulated families: A test for different methods of family identification

    NASA Technical Reports Server (NTRS)

    Bendjoya, Philippe; Cellino, Alberto; Froeschle, Claude; Zappala, Vincenzo

    1992-01-01

    A set of families generated in fictitious impact events (leading to a wide range of 'structure' in the orbital element space) has been superimposed on various backgrounds of different densities in order to investigate the efficiency and the limitations of the methods used by Zappala et al. (1990) and by Bendjoya et al. (1990) for identifying asteroid families. In addition, the expected number of interlopers at different significance levels and the possibility of improving the definition of the maximum significance level of a given family were analyzed.

  2. A Multi-Center Space Data System Prototype Based on CCSDS Standards

    NASA Technical Reports Server (NTRS)

    Rich, Thomas M.

    2016-01-01

    Deep space missions beyond Earth orbit will require new methods of data communications in order to compensate for increasing RF propagation delay. The Consultative Committee for Space Data Systems (CCSDS) standard protocols Spacecraft Monitor & Control (SM&C), Asynchronous Message Service (AMS), and Delay/Disruption Tolerant Networking (DTN) provide such a method. The maturity level of this protocol set is, however, insufficient for mission inclusion at this time. This prototype is intended to provide experience that will raise the Technology Readiness Level (TRL) of these protocols.

  3. Assessment of Pharmacists' Perception of Patient Care Competence and Need for Training in Rural and Urban Areas in North Dakota

    ERIC Educational Resources Information Center

    Scott, David M.

    2010-01-01

    Context: Few studies have examined pharmacists' level of patient care competence and need for continuous professional development in rural areas. Purpose: To assess North Dakota pharmacists' practice setting, perceived level of patient care competencies, and the need for professional development in urban and rural areas. Methods: A survey was…

  4. Gas chromatography with pulsed flame photometric detection multiresidue method for organophosphate pesticide and metabolite residues at the parts-per-billion level in representative commodities of fruit and vegetable crop groups.

    PubMed

    Podhorniak, L V; Negron, J F; Griffith, F D

    2001-01-01

    A gas chromatographic method with a pulsed flame photometric detector (P-FPD) is presented for the analysis of 28 parent organophosphate (OP) pesticides and their OP metabolites. A total of 57 organophosphates were analyzed in 10 representative fruit and vegetable crop groups. The method is based on a judicious selection of known procedures from FDA sources such as the Pesticide Analytical Manual and Laboratory Information Bulletins, combined in a manner to recover the OPs and their metabolite(s) at the part-per-billion (ppb) level. The method uses an acetone extraction with either miniaturized Hydromatrix column partitioning or, alternatively, a miniaturized methylene dichloride liquid-liquid partitioning, followed by solid-phase extraction (SPE) cleanup with graphitized carbon black (GCB) and PSA cartridges. Residues are determined by programmed-temperature capillary column gas chromatography with a P-FPD set in the phosphorus mode. The method is designed so that a set of samples can be prepared in 1 working day for overnight instrumental analysis. The recovery data indicate that a daily column-cutting procedure used in combination with the SPE extract cleanup effectively reduces matrix enhancement at the ppb level for many organophosphates. The OPs most susceptible to elevated recoveries, around or greater than 150% based on peak area calculations, were trichlorfon, phosmet, and the metabolites of dimethoate, fenamiphos, fenthion, and phorate.

  5. Comparison of SVM, RF and ELM on an Electronic Nose for the Intelligent Evaluation of Paraffin Samples.

    PubMed

    Men, Hong; Fu, Songlin; Yang, Jialin; Cheng, Meiqi; Shi, Yan; Liu, Jingjing

    2018-01-18

    Paraffin odor intensity is an important quality indicator when a paraffin inspection is performed. Currently, paraffin odor level assessment is mainly dependent on artificial sensory evaluation. In this paper, we developed a paraffin odor analysis system to classify and grade four kinds of paraffin samples. The original feature set was optimized using Principal Component Analysis (PCA) and Partial Least Squares (PLS). Support Vector Machine (SVM), Random Forest (RF), and Extreme Learning Machine (ELM) were applied to three different feature data sets for classification and level assessment of paraffin. For classification, the model based on SVM, with an accuracy rate of 100%, was superior to those based on RF, with an accuracy rate of 98.33-100%, and ELM, with an accuracy rate of 98.01-100%. For level assessment, the R² for the training set was above 0.97 and the R² for the test set was above 0.87. Through comprehensive comparison, the generalization ability of the model based on ELM was superior to those based on SVM and RF. The scoring errors for the three models were 0.0016-0.3494, lower than the error of 0.5-1.0 measured by industry standard experts, meaning these methods have higher prediction accuracy for scoring paraffin odor level.
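
    A minimal sketch of such a classifier comparison is given below. It is not the authors' code: the data shapes, hyperparameters, and the bare-bones ELM implementation (random hidden layer, least-squares output weights) are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        class SimpleELM:
            """Single-hidden-layer extreme learning machine."""
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, y):
                # random, untrained hidden weights; only the output layer is fitted
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                self.classes_, y_idx = np.unique(y, return_inverse=True)
                T = np.eye(len(self.classes_))[y_idx]              # one-hot targets
                self.beta = np.linalg.pinv(self._hidden(X)) @ T    # least squares
                return self

            def predict(self, X):
                return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

        X = np.random.rand(120, 16)                  # e-nose features (placeholder)
        y = np.random.randint(0, 4, size=120)        # four paraffin grades
        X_red = PCA(n_components=5).fit_transform(X) # reduced feature set
        X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, random_state=0)
        for model in (SVC(), RandomForestClassifier(random_state=0), SimpleELM()):
            acc = np.mean(model.fit(X_tr, y_tr).predict(X_te) == y_te)
            print(type(model).__name__, f"accuracy: {acc:.2f}")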

  6. Anharmonic Vibrational Spectroscopy of the F-(H2O)n Complexes, n=1,2

    NASA Technical Reports Server (NTRS)

    Chaban, Galina M.; Xantheas, Sotiris; Gerber, R. Benny; Kwak, Dochan (Technical Monitor)

    2003-01-01

    We report anharmonic vibrational spectra (fundamentals, first overtones) for the F-(H(sub 2)O) and F-(H(sub 2)O)2 clusters computed at the MP2 and CCSD(T) levels of theory with basis sets of triple zeta quality. Anharmonic corrections were estimated via the correlation-corrected vibrational self-consistent field (CC-VSCF) method. The CC-VSCF anharmonic spectra obtained on the potential energy surfaces evaluated at the CCSD(T) level of theory are the first ones reported at a correlated level beyond MP2. We have found that the average basis set effect (TZP vs. aug-cc-pVTZ) is on the order of 30-40 cm(exp -1), whereas the effects of different levels of electron correlation [MP2 vs. CCSD(T)] are smaller, 20-30 cm(exp -1). However, the basis set effect is much larger in the case of the H-bonded O-H stretch of the F-(H(sub 2)O) cluster amounting to 100 cm(exp -1) for the fundamentals and 200 cm (exp -1) for the first overtones. Our calculations are in agreement with the limited available set of experimental data for the F-(H(sub 2)O) and F-(H(sub 2)O)2 systems and provide additional information that can guide further experimental studies.

  7. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry-driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes at the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, namely the standard t test, the moderated t test (also known as limma), and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
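
    The sketch below illustrates one way a rank-product score can be made tolerant of missing values, in the spirit of the improvement described above. It is an assumption-laden illustration (per-column rank normalization, geometric-mean aggregation), not the published algorithm, and it omits the permutation-based significance estimation.

        import numpy as np

        def rank_product(logfc, min_obs=2):
            """logfc: float array (features x replicates) of log fold-changes,
            with NaN marking missing values. Returns per-feature scores;
            smaller means more consistently up-regulated."""
            n_feat, n_rep = logfc.shape
            ranks = np.full_like(logfc, np.nan)
            for j in range(n_rep):
                col = logfc[:, j]
                obs = ~np.isnan(col)
                order = np.argsort(-col[obs])        # rank 1 = largest fold-change
                r = np.empty(obs.sum())
                r[order] = np.arange(1, obs.sum() + 1)
                ranks[obs, j] = r / obs.sum()        # normalize by observed count
            k = np.sum(~np.isnan(ranks), axis=1)     # replicates observed per feature
            score = np.exp(np.nanmean(np.log(ranks), axis=1))  # geometric mean rank
            score[k < min_obs] = np.nan              # too few observations to score
            return score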

  8. Wavelet energy-guided level set-based active contour: a segmentation method to segment highly similar regions.

    PubMed

    Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi

    2010-07-01

    This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have similar intensity distributions with surrounding/adjacent regions. In this work, we adapt a feature computed from wavelet transform called wavelet energy to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions demonstrating the characteristics of intra-region intensity variations and having high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts and to provide a solution for tumour detection.

  9. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease

    PubMed Central

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B.; Torres, Vicente E.; Yu, Alan S.L.; Mrug, Michal; Bennett, William M.; Flessner, Michael F.; Landsittel, Doug P.

    2016-01-01

    Background and objectives Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method against the reference manual segmentation method. Design, setting, participants, & measurements Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of the kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints, formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: the Dice similarity coefficient and the intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Results Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. Conclusions We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. PMID:26797708
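
    For reference, the Dice similarity coefficient used above as the overlap metric can be computed as follows (a generic implementation, not the study's code):

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary segmentation masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0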

  10. How Are Health Research Priorities Set in Low and Middle Income Countries? A Systematic Review of Published Reports

    PubMed Central

    McGregor, Skye; Henderson, Klara J.; Kaldor, John M.

    2014-01-01

    Background Priority setting is increasingly recognised as essential for directing finite resources to support research that maximizes public health benefits and drives health equity. Priority setting processes have been undertaken in a number of low- and middle-income country (LMIC) settings, using a variety of methods. We undertook a critical review of reports of these processes. Methods and Findings We searched electronic databases and online sources for peer-reviewed and non-peer-reviewed literature. We found 91 initiatives that met the inclusion criteria. The majority took place at the global level (46%). For regional or national initiatives, most focused on Sub Saharan Africa (49%), followed by East Asia and Pacific (20%) and Latin America and the Caribbean (18%). A quarter of initiatives aimed to cover all areas of health research, with a further 20% covering communicable diseases. The most frequently used process was a conference or workshop to determine priorities (24%), followed by the Child Health and Nutrition Research Initiative (CHNRI) method (18%). The majority were initiated by an international organization or collaboration (46%). Researchers and government were the most frequently represented stakeholders. There was limited evidence of any implementation or follow-up strategies. Challenges in priority setting included engagement with stakeholders, data availability, and capacity constraints. Conclusions Health research priority setting (HRPS) has been undertaken in a variety of LMIC settings. While not consistently used, the application of established methods provides a means of identifying health research priorities in a repeatable and transparent manner. In the absence of published information on implementation or evaluation, it is not possible to assess what the impact and effectiveness of health research priority setting may have been. PMID:25275315

  11. Non-invasive prediction of hemoglobin levels by principal component and back propagation artificial neural network

    PubMed Central

    Ding, Haiquan; Lu, Qipeng; Gao, Hongzhi; Peng, Zhongqi

    2014-01-01

    To facilitate non-invasive diagnosis of anemia, specific equipment was developed, and a non-invasive hemoglobin (HB) detection method based on a back propagation artificial neural network (BP-ANN) was studied. In this paper, we combined a broadband light source composed of 9 LEDs with a grating spectrograph and a Si photodiode array, and developed a high-performance spectrophotometric system. Using this equipment, fingertip spectra of 109 volunteers were measured. To remove the interference of redundant data, principal component analysis (PCA) was applied to reduce the dimensionality of the collected spectra. The principal components of the spectra were then taken as the input of the BP-ANN model. On this basis we obtained the optimal network structure, in which the node numbers of the input, hidden, and output layers were 9, 11, and 1, respectively. Calibration and correction sample sets were used for analyzing the accuracy of non-invasive hemoglobin measurement, and a prediction sample set was used for testing the adaptability of the model. The correlation coefficient of the network model established by this method is 0.94; the standard errors of calibration, correction, and prediction are 11.29 g/L, 11.47 g/L, and 11.01 g/L, respectively. The results prove that there are good correlations between the spectra of the three sample sets and the actual hemoglobin level, and that the model has good robustness. This indicates that the developed spectrophotometric system has potential for the non-invasive detection of HB levels with the method of BP-ANN combined with PCA. PMID:24761296
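
    The pipeline described above can be sketched with scikit-learn stand-ins as follows; the 9-11-1 topology comes from the abstract, while the arrays and training settings are placeholders, not the authors' data or code.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline

        spectra = np.random.rand(109, 512)        # fingertip spectra (placeholder)
        hb = np.random.uniform(100, 160, 109)     # reference hemoglobin, g/L

        model = make_pipeline(
            PCA(n_components=9),                  # 9 principal components as inputs
            MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0),
        )
        model.fit(spectra, hb)
        print(model.predict(spectra[:3]))         # predicted hemoglobin levels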

  12. Identifying Aquifer Heterogeneities using the Level Set Method

    NASA Astrophysics Data System (ADS)

    Lu, Z.; Vesselinov, V. V.; Lei, H.

    2016-12-01

    Material interfaces between hydrostratigraphic units (HSUs) with contrasting aquifer parameters (e.g., strata and facies with different hydraulic conductivity) have a great impact on flow and contaminant transport in the subsurface. However, identifying the shapes of HSUs in the subsurface is challenging and typically relies on tomographic approaches in which a series of steady-state/transient head measurements at spatially distributed observation locations are analyzed using inverse models. In this study, we developed a mathematically rigorous approach for identifying material interfaces among an arbitrary number of HSUs using the level set method. The approach was first tested with several synthetic cases, where the true spatial distribution of HSUs was assumed to be known and the head measurements were taken from the flow simulation with the true parameter fields. These synthetic inversion examples demonstrate that the level set method is capable of characterizing the spatial distribution of the heterogeneities. We then applied the methodology to a large-scale problem in which the spatial distribution of pumping wells and observation well screens is consistent with the actual aquifer contamination (chromium) site at the Los Alamos National Laboratory (LANL), thereby testing the applicability of the methodology at an actual site, and we present preliminary results using the actual LANL site data. We also investigated the impact of the number of pumping/observation wells and the drawdown observation frequencies/intervals on the quality of the inversion results, and examined the uncertainties associated with the estimated HSU shapes and the accuracy of the results under different hydraulic-conductivity contrasts between the HSUs.
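
    A minimal sketch of the underlying representation (our illustration, not the study's code): the sign of a level set function on the grid selects which hydrostratigraphic unit, and hence which hydraulic conductivity, occupies each cell; the inversion then updates the level set field until simulated heads match the observations.

        import numpy as np

        def conductivity(phi, k1=1e-4, k2=1e-6):
            """Piecewise-constant conductivity: unit 1 where phi > 0, unit 2 elsewhere."""
            return np.where(phi > 0.0, k1, k2)

        x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
        phi = 0.2 - np.hypot(x - 0.5, y - 0.5)    # circular inclusion of unit 1
        K = conductivity(phi)                     # conductivity field for a flow solver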

  13. The Impact of the Tree Prior on Molecular Dating of Data Sets Containing a Mixture of Inter- and Intraspecies Sampling.

    PubMed

    Ritchie, Andrew M; Lo, Nathan; Ho, Simon Y W

    2017-05-01

    In Bayesian phylogenetic analyses of genetic data, prior probability distributions need to be specified for the model parameters, including the tree. When Bayesian methods are used for molecular dating, available tree priors include those designed for species-level data, such as the pure-birth and birth-death priors, and coalescent-based priors designed for population-level data. However, molecular dating methods are frequently applied to data sets that include multiple individuals across multiple species. Such data sets violate the assumptions of both the speciation and coalescent-based tree priors, making it unclear which should be chosen and whether this choice can affect the estimation of node times. To investigate this problem, we used a simulation approach to produce data sets with different proportions of within- and between-species sampling under the multispecies coalescent model. These data sets were then analyzed under pure-birth, birth-death, constant-size coalescent, and skyline coalescent tree priors. We also explored the ability of Bayesian model testing to select the best-performing priors. We confirmed the applicability of our results to empirical data sets from cetaceans, phocids, and coregonid whitefish. Estimates of node times were generally robust to the choice of tree prior, but some combinations of tree priors and sampling schemes led to large differences in the age estimates. In particular, the pure-birth tree prior frequently led to inaccurate estimates for data sets containing a mixture of inter- and intraspecific sampling, whereas the birth-death and skyline coalescent priors produced stable results across all scenarios. Model testing provided an adequate means of rejecting inappropriate tree priors. Our results suggest that tree priors do not strongly affect Bayesian molecular dating results in most cases, even when severely misspecified. However, the choice of tree prior can be significant for the accuracy of dating results in the case of data sets with mixed inter- and intraspecies sampling. [Bayesian phylogenetic methods; model testing; molecular dating; node time; tree prior.]

  14. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully operational Sparse Field Method (SFM) implementation of the LS approach, discussing in detail the algorithm and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to the CCA method, with fewer fluctuations in the etch front and requiring roughly a quarter of the memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
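
    The efficiency of sparse-field and narrow-band schemes comes from updating only cells near the zero level set rather than the full grid. The toy sketch below illustrates that selection step only; the bandwidth and grid are arbitrary assumptions, and the full SFM bookkeeping of layer lists is omitted.

        import numpy as np

        def active_band(phi, half_width=2.0, h=1.0):
            """Boolean mask of cells within half_width cells of the front."""
            return np.abs(phi) <= half_width * h

        phi = np.fromfunction(lambda i, j: np.hypot(i - 32, j - 32) - 10, (64, 64))
        band = active_band(phi)
        print(band.sum(), "of", phi.size, "cells updated per step")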

  15. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.

  16. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in the growth of power demand and in facility replacement is causing aging and lower reliability in power facilities, and the aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetime. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can maintain reliability by lowering the percentage of replacements caused by fatal failures.

  17. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol

    PubMed Central

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O’Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-01-01

    Introduction Provision of palliative care is inequitable, with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Methods and analysis Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. Ethics and dissemination The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and the public. Trial registration number ISRCTN90752212. PMID:29550781

  18. Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of the CHNRI Method

    PubMed Central

    Rudan, Igor; Gibson, Jennifer L.; Ameratunga, Shanthi; El Arifeen, Shams; Bhutta, Zulfiqar A.; Black, Maureen; Black, Robert E.; Brown, Kenneth H.; Campbell, Harry; Carneiro, Ilona; Chan, Kit Yee; Chandramohan, Daniel; Chopra, Mickey; Cousens, Simon; Darmstadt, Gary L.; Gardner, Julie Meeks; Hess, Sonja Y.; Hyder, Adnan A.; Kapiriri, Lydia; Kosek, Margaret; Lanata, Claudio F.; Lansang, Mary Ann; Lawn, Joy; Tomlinson, Mark; Tsai, Alexander C.; Webster, Jayne

    2008-01-01

    This article provides detailed guidelines for the implementation of a systematic method for setting priorities in health research investments that was recently developed by the Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method comprises international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; (xv) feedback and revision. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global. PMID:19090596

  19. Priority setting and implementation in a centralized health system: a case study of Kerman province in Iran.

    PubMed

    Khayatzadeh-Mahani, Akram; Fotaki, Marianna; Harvey, Gillian

    2013-08-01

    The question of how priority setting processes work remains topical, contentious and political in every health system across the globe. It is particularly acute in the context of developing countries because of the mismatch between needs and resources, which is often compounded by an underdeveloped capacity for decision making and weak institutional infrastructures. Yet there is limited research into how the process of setting and implementing health priorities works in developing countries. This study aims to address this gap by examining how a national priority setting programme works in the centralized health system of Iran and what factors influence its implementation at the meso and micro levels. We used a qualitative case study approach, incorporating mixed methods: in-depth interviews at three levels and a textual analysis of policy documents. The data analysis showed that the process of priority setting is non-systematic, there is little transparency as to how specific priorities are decided, and the decisions made are separated from their implementation. This is due to the highly centralized system, whereby health priorities are set at the macro level without involving meso or micro local levels or any representative of the public. Furthermore, the two main benefit packages are decided by different bodies (Ministry of Health and Medical Education and Ministry of Welfare and Social Security) and there is no co-ordination between them. The process is also heavily influenced by political pressure exerted by various groups, mostly medical professionals who attempt to control priority setting in accordance with their interests. Finally, there are many weaknesses in the implementation of priorities, resulting in a growing gap between rural and urban areas in terms of access to health services.

  20. Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

    PubMed Central

    Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.

    2012-01-01

    Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460

  1. [Use of hysteroscopy at the office in gynaecological practice].

    PubMed

    Török, Péter

    2014-10-05

    Nowadays, minimally invasive techniques are a leading trend in medicine. In line with this trend, hysteroscopy has been used in gynecology more and more frequently. Office hysteroscopy allows a faster examination with lower costs and less strain for the patient. The aim of this work was to gain experience with the novel method. The author examined the level of pain during hysteroscopy performed for different indications with different types of instruments. In addition, a novel method for evaluating tubal patency was compared to the gold standard, laparoscopy, in 70 tubes. Office hysteroscopy was performed in 400 cases for indications according to the traditional method. All examinations were performed at the University of Debrecen, Department of Obstetrics and Gynecology, in an outpatient setting. A 2.7 mm diameter optic with a diagnostic or operative sheath was used. Hysteroscopies were scheduled between the 4th and 11th cycle days. Pain level was recorded on a visual analogue scale (VAS) in 70 cases. The hysteroscopic evaluation of tubal patency was compared with the laparoscopic method in 70 cases. It was found that office hysteroscopy can be performed in an outpatient setting, without anesthesia. Pain level showed no difference among subgroups (nulliparous, non-nulliparous, postmenopausal, diagnostic, operative) (mean±SD, 3.5±1.01; p=0.34). For the evaluation of tubal patency, office hysteroscopy showed 92.06% accuracy when compared to laparoscopy. Office hysteroscopy has several advantages over the traditional method: the procedure is fast and places less strain on the patient. The novel method, rather than traditional hysteroscopy, should be used in the work-up of infertility as well.

  2. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results on larger data sets, and methods based on generative or discriminative models have intrinsic limitations in application, such as small-sample learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images were segmented into superpixels to alleviate the sampling issue and to improve sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on these features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random field theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge," such as the symmetrical and continuous characteristics of the tumor in the spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture notes in computer science, vol 7512, pp 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the-art algorithms.
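
    A sketch of the superpixel-plus-Gabor feature step is given below, using scikit-image; the filter-bank frequencies, orientations, and superpixel count are assumptions for illustration, not the paper's settings (and slic with channel_axis=None assumes a recent scikit-image release).

        import numpy as np
        from skimage.filters import gabor
        from skimage.segmentation import slic

        image = np.random.rand(128, 128)                    # one MR slice (placeholder)
        labels = slic(image, n_segments=200, start_label=0, channel_axis=None)

        features = []
        counts = np.bincount(labels.ravel())                # pixels per superpixel
        for frequency in (0.1, 0.2, 0.4):                   # multi-level frequencies
            for theta in (0.0, np.pi / 4, np.pi / 2):       # orientations
                real, _ = gabor(image, frequency=frequency, theta=theta)
                sums = np.bincount(labels.ravel(), weights=real.ravel())
                features.append(sums / counts)              # mean response per superpixel
        features = np.stack(features, axis=1)               # superpixels x features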

  3. Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments

    PubMed Central

    Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed

    2013-01-01

    In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aimed at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of the significantly differentially expressed genes are common across methods. This result highlights the influence of the normalization step on the differential expression analysis. Analyses of real and simulated data sets give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named "Median Ratio Normalization" (MRN), gives the lowest number of false discoveries. Within this group, the MRN method is the least sensitive to modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
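
    The median-of-ratios family to which MRN belongs can be sketched as below; this follows the generic DESeq-style size-factor recipe and is only in the spirit of MRN, whose exact definition differs in detail.

        import numpy as np

        def size_factors(counts):
            """counts: genes x samples matrix of raw read counts."""
            log_counts = np.log(counts)
            ref = log_counts.mean(axis=1)          # log geometric-mean pseudo-reference
            usable = np.isfinite(ref)              # drop genes with any zero count
            return np.exp(np.median(log_counts[usable] - ref[usable, None], axis=0))

        counts = np.random.poisson(50, size=(1000, 6)) + 1   # placeholder data, no zeros
        normalized = counts / size_factors(counts)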

  4. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells in fluorescence microscopy is that the cells frequently disappear from view. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices, while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation, which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated from the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation of each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We present segmentation results for C2C12 cells in fluorescent images, illustrating the effectiveness of our model qualitatively and quantitatively through different numerical examples.
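
    The Chan-Vese region term referred to above compares each voxel with the mean intensity inside and outside the contour. A minimal 2D sketch of that force follows (curvature regularization and the paper's 3D-2D projection are omitted; parameters are illustrative).

        import numpy as np

        def chan_vese_speed(image, phi, lam1=1.0, lam2=1.0):
            """Region force for a level set phi (inside: phi > 0)."""
            inside = phi > 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[~inside].mean() if (~inside).any() else 0.0
            return -lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2

        image = np.random.rand(64, 64)
        phi = np.fromfunction(lambda i, j: 20 - np.hypot(i - 32, j - 32), (64, 64))
        phi += 0.1 * chan_vese_speed(image, phi)   # one explicit evolution step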

  5. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing-data levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10^−3 mm^−1, σ_recon = 7.0 × 10^−3 mm^−1) and (μ_CT = −2.5 × 10^−3 mm^−1, σ_CT = 5.3 × 10^−3 mm^−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  6. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-01-01

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing-data levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10^−3 mm^−1, σ_recon = 7.0 × 10^−3 mm^−1) and (μ_CT = −2.5 × 10^−3 mm^−1, σ_CT = 5.3 × 10^−3 mm^−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747

  7. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    PubMed Central

    2011-01-01

    Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity, or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS), and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm, and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient-level) data units compared to the number of level-2 (hospital-level) units. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no compelling preference (absent a philosophical one) for either a frequentist or a Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches, the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
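
    For concreteness, the simplest model compared in this study is the random-intercept logistic regression for the dichotomized outcome:

        \operatorname{logit} P(y_{ij} = 1) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma_u^2),

    where i indexes patients within center j; the frequentist packages differ mainly in how they integrate over u_j, and the Bayesian ones in the prior placed on \sigma_u.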

  8. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  9. The calculation of accurate harmonic frequencies of large molecules: the polycyclic aromatic hydrocarbons, a case study

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W.; Langhoff, Stephen R.

    1997-07-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Møller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral polycyclic aromatic hydrocarbons (PAHs) where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  10. The importance of dew in watershed-management research

    Treesearch

    James W. Hornbeck

    1964-01-01

    Many studies, using various methods, have examined dew deposition to determine its importance as a source of moisture. For example, Duvdevani (1947) used an optical method in which dew collected on a wooden block was compared with a set of standardized photographs of dew. Potvin (1949) exposed diamond-shaped glass plates at 45° to ground level, so that...

  11. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  12. A simulation study on Bayesian Ridge regression models for several collinearity levels

    NASA Astrophysics Data System (ADS)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if there are collinearities, one or several predictor variables are usually omitted from the model. However, there are sometimes reasons, for instance medical or economic ones, why all the predictors are important and should be included in the model. Ridge regression is often used to cope with collinearity. In this modeling approach, weights for the predictor variables are used in estimating the parameters. Estimation can follow the likelihood concept, or, alternatively, a Bayesian version can be used. The Bayesian method has not matched the likelihood approach in popularity owing to difficulties such as computation; nevertheless, with recent improvements in computational methodology, this caveat is no longer a serious problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method performs better for relatively small sample sizes, and in the other settings it performs similarly to the likelihood method.
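
    The flavor of such a simulation can be sketched with scikit-learn (a rough stand-in: the settings, the ordinary-least-squares baseline, and the data-generating process are our assumptions, not the paper's design).

        import numpy as np
        from sklearn.linear_model import BayesianRidge, LinearRegression

        rng = np.random.default_rng(0)
        n, rho = 30, 0.95                          # small sample, high collinearity
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)  # collinear predictor
        X = np.column_stack([x1, x2])
        y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

        for est in (LinearRegression(), BayesianRidge()):
            est.fit(X, y)
            print(type(est).__name__, np.round(est.coef_, 2))   # shrinkage vs. OLS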

  13. Is HO3 minimum cis or trans? An analytic full-dimensional ab initio isomerization path.

    PubMed

    Varandas, A J C

    2011-05-28

    The minimum energy path for isomerization of HO(3) has been explored in detail using accurate high-level ab initio methods and techniques for extrapolation to the complete basis set limit. In agreement with other reports, the best estimates from both valence-only and all-electron single-reference methods here utilized predict the minimum of the cis-HO(3) isomer to be deeper than the trans-HO(3) one. They also show that the energy varies by less than 1 kcal mol(-1) or so over the full isomerization path. A similar result is found from valence-only multireference configuration interaction calculations with the size-extensive Davidson correction and a correlation consistent triple-zeta basis, which predict the energy difference between the two isomers to be only Δ = -0.1 kcal mol(-1). However, single-point multireference calculations carried out at the optimum triple-zeta geometry with basis sets of the correlation consistent family but cardinal numbers up to X = 6 lead, upon dual-level extrapolation to the complete basis set limit, to Δ = (0.12 ± 0.05) kcal mol(-1). In turn, extrapolations with the all-electron single-reference coupled-cluster method including the perturbative triples correction yield values of Δ = -0.19 and -0.03 kcal mol(-1) when done from triple-quadruple and quadruple-quintuple zeta pairs with two basis sets of increasing quality, namely cc-pVXZ and aug-cc-pVXZ. Yet, if a value of 0.25 kcal mol(-1) is added to account for the effect of triple and perturbative quadruple excitations with the VTZ basis set, one obtains a coupled cluster estimate of Δ = (0.14 ± 0.08) kcal mol(-1). It is then shown for the first time from systematic ab initio calculations that the trans-HO(3) isomer is more stable than the cis one, in agreement with the available experimental evidence. Inclusion of the best reported zero-point energy difference (0.382 kcal mol(-1)) from multireference configuration interaction calculations further enhances the relative stability to ΔE(ZPE) = (0.51 ± 0.08) kcal mol(-1). A scheme is also suggested to model the full-dimensional isomerization potential-energy surface using a quadratic expansion that is parametrically represented by a Fourier analysis in the torsion angle. The method, illustrated at the raw and complete-basis-set-limit coupled-cluster levels, can provide a valuable tool for a future analysis of the available (incomplete thus far) experimental rovibrational data. This journal is © the Owner Societies 2011
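
    For reference, complete-basis-set extrapolations of the kind mentioned here are often performed with a two-point inverse-cube formula in the cardinal number X. One common form is shown below; the paper's exact dual-level scheme may differ.

```latex
% Two-point CBS extrapolation of the correlation energy (one standard choice):
\begin{align}
  E_X &= E_\infty + \frac{A}{X^{3}}, \\
  E_\infty &= \frac{X^{3}\,E_X - (X-1)^{3}\,E_{X-1}}{X^{3} - (X-1)^{3}},
\end{align}
% where X is the cardinal number of the correlation consistent basis set.
```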

  14. An Efficient Method to Evaluate Intermolecular Interaction Energies in Large Systems Using Overlapping Multicenter ONIOM and the Fragment Molecular Orbital Method

    PubMed Central

    Asada, Naoya; Fedorov, Dmitri G.; Kitaura, Kazuo; Nakanishi, Isao; Merz, Kenneth M.

    2012-01-01

    We propose an approach based on the overlapping multicenter ONIOM to evaluate intermolecular interaction energies in large systems and demonstrate its accuracy on several representative systems in the complete basis set limit at the MP2 and CCSD(T) level of theory. In the application to the intermolecular interaction energy between insulin dimer and 4′-hydroxyacetanilide at the MP2/CBS level, we use the fragment molecular orbital method for the calculation of the entire complex assigned to the lowest layer in three-layer ONIOM. The developed method is shown to be efficient and accurate in the evaluation of the protein-ligand interaction energies. PMID:23050059
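
    The ONIOM extrapolation underlying this approach can be summarized by the standard two-layer energy expression below; the paper's overlapping multicenter, three-layer variant generalizes this idea, with the fragment molecular orbital method supplying the lowest-layer calculation.

```latex
% Standard two-layer ONIOM energy (shown for orientation; the paper's
% three-layer, multicenter scheme builds on the same subtractive idea):
\begin{equation}
  E^{\mathrm{ONIOM}} = E^{\mathrm{low}}(\mathrm{real})
                     + E^{\mathrm{high}}(\mathrm{model})
                     - E^{\mathrm{low}}(\mathrm{model})
\end{equation}
```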

  15. Parameterization of DFTB3/3OB for Sulfur and Phosphorus for Chemical and Biological Applications

    PubMed Central

    2015-01-01

    We report the parametrization of the approximate density functional tight binding method, DFTB3, for sulfur and phosphorus. The parametrization is done in a framework consistent with our previous 3OB set established for O, N, C, and H; thus, the resulting parameters can be used to describe a broad set of organic and biologically relevant molecules. The 3d orbitals are included in the parametrization, and the electronic parameters are chosen to minimize errors in the atomization energies. The parameters are tested using a fairly diverse set of molecules of biological relevance, focusing on the geometries, reaction energies, proton affinities, and hydrogen bonding interactions of these molecules; vibrational frequencies are also examined, although less systematically. The results of DFTB3/3OB are compared to those from DFT (B3LYP and PBE), ab initio (MP2, G3B3), and several popular semiempirical methods (PM6 and PDDG), as well as predictions of DFTB3 with the older parametrization (the MIO set). In general, DFTB3/3OB is a major improvement over the previous parametrization (DFTB3/MIO), and for the majority of cases tested here, it also outperforms PM6 and PDDG, especially for structural properties, vibrational frequencies, hydrogen bonding interactions, and proton affinities. For reaction energies, DFTB3/3OB exhibits major improvement over DFTB3/MIO, due mainly to significant reduction of errors in atomization energies; compared to PM6 and PDDG, DFTB3/3OB also generally performs better, although the magnitude of improvement is more modest. Compared to high-level calculations, DFTB3/3OB is most successful at predicting geometries; larger errors are found in the energies, although the results can be greatly improved by computing single point energies at a high level with DFTB3 geometries. There are several remaining issues with the DFTB3/3OB approach, most notably its difficulty in describing phosphate hydrolysis reactions involving a change in the coordination number of the phosphorus, for which a specific parametrization (3OB/OPhyd) is developed as a temporary solution; this suggests that the current DFTB3 methodology has limited transferability for complex phosphorus chemistry at the level of accuracy required for detailed mechanistic investigations. Therefore, fundamental improvements in the DFTB3 methodology are needed for a reliable method that describes phosphorus chemistry without ad hoc parameters. Nevertheless, DFTB3/3OB is expected to be a competitive QM method in QM/MM calculations for studying phosphorus/sulfur chemistry in condensed phase systems, especially as a low-level method that drives the sampling in a dual-level QM/MM framework. PMID:24803865

  16. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate representation of the data in a space of fewer dimensions. One of the most efficient methods for solving this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Studying the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method for solving BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.

  17. Physical activity on prescription (PAP): self-reported physical activity and quality of life in a Swedish primary care population, 2-year follow-up

    PubMed Central

    Rödjer, Lars; H. Jonsdottir, Ingibjörg; Börjesson, Mats

    2016-01-01

    Objective To study the self-reported level of physical activity (PA) and quality of life (QOL) in patients receiving physical activity on prescription (PAP) for up to 24 months. Design Observational study conducted in a regular healthcare setting. Setting A primary care population in Sweden receiving physical activity on prescription as part of regular care was studied alongside a reference group. Subjects The group comprised 146 patients receiving PAP at two different primary care locations (n = 96 and 50, respectively). The reference group comprised 58 patients recruited from two different primary care centres in the same region. Main outcome measurements We used two self-report questionnaires – the four-level Saltin-Grimby Physical Activity Level Scale (SGPALS) to assess physical activity, and SF-36 to assess QOL. Results A significant increase in the PA level was found at six and 12 months following PAP, with an ongoing non-significant trend at 24 months (p = .09). A clear improvement in QOL was seen during the period. At 24 months, significant and clinically relevant improvements in QOL persisted in four out of eight sub-scale scores (Physical Role Limitation, Bodily Pain, General Health, Vitality) and in one out of two summary scores (Physical Component Summary). Conclusion Patients receiving PAP showed an increased level of self-reported PA at six and 12 months and improved QOL for up to 24 months in several domains. The Swedish PAP method seems to be a feasible method for bringing about changes in physical activity in different patient populations in regular primary healthcare. While increased physical activity (PA) is shown to improve health, the implementation of methods designed to increase activity is still being developed. Key points The present study confirms that the Swedish physical activity on prescription (PAP) method increases the self-reported level of PA in the primary care setting at six and 12 months. Furthermore, this study shows that PAP recipients report a clinically relevant long-term improvement in quality of life, persisting for two years post-prescription, thus extending earlier findings. These findings have clinical implications for the implementation of PAP in healthcare. PMID:27978781

  18. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets

    PubMed Central

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-01-01

    Abstract. Liver segmentation remains a major challenge, largely due to the complexity of the surrounding anatomical structures (stomach, kidney, and heart) and to the high noise level and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary resulting from the cellular automata method from leaking into surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as w, Δt, z, α, μ, α1, and α2, play important roles, so their values are selected carefully. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method. PMID:26158101

  19. Development and validation of rapid multiresidue and multi-class analysis for antibiotics and anthelmintics in feed by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry.

    PubMed

    Robert, Christelle; Brasseur, Pierre-Yves; Dubois, Michel; Delahaut, Philippe; Gillard, Nathalie

    2016-08-01

    A new multi-residue method for the analysis of veterinary drugs, namely amoxicillin, chlortetracycline, colistins A and B, doxycycline, fenbendazole, flubendazole, ivermectin, lincomycin, oxytetracycline, sulfadiazine, tiamulin, tilmicosin and trimethoprim, was developed and validated for feed. After acidic extraction, the samples were centrifuged, purified by SPE and analysed by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry. Quantitative validation was done in accordance with the guidelines laid down in European Commission Decision 2002/657/CE. Matrix-matched calibration with internal standards was used to reduce matrix effects. The target level was set at the authorised carryover level (1%) and validation levels were set at 0.5%, 1% and 1.5%. Method performances were evaluated by the following parameters: linearity (0.986 < R(2) < 0.999), precision (repeatability < 12.4% and reproducibility < 14.0%), accuracy (89% < recovery < 107%), sensitivity, decision limit (CCα), detection capability (CCβ), selectivity and expanded measurement uncertainty (k = 2). This method has been used successfully for three years for routine monitoring of antibiotic residues in feeds, during which period 20% of samples were found to exceed the 1% authorised carryover limit and were deemed non-compliant.

  20. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    NASA Astrophysics Data System (ADS)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  1. Thresholding functional connectomes by means of mixture modeling.

    PubMed

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with the use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing, even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
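
    A minimal sketch of the core idea, with invented data: fit a two-component mixture to the edge weights and retain edges assigned to the higher-correlation component. The paper's actual model and pseudo-FDR thresholding are more elaborate than this scikit-learn illustration.

```python
# Hedged sketch: separate a broad null component (weak edges) from a sparse
# signal component, then keep edges assigned to the signal component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
null_edges = rng.normal(0.0, 0.05, size=900)   # weak/unreliable edges
true_edges = rng.normal(0.4, 0.05, size=100)   # sparse reliable connections
r = np.concatenate([null_edges, true_edges])   # all partial correlations

gmm = GaussianMixture(n_components=2, random_state=0).fit(r.reshape(-1, 1))
signal = int(np.argmax(gmm.means_.ravel()))    # component with the larger mean
post = gmm.predict_proba(r.reshape(-1, 1))[:, signal]
keep = post > 0.5                              # subject-specific sparse connectome
print(f"{keep.sum()} of {r.size} edges retained")
```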

  2. Forecasting model of Corylus, Alnus, and Betula pollen concentration levels using spatiotemporal correlation properties of pollen count.

    PubMed

    Nowosad, Jakub; Stach, Alfred; Kasprzyk, Idalia; Weryszko-Chmielewska, Elżbieta; Piotrowska-Weryszko, Krystyna; Puc, Małgorzata; Grewling, Łukasz; Pędziszewska, Anna; Uruska, Agnieszka; Myszkowska, Dorota; Chłopek, Kazimiera; Majkowska-Wojciechowska, Barbara

    The aim of the study was to create and evaluate models for predicting high levels of daily pollen concentration of Corylus, Alnus, and Betula using a spatiotemporal correlation of pollen count. For each taxon, a high pollen count level was established according to the first allergy symptoms during exposure. The dataset was divided into a training set and a test set, using a stratified random split. For each taxon and city, the model was built using a random forest method. Corylus models performed poorly. However, the study revealed the possibility of predicting with substantial accuracy the occurrence of days with high pollen concentrations of Alnus and Betula using past pollen count data from monitoring sites. These results can be used for building (1) simpler models, which require data only from aerobiological monitoring sites, and (2) combined meteorological and aerobiological models for predicting high levels of pollen concentration.
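
    A hedged sketch of the modeling setup described here, using synthetic data and invented features: past counts at monitoring sites feed a random forest that classifies days as high or low pollen level after a stratified split.

```python
# Illustrative only: synthetic "past counts" and a synthetic high-level label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.poisson(20, size=(500, 5)).astype(float)   # past counts at 5 sites
y = X.mean(axis=1) + rng.normal(0, 2, 500) > 22    # synthetic "high level" days

# Stratified random split, as described in the abstract:
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```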

  3. Nested Tracking Graphs

    DOE PAGES

    Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...

    2017-06-01

    Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.

  4. Comparing volume of fluid and level set methods for evaporating liquid-gas flows

    NASA Astrophysics Data System (ADS)

    Palmore, John; Desjardins, Olivier

    2016-11-01

    This presentation demonstrates three numerical strategies for simulating liquid-gas flows undergoing evaporation. The practical aim of this work is to choose a framework capable of simulating the combustion of liquid fuels in an internal combustion engine. Each framework is analyzed with respect to its accuracy and computational cost. All simulations are performed using a conservative, finite volume code for simulating reacting, multiphase flows under the low-Mach assumption. The strategies used in this study correspond to different methods for tracking the liquid-gas interface and handling the transport of the discontinuous momentum and vapor mass fractions fields. The first two strategies are based on conservative, geometric volume of fluid schemes using directionally split and un-split advection, respectively. The third strategy is the accurate conservative level set method. For all strategies, special attention is given to ensuring the consistency between the fluxes of mass, momentum, and vapor fractions. The study performs three-dimensional simulations of an isolated droplet of a single component fuel evaporating into air. Evaporation rates and vapor mass fractions are compared to analytical results.

  5. An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.

    PubMed

    Kim, Junghi; Bai, Yun; Pan, Wei

    2015-12-01

    We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, because obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, while often multiple correlated traits have been collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods. © 2015 WILEY PERIODICALS, INC.

  6. Environmental optimal control strategies based on plant canopy photosynthesis responses and greenhouse climate model

    NASA Astrophysics Data System (ADS)

    Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao

    2006-11-01

    The essential goals of optimal control of the intelligent greenhouse environment are to increase grower income and to save energy. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, so optimal control of the greenhouse environment is not easy, and model-based optimal control is especially difficult. We therefore studied the optimal control problem for the plant environment in an intelligent greenhouse and constructed a hierarchical greenhouse environment control system. At the first level, data are measured and the actuators are controlled. At the second level, the optimal set points for the controlled climate variables in the greenhouse are calculated and chosen. Market analysis and planning are completed at the third level. This paper addresses the problem of choosing the optimal set points. First, a model of plant canopy photosynthesis responses and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime control goals were set according to the principle of maximal photosynthesis rate, while the nighttime goals were set by the principle of energy saving, subject to conditions for good plant growth. The environment optimal control set points were then computed by a genetic algorithm (GA). Comparison of the optimal results with data recorded in a real system shows that the method is reasonable and can achieve energy saving and a maximal photosynthesis rate in an intelligent greenhouse.

  7. A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth

    PubMed Central

    Macklin, Paul

    2011-01-01

    In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocity depends upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics—an effect observed in real tumor growth. PMID:21331304

  8. Limited capacity for contour curvature in iconic memory.

    PubMed

    Sakai, Koji

    2006-06-01

    We measured the difference threshold for contour curvature in iconic memory by using the cued discrimination method. The study stimulus, consisting of 2 to 6 curved contours, was briefly presented in the fovea, followed by two lines as cues. Subjects discriminated the curvature of the two cued curves. The cue delays were 0 msec. and 300 msec. in Exps. 1 and 2, respectively, and 50 msec. before the study offset in Exp. 3. Analysis of data from Exps. 1 and 2 showed that the Weber fraction rose monotonically with the increase in set size. Clear set-size effects indicate that iconic memory has a limited capacity. Moreover, the clear set-size effect in Exp. 3 indicates that perception itself has a limited capacity. Larger set-size effects in Exp. 1 than in Exp. 3 suggest that iconic memory after the perceptual process has a limited capacity. These properties of iconic memory at the threshold level contradict the traditional view that iconic memory has a high capacity at both suprathreshold and categorical levels.

  9. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer Aided Detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prunes the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to a promising classification performance. Moreover, the proposed algorithm consistently outperforms the comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
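
    The proximal operator of the ℓ2,1 norm that such FISTA-style solvers apply at each iteration has a well-known closed form, group soft-thresholding. A minimal sketch follows; the per-kernel grouping shown is illustrative, not the paper's exact formulation.

```python
# Closed-form proximal operator of lam * ||W||_{2,1} (group soft-thresholding),
# the building block of FISTA-style solvers for l2,1-regularized MKL.
import numpy as np

def prox_l21(W, lam):
    """Row-wise shrinkage: each row of W is treated as one group of weights."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[0.3, 0.4], [0.05, 0.02], [1.0, -0.5]])
print(prox_l21(W, lam=0.1))   # groups with small norm are zeroed out entirely
```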

  10. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard

    2015-01-01

    In general, astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that CPM predicts each target pixel value from a large number of pixels of other stars sharing the instrument variabilities while not containing any information on possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. While we have not yet exhaustively tested the method at other magnitudes, we expect that it should be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability (in which case we must exclude the autoregressive part to preserve intrinsic variability).
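
    A schematic of the CPM idea on synthetic data (not the authors' code): regress the target pixel on many predictor-star pixels with an L2-regularized linear model, holding the transit window out of the training set so the transit cannot be absorbed by the fit.

```python
# Synthetic illustration: shared "spacecraft" systematics plus an injected dip.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
T, P = 1000, 50
common = rng.normal(size=T)                       # shared systematic trend
predictors = common[:, None] + 0.1 * rng.normal(size=(T, P))
target = 2.0 * common + 0.1 * rng.normal(size=T)
target[500:520] -= 1.0                            # injected "transit"

train = np.r_[0:400, 700:1000]                    # exclude the transit window
model = Ridge(alpha=1e3).fit(predictors[train], target[train])
residual = target - model.predict(predictors)     # systematics removed
print(residual[500:520].mean())                   # the dip (~ -1) is preserved
```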

  11. Metabolic control in a nationally representative diabetic elderly sample in Costa Rica: patients at community health centers vs. patients at other health care settings

    PubMed Central

    Brenes-Camacho, Gilbert; Rosero-Bixby, Luis

    2008-01-01

    Background Costa Rica, like other developing countries, is experiencing an increasing burden of chronic conditions such as diabetes mellitus (DM), especially among its elderly population. This article has two goals: (1) to assess the level of metabolic control among the diabetic population aged ≥ 60 years in Costa Rica, and (2) to test whether diabetic elderly patients of community health centers differ from patients in other health care settings in terms of the level of metabolic control. Methods Data come from the project CRELES, a nationally representative study of people aged 60 and over in Costa Rica. This article analyzes a subsample of 542 participants in CRELES with self-reported diagnosis of diabetes mellitus. Odds ratios of poor levels of metabolic control at different health care settings are computed using logistic regressions. Results Lack of metabolic control among the elderly diabetic population in Costa Rica is described as follows: 37% have glycated hemoglobin ≥ 7%; 78% have systolic blood pressure ≥ 130 mmHg; 66% have diastolic blood pressure ≥ 80 mmHg; 48% have triglycerides ≥ 150 mg/dl; 78% have LDL ≥ 100 mg/dl; 70% have HDL ≤ 40 mg/dl. Elevated levels of triglycerides and LDL were higher in patients of community health centers than in patients of other clinical settings. There were no statistical differences in the other metabolic control indicators across health care settings. Conclusion Levels of metabolic control among the elderly population with DM in Costa Rica are not that different from those observed in industrialized countries. Elevated levels of triglycerides and LDL at community health centers may indicate problems of dyslipidemia treatment among diabetic patients; these problems are not observed in other health care settings. The Costa Rican health care system should address this problem, given that community health centers constitute a means of democratizing access to primary health care to underserved and poor areas. PMID:18447930

  12. Radiative data for highly excited 3d84d levels in Ni II from laboratory measurements and atomic calculations

    NASA Astrophysics Data System (ADS)

    Hartman, H.; Engström, L.; Lundberg, H.; Nilsson, H.; Quinet, P.; Fivet, V.; Palmeri, P.; Malcheva, G.; Blagoev, K.

    2017-04-01

    Aims: This work reports new experimental radiative lifetimes and calculated oscillator strengths for transitions from 3d84d levels of astrophysical interest in singly ionized nickel. Methods: Radiative lifetimes of seven high-lying levels of even parity in Ni II (98 400-100 600 cm-1) have been measured using the time-resolved laser-induced fluorescence method. Two-step photon excitation of ions produced by laser ablation has been utilized to populate the levels. Theoretical calculations of the radiative lifetimes of the measured levels and transition probabilities from these levels are reported. The calculations have been performed using a pseudo-relativistic Hartree-Fock method, taking into account core polarization effects. Results: A new set of transition probabilities and oscillator strengths has been deduced for 477 Ni II transitions of astrophysical interest in the spectral range 194-520 nm depopulating even parity 3d84d levels. The new calculated gf-values are, on the average, about 20% higher than a previous calculation and yield lifetimes within 5% of the experimental values.

  13. Estimating the intrinsic limit of the Feller-Peterson-Dixon composite approach when applied to adiabatic ionization potentials in atoms and small molecules

    NASA Astrophysics Data System (ADS)

    Feller, David

    2017-07-01

    Benchmark adiabatic ionization potentials were obtained with the Feller-Peterson-Dixon (FPD) theoretical method for a collection of 48 atoms and small molecules. In previous studies, the FPD method demonstrated an ability to predict atomization energies (heats of formation) and electron affinities well within a 95% confidence level of ±1 kcal/mol. Large 1-particle expansions involving correlation consistent basis sets (up to aug-cc-pV8Z in many cases and aug-cc-pV9Z for some atoms) were chosen for the valence CCSD(T) starting point calculations. Despite their cost, these large basis sets were chosen in order to help minimize the residual basis set truncation error and reduce dependence on approximate basis set limit extrapolation formulas. The complementary n-particle expansion included higher order CCSDT, CCSDTQ, or CCSDTQ5 (coupled cluster theory with iterative triple, quadruple, and quintuple excitations) corrections. For all of the chemical systems examined here, it was also possible to either perform explicit full configuration interaction (CI) calculations or to otherwise estimate the full CI limit. Additionally, corrections associated with core/valence correlation, scalar relativity, anharmonic zero point vibrational energies, non-adiabatic effects, and other minor factors were considered. The root mean square deviation with respect to experiment for the ionization potentials was 0.21 kcal/mol (0.009 eV). The corresponding level of agreement for molecular enthalpies of formation was 0.37 kcal/mol and for electron affinities 0.20 kcal/mol. Similar good agreement with experiment was found in the case of molecular structures and harmonic frequencies. Overall, the combination of energetic, structural, and vibrational data (655 comparisons) reflects the consistent ability of the FPD method to achieve close agreement with experiment for small molecules using the level of theory applied in this study.

  14. Building a citywide, all-payer, hospital claims database to improve health care delivery in a low-income, urban community.

    PubMed

    Gross, Kennen; Brenner, Jeffrey C; Truchil, Aaron; Post, Ernest M; Riley, Amy Henderson

    2013-01-01

    Developing data-driven local solutions to address rising health care costs requires valid and reliable local data. Traditionally, local public health agencies have relied on birth, death, and specific disease registry data to guide health care planning, but these data sets provide neither health information across the lifespan nor information on local health care utilization patterns and costs. Insurance claims data collected by local hospitals for administrative purposes can be used to create valuable population health data sets. The Camden Coalition of Healthcare Providers partnered with the 3 health systems providing emergency and inpatient care within Camden, New Jersey, to create a local population all-payer hospital claims data set. The combined claims data provide unique insights into the health status, health care utilization patterns, and hospital costs on the population level. The cross-systems data set allows for a better understanding of the impact of high utilizers on a community-level health care system. This article presents an introduction to the methods used to develop Camden's hospital claims data set, as well as results showing the population health insights obtained from this unique data set.

  15. Preadolescent Strength Training.

    ERIC Educational Resources Information Center

    Smith, Timothy K.

    1984-01-01

    Physical educators must teach preadolescents about safe and realistic strength-training methods commensurate with their needs and physical capabilities. The risk of injuries can be reduced by setting prudent goals, using equipment tailored to the age level, and educating students about their unique growth state. (PP)

  16. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2017-06-01

    When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Emergency and urgent care capacity in a resource-limited setting: an assessment of health facilities in western Kenya

    PubMed Central

    Burke, Thomas F; Hines, Rosemary; Ahn, Roy; Walters, Michelle; Young, David; Anderson, Rachel Eleanor; Tom, Sabrina M; Clark, Rachel; Obita, Walter; Nelson, Brett D

    2014-01-01

    Objective Injuries, trauma and non-communicable diseases are responsible for a rising proportion of death and disability in low-income and middle-income countries. Delivering effective emergency and urgent healthcare for these and other conditions in resource-limited settings is challenging. In this study, we sought to examine and characterise emergency and urgent care capacity in a resource-limited setting. Methods We conducted an assessment within all 30 primary and secondary hospitals and within a stratified random sampling of 30 dispensaries and health centres in western Kenya. The key informants were the most senior facility healthcare provider and manager available. Emergency physician researchers utilised a semistructured assessment tool, and data were analysed using descriptive statistics and thematic coding. Results No lower level facilities and 30% of higher level facilities reported having a defined, organised approach to trauma. 43% of higher level facilities had access to an anaesthetist. The majority of lower level facilities had suture and wound care supplies and gloves but typically lacked other basic trauma supplies. For cardiac care, 50% of higher level facilities had morphine, but a minority had functioning ECG, sublingual nitroglycerine or a defibrillator. Only 20% of lower level facilities had glucometers, and only 33% of higher level facilities could care for diabetic emergencies. No facilities had sepsis clinical guidelines. Conclusions Large gaps in essential emergency care capabilities were identified at all facility levels in western Kenya. There are great opportunities for a universally deployed basic emergency care package, an advanced emergency care package and facility designation scheme, and a reliable prehospital care transportation and communications system in resource-limited settings. PMID:25260371

  18. Multipolar electrostatics based on the Kriging machine learning method: an application to serine.

    PubMed

    Yuan, Yongna; Mills, Matthew J L; Popelier, Paul L A

    2014-04-01

    A multipolar, polarizable electrostatic method for future use in a novel force field is described. Quantum Chemical Topology (QCT) is used to partition the electron density of a chemical system into atoms, then the machine learning method Kriging is used to build models that relate the multipole moments of the atoms to the positions of their surrounding nuclei. The pilot system serine is used to study both the influence of the level of theory and the set of data generator methods used. The latter consists of: (i) sampling of protein structures deposited in the Protein Data Bank (PDB), or (ii) normal mode distortion along either (a) Cartesian coordinates, or (b) redundant internal coordinates. Wavefunctions for the sampled geometries were obtained at the HF/6-31G(d,p), B3LYP/apc-1, and MP2/cc-pVDZ levels of theory, prior to calculation of the atomic multipole moments by volume integration. The average absolute error (over an independent test set of conformations) in the total atom-atom electrostatic interaction energy of serine, using Kriging models built with the three data generator methods is 11.3 kJ mol⁻¹ (PDB), 8.2 kJ mol⁻¹ (Cartesian distortion), and 10.1 kJ mol⁻¹ (redundant internal distortion) at the HF/6-31G(d,p) level. At the B3LYP/apc-1 level, the respective errors are 7.7 kJ mol⁻¹, 6.7 kJ mol⁻¹, and 4.9 kJ mol⁻¹, while at the MP2/cc-pVDZ level they are 6.5 kJ mol⁻¹, 5.3 kJ mol⁻¹, and 4.0 kJ mol⁻¹. The ranges of geometries generated by the redundant internal coordinate distortion and by extraction from the PDB are much wider than the range generated by Cartesian distortion. The atomic multipole moment and electrostatic interaction energy predictions for the B3LYP/apc-1 and MP2/cc-pVDZ levels are similar, and both are better than the corresponding predictions at the HF/6-31G(d,p) level.
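
    Kriging is equivalent to Gaussian process regression, so the moment-prediction step can be sketched as below. The descriptors, kernel, and data are placeholders rather than the paper's actual QCT features and training geometries.

```python
# Hedged sketch: learn a map from nuclear-position descriptors to one atomic
# multipole moment with Gaussian process (Kriging) regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 6))                     # placeholder geometry descriptors
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80) # placeholder moment values

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)   # prediction with uncertainty
print(mean.round(2), std.round(3))
```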

  19. Heap/stack guard pages using a wakeup unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M; Satterfield, David L; Steinmacher-Burow, Burkhard

    A method and system for providing a memory access check on a processor including the steps of detecting accesses to a memory device including level-1 cache using a wakeup unit. The method includes invalidating level-1 cache ranges corresponding to a guard page, and configuring a plurality of wakeup address compare (WAC) registers to allow access to selected WAC registers. The method selects one of the plurality of WAC registers, and sets up a WAC register related to the guard page. The method configures the wakeup unit to interrupt on access of the selected WAC register. The method detects access ofmore » the memory device using the wakeup unit when a guard page is violated. The method generates an interrupt to the core using the wakeup unit, and determines the source of the interrupt. The method detects the activated WAC registers assigned to the violated guard page, and initiates a response.« less

  20. Fourier transform methods in local gravity modeling

    NASA Technical Reports Server (NTRS)

    Harrison, J. C.; Dickinson, M.

    1989-01-01

    New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface, and the gradients of these attractions. These algorithms utilize fast Fourier transforms, but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface, and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field and the process iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.
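
    The computational core of such algorithms is evaluating convolution-type integrals over a grid with FFTs. A generic sketch follows, with a placeholder transfer function standing in for the derived terrain-correction kernel.

```python
# Generic FFT convolution over a grid: O(n^2 log n) instead of O(n^4)
# for direct summation. The kernel spectrum here is a placeholder.
import numpy as np

n = 256
rng = np.random.default_rng(5)
h = rng.normal(size=(n, n))                  # gridded topographic heights

fx = np.fft.fftfreq(n)[:, None]              # grid frequencies, cycles/sample
fy = np.fft.fftfreq(n)[None, :]
K = np.exp(-2.0e3 * (fx**2 + fy**2))         # smooth placeholder transfer function

# Circular convolution g = k * h evaluated in the frequency domain:
g = np.fft.ifft2(np.fft.fft2(h) * K).real
print(g.shape)
```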

  1. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

    A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that, to perceive unfamiliar objects or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
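
    A generic illustration of fuzzy edge grading (not the authors' algorithm): replace a hard gradient threshold with a sigmoid membership function so each pixel receives a degree of "edgeness" between 0 and 1.

```python
# Fuzzy edge membership over the local gradient magnitude; the midpoint and
# steepness of the membership function are illustrative choices.
import numpy as np

def fuzzy_edges(img, midpoint=0.2, steepness=25.0):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)
    # Sigmoid membership: 0 = certainly not an edge, 1 = certainly an edge.
    return 1.0 / (1.0 + np.exp(-steepness * (mag - midpoint)))

img = np.zeros((64, 64)); img[:, 32:] = 1.0   # a simple step edge
membership = fuzzy_edges(img)
print(membership[32, 30:35].round(2))         # high membership at the step boundary
```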

  2. Global gray-level thresholding based on object size.

    PubMed

    Ranefall, Petter; Wählby, Carolina

    2016-04-01

    In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
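
    A simplified sketch of the idea using a plain threshold scan with scipy; the paper instead exploits the component tree for quasi-linear time and a recall/precision criterion, rather than the simple within-interval fraction scored here.

```python
# Scan candidate gray levels and keep the one whose connected components
# best match the expected object-size interval (a stand-in criterion).
import numpy as np
from scipy import ndimage

def size_based_threshold(img, size_min, size_max, levels=64):
    best_t, best_score = None, -1.0
    for t in np.linspace(img.min(), img.max(), levels):
        labels, n = ndimage.label(img > t)
        if n == 0:
            continue
        sizes = np.bincount(labels.ravel())[1:]          # drop background
        score = np.mean((sizes >= size_min) & (sizes <= size_max))
        if score > best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(6)
img = rng.normal(size=(128, 128))
img[30:40, 30:40] += 3.0               # one bright "nucleus" of ~100 pixels
print(size_based_threshold(img, 50, 200))
```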

  3. Learn from every mistake! Hierarchical information combination in astronomy

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Fotopoulou, Sotiria; Coupon, Jean; Paltani, Stéphane; Eyer, Laurent; Rimoldini, Lorenzo

    2017-06-01

    Throughout the processing and analysis of survey data, a ubiquitous issue nowadays is that we are spoilt for choice when we need to select a methodology for some of its steps. The alternative methods usually fail and excel in different data regions, and have various advantages and drawbacks, so a combination that unites the strengths of all while suppressing the weaknesses is desirable. We propose to use a two-level hierarchy of learners. Its first level consists of training and applying the possible base methods on the first part of a known set. At the second level, we feed the output probability distributions from all base methods to a second learner trained on the remaining known objects. Using classification of variable stars and photometric redshift estimation as examples, we show that the hierarchical combination is capable of achieving a general improvement over averaging-type combination methods and of correcting systematics present in all base methods; it is also easy to train and apply, and thus it is a promising tool in the astronomical "Big Data" era.
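
    A minimal sketch of the described two-level hierarchy on synthetic data: base learners are trained on the first half of the known set, and a second-level learner is trained on their output probabilities for the remaining half. Models and data are placeholders.

```python
# Two-level hierarchical combination (stacking-style), per the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)

# Level one: train the base methods on the first part of the known set.
bases = [RandomForestClassifier(random_state=0).fit(X1, y1),
         SVC(probability=True, random_state=0).fit(X1, y1)]

# Level two: feed all base output probabilities to a second learner
# trained on the remaining known objects.
meta_X = np.hstack([b.predict_proba(X2) for b in bases])
meta = LogisticRegression().fit(meta_X, y2)
print("combined training accuracy:", meta.score(meta_X, y2))
```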

  4. Analytical performance of glucose monitoring systems at different blood glucose ranges and analysis of outliers in a clinical setting.

    PubMed

    Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel

    2014-05-01

    We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed device performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels determined simultaneously using GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of 27 devices tested met overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.

  5. Hyperspectral data discrimination methods

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Chen, Xuewen

    2000-12-01

    Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.

  6. The British in Kenya (1952-1960): Analysis of a Successful Counterinsurgency Campaign

    DTIC Science & Technology

    2005-06-01

    East Africa to seek their riches in cattle, coffee, mining, and selling safaris to tourists. While the other colonial powers set their sights on... violence on a large scale by early 1952. Cattle on white settlements were being destroyed and standing crops and haystacks set on fire, particularly... Guards used pliers to castrate Mau Mau prisoners. Whatever the method and level of brutality, by the latter half of the 1950s most Kikuyu had turned

  7. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    NASA Astrophysics Data System (ADS)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  8. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: dispersion, induction, and basis set superposition error.

    PubMed

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T; Dannenberg, J J

    2012-10-07

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  9. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    PubMed Central

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-01-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states. PMID:23039587

  10. Biomedical image segmentation using geometric deformable models and metaheuristics.

    PubMed

    Mesejo, Pablo; Valsecchi, Andrea; Marrakchi-Kacem, Linda; Cagnoni, Stefano; Damas, Sergio

    2015-07-01

This paper describes a hybrid level set approach for medical image segmentation. This new geometric deformable model combines region- and edge-based information with prior shape knowledge introduced using deformable registration. Our proposal consists of two phases: training and test. The former involves learning the level set parameters by means of a Genetic Algorithm, while the latter is the segmentation proper, where another metaheuristic, in this case Scatter Search, derives the shape prior. In an experimental comparison, this approach has shown better performance than a number of state-of-the-art methods when segmenting anatomical structures from different biomedical image modalities. Copyright © 2013 Elsevier Ltd. All rights reserved.
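
    As a toy illustration of the training phase described above, the sketch below tunes level set parameters against a reference segmentation. skimage's morphological Chan-Vese model and a plain random search stand in for the paper's geometric deformable model and Genetic Algorithm, and the image and ground-truth mask are synthetic.

    ```python
    # Tune level-set parameters by maximizing the Dice score against a
    # reference mask (random search as a stand-in for a Genetic Algorithm).
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    rng = np.random.default_rng(1)
    yy, xx = np.mgrid[0:96, 0:96]
    truth = (yy - 48) ** 2 + (xx - 48) ** 2 < 30 ** 2          # reference mask
    img = truth * 1.0 + rng.normal(scale=0.35, size=truth.shape)

    def dice(a, b):
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    best_params, best_score = None, -1.0
    for _ in range(20):                                        # random "population"
        smoothing = int(rng.integers(1, 4))
        lam1, lam2 = rng.uniform(0.5, 2.0, size=2)
        seg = morphological_chan_vese(img, 40, smoothing=smoothing,
                                      lambda1=lam1, lambda2=lam2)
        score = dice(seg.astype(bool), truth)
        if score > best_score:
            best_params, best_score = (smoothing, lam1, lam2), score

    print("best (smoothing, lambda1, lambda2):", best_params,
          "Dice:", round(best_score, 3))
    ```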

  11. Quantum chemical calculations of glycine glutaric acid

    NASA Astrophysics Data System (ADS)

Arıoğlu, Çağla; Tamer, Ömer; Avci, Davut; Atalay, Yusuf

    2017-02-01

Density functional theory (DFT) calculations of glycine glutaric acid were performed at the B3LYP level with the 6-311++G(d,p) basis set. The theoretical structural parameters such as bond lengths and bond angles are in good agreement with the experimental values for the title compound. HOMO and LUMO energies were calculated, and the obtained energy gap shows that charge transfer occurs in the title compound. Vibrational frequencies were calculated and compared with experimental ones. 3D molecular surfaces of the title compound were simulated using the same level and basis set. Finally, the 13C and 1H NMR chemical shift values were calculated by applying the gauge-independent atomic orbital (GIAO) method.
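
    A minimal sketch of the kind of calculation described above, assuming the Psi4 package: a B3LYP/6-311++G(d,p) single point from which the HOMO-LUMO gap is read off. Plain glycine, fetched from PubChem, stands in here because the co-crystal geometry is not given in this record.

    ```python
    # Single-point DFT calculation and HOMO-LUMO gap (illustrative system).
    import psi4

    psi4.set_memory("1 GB")
    mol = psi4.geometry("pubchem:glycine")      # geometry fetched from PubChem
    energy, wfn = psi4.energy("b3lyp/6-311++g(d,p)", molecule=mol, return_wfn=True)

    eps = wfn.epsilon_a().to_array()            # alpha orbital energies (hartree)
    nocc = wfn.nalpha()
    homo, lumo = eps[nocc - 1], eps[nocc]
    print("HOMO-LUMO gap: %.3f hartree (%.2f eV)"
          % (lumo - homo, (lumo - homo) * 27.2114))
    ```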

  12. Systems, methods and apparatus for implementation of formal specifications derived from informal requirements

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments an informal specification is translated without human intervention into a formal specification. In some embodiments the formal specification is a process-based specification. In some embodiments, the formal specification is translated into a high-level computer programming language which is further compiled into a set of executable computer instructions.

  13. Curriculum-Based Measurement of Oral Reading: Evaluation of Growth Estimates Derived with Pre-Post Assessment Methods

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Monaghen, Barbara D.; Zopluoglu, Cengiz; Van Norman, Ethan R.

    2013-01-01

    Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre-post measurement using CBM-R data. A…
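
    The pre-post growth estimate the abstract refers to reduces to a two-point slope: the change in words read correctly per minute (WRCM) divided by the weeks between the two assessments. A minimal sketch, with illustrative numbers and a hypothetical helper name:

    ```python
    # Slope of a two-point (pre-post) CBM-R growth line.
    def weekly_growth(pre_wrcm: float, post_wrcm: float, weeks: float) -> float:
        """Weekly growth in words read correctly per minute (WRCM)."""
        return (post_wrcm - pre_wrcm) / weeks

    print(weekly_growth(pre_wrcm=62, post_wrcm=98, weeks=30))   # 1.2 WRCM/week
    ```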

  14. Using Inquiry-Based Instruction to Teach Research Methods to 4th-Grade Students in an Urban Setting

    ERIC Educational Resources Information Center

    Hamm, Ellen M.; Cullen, Rebecca; Ciaravino, Melissa

    2013-01-01

    When a college professor who teaches research methods to graduate education students was approached by a local public urban elementary school to help them teach research skills to 4th-graders, it was thought that the process would be simple--take what we did at the college level and differentiate it for the childhood classroom. This article will…

  15. Global Topology Optimisation

    DTIC Science & Technology

    2016-10-31

    statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and...an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of...of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally

  16. Evaluation Method for Low-Temperature Performance of Lithium Battery

    NASA Astrophysics Data System (ADS)

    Wang, H. W.; Ma, Q.; Fu, Y. L.; Tao, Z. Q.; Xiao, H. Q.; Bai, H.; Bai, H.

    2018-05-01

In this paper, an evaluation method for the low-temperature performance of lithium batteries is established. Low-temperature performance levels were defined to determine the best operating temperature ranges of lithium batteries using different cathode materials. The results are shared with consumers to encourage proper use of lithium batteries, extending service life and avoiding premature failure.

  17. Feature Selection and Classifier Development for Radio Frequency Device Identification

    DTIC Science & Technology

    2015-12-01

adds important background knowledge for this research. Four leading RF-based device identification methods have been proposed: Radio...appropriate level of dimensionality. Both qualitative and quantitative DRA dimensionality assessment methods are possible. Prior RF-DNA DRA research, e.g...Employing experimental designs to find optimal algorithm settings has been seen in hyperspectral anomaly detection research, cf. [513–520], but not

  18. Factors Associated with School Teachers' Perceived Needs and Level of Adoption of HIV Prevention Education in Lusaka, Zambia

    ERIC Educational Resources Information Center

    Henning, Margaret; Chi, Chunheui; Khanna, Sunil K.

    2011-01-01

    Objective: The purpose of this study was to evaluate the socio-cultural variables that may influence teachers' adoption of classroom-based HIV/AIDS education within the school setting and among school types in Zambia's Lusaka Province. Method: Mixed methods were used to collect original data. Using semi-structured interviews (n=11) and a survey…

  19. Unsupervised daily routine and activity discovery in smart homes.

    PubMed

Yin, Jie; Zhang, Qing; Karunanithi, Mohan

    2015-08-01

The ability to accurately recognize daily activities of residents is a core premise of smart homes to assist with remote health monitoring. Most of the existing methods rely on a supervised model trained from a preselected and manually labeled set of activities, which is often time-consuming and costly to obtain in practice. In contrast, this paper presents an unsupervised method for discovering daily routines and activities of smart home residents. Our proposed method first uses a Markov chain to model a resident's locomotion patterns at different times of day and discover clusters of daily routines at the macro level. For each routine cluster, it then drills down to further discover room-level activities at the micro level. The automatic identification of daily routines and activities is useful for understanding indicators of functional decline in elderly people and suggesting timely interventions.
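
    A toy sketch of the macro-level step described above: model each day's room-to-room movements as a Markov chain, then cluster days by their transition matrices to discover routine clusters. The rooms, the synthetic day sequences, and the use of k-means are illustrative assumptions, not the paper's exact procedure.

    ```python
    # Cluster days of synthetic room-visit sequences by their Markov
    # transition matrices to recover macro-level daily routines.
    import numpy as np
    from sklearn.cluster import KMeans

    ROOMS = ["bed", "bath", "kitchen", "living"]
    R = len(ROOMS)
    rng = np.random.default_rng(2)

    def transition_matrix(day_sequence):
        """Row-normalized count matrix of room-to-room transitions in one day."""
        counts = np.zeros((R, R))
        for a, b in zip(day_sequence, day_sequence[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    # Synthetic days: half dominated by kitchen/living, half by bed/bath.
    days = [rng.choice([2, 3], size=50) for _ in range(10)] + \
           [rng.choice([0, 1], size=50) for _ in range(10)]
    features = np.array([transition_matrix(d).ravel() for d in days])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print("routine cluster per day:", labels)
    ```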

  20. Network-Based Identification of Adaptive Pathways in Evolved Ethanol-Tolerant Bacterial Populations

    PubMed Central

    Swings, Toon; Weytjens, Bram; Schalck, Thomas; Bonte, Camille; Verstraeten, Natalie; Michiels, Jan

    2017-01-01

Efficient production of ethanol for use as a renewable fuel requires organisms with a high level of ethanol tolerance. However, this trait is complex, and increased tolerance therefore requires mutations in multiple genes and pathways. Here, we use experimental evolution for a system-level analysis of adaptation of Escherichia coli to high ethanol stress. As adaptation to extreme stress often results in complex mutational data sets consisting of both causal and noncausal passenger mutations, identifying the true adaptive mutations in these settings is not trivial. Therefore, we developed a novel method named IAMBEE (Identification of Adaptive Mutations in Bacterial Evolution Experiments). IAMBEE exploits the temporal profile of the acquisition of mutations during evolution in combination with the functional implications of each mutation at the protein level. These data are mapped to a genome-wide interaction network to search for adaptive mutations at the level of pathways. The 16 evolved populations in our data set together harbored 2,286 mutated genes with 4,470 unique mutations. Analysis by IAMBEE significantly reduced this number and resulted in the identification of 90 mutated genes and 345 unique mutations that are most likely to be adaptive. Moreover, IAMBEE not only enabled the identification of previously known pathways involved in ethanol tolerance but also identified novel systems such as the AcrAB-TolC efflux pump and fatty acid biosynthesis, and even allowed us to gain insight into the temporal profile of adaptation to ethanol stress. Furthermore, this method offers a solid framework for identifying the molecular underpinnings of other complex traits as well. PMID:28961727
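
    A toy sketch of IAMBEE's core idea only: combine per-gene mutation evidence across populations and let it reinforce over a gene-interaction network, so that pathways hit in several populations rise to the top. The graph, the hit counts, and the single propagation step below are illustrative assumptions, not the published algorithm.

    ```python
    # One-step network propagation of per-gene mutation evidence (toy data).
    import networkx as nx

    G = nx.Graph([("acrA", "acrB"), ("acrB", "tolC"),
                  ("fabA", "fabB"), ("rpoB", "rpoC")])

    # How many evolved populations carry a mutation in each gene (toy numbers).
    hits = {"acrA": 3, "acrB": 2, "tolC": 1, "fabA": 2,
            "fabB": 0, "rpoB": 1, "rpoC": 0}

    # A gene's score is its own evidence plus half the average evidence of
    # its network neighbors, so pathway-level signal accumulates.
    score = {}
    for g in G:
        nbrs = list(G[g])
        score[g] = hits[g] + 0.5 * sum(hits[n] for n in nbrs) / len(nbrs)

    for g, s in sorted(score.items(), key=lambda kv: -kv[1]):
        print(f"{g}: {s:.2f}")
    ```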
