Sample records for order level set

  1. An arbitrary-order Runge–Kutta discontinuous Galerkin approach to reinitialization for banded conservative level sets

    DOE PAGES

    Jibben, Zechariah Joel; Herrmann, Marcus

    2017-08-24

    Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.
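
The conservative reinitialization the abstract builds on can be illustrated in one dimension. The following is a minimal finite-volume sketch, not the paper's RKDG scheme: it integrates psi_tau = d/dx(eps*psi_x - psi*(1-psi)) with the interface normal frozen at n = +1, whose steady state is the logistic interface profile of thickness eps. The grid size, eps, and perturbation are illustrative choices.

```python
import numpy as np

# 1D conservative reinitialization sketch (finite volume, explicit Euler).
# Equation: psi_tau = d/dx( eps * psi_x - psi * (1 - psi) ), normal n = +1.
# Its steady state is the logistic interface profile psi = 1/(1+exp(-x/eps)).
eps = 0.05                      # interface thickness (assumed value)
n = 200
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
exact = 1.0 / (1.0 + np.exp(-x / eps))
psi = np.clip(exact + 0.05 * np.sin(8 * np.pi * x), 0.0, 1.0)  # perturbed start
err0 = np.max(np.abs(psi - exact))

dtau = 0.25 * dx * dx / eps     # stable pseudo-time step (diffusion-limited)
for _ in range(2000):
    psi_x = (psi[1:] - psi[:-1]) / dx            # face gradients
    psi_f = 0.5 * (psi[1:] + psi[:-1])           # face averages
    flux = eps * psi_x - psi_f * (1.0 - psi_f)   # diffusion minus compression
    psi[1:-1] += dtau * (flux[1:] - flux[:-1]) / dx  # endpoints held fixed

err1 = np.max(np.abs(psi - exact))
print(err1 < err0)              # the perturbation is driven back to equilibrium
```

Reinitialization here is conservative in the sense that interior updates are flux differences, so the integral of psi changes only through the boundary fluxes.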

  3. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which takes a form similar to the conventional re-initialization method but utilizes the sign of curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  4. High-Order Discontinuous Galerkin Level Set Method for Interface Tracking and Re-Distancing on Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Greene, Patrick; Nourgaliev, Robert; Schofield, Sam

    2015-11-01

    A new sharp high-order interface tracking method for multi-material flow problems on unstructured meshes is presented. The method combines the marker-tracking algorithm with a discontinuous Galerkin (DG) level set method to implicitly track interfaces. DG projection is used to provide a mapping from the Lagrangian marker field to the Eulerian level set field. For the level set re-distancing, we developed a novel marching method that takes advantage of the unique features of the DG representation of the level set. The method efficiently marches outward from the zero level set with values in the new cells being computed solely from cell neighbors. Results are presented for a number of different interface geometries including ones with sharp corners and multiple hierarchical level sets. The method can robustly handle the level set discontinuities without explicit utilization of solution limiters. Results show that the expected high order (3rd and higher) of convergence for the DG representation of the level set is obtained for smooth solutions on unstructured meshes. High-order re-distancing on irregular meshes is a must for applications where the interfacial curvature is important for underlying physics, such as surface tension, wetting and detonation shock dynamics. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Information management release number LLNL-ABS-675636.

  5. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
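
The underlying fire-front equation, phi_t + R|grad phi| = 0 for spread rate R, can be sketched with a first-order Godunov upwind discretization (the paper's scheme is fifth-order WENO with third-order Runge-Kutta; the uniform spread rate, grid, and initial fire radius below are illustrative):

```python
import numpy as np

# First-order upwind sketch of fire-perimeter propagation via the level-set
# equation phi_t + R * |grad phi| = 0 with a uniform spread rate R > 0.
R = 0.5                            # spread rate (assumed uniform)
n = 201
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.2   # signed distance to a circular fire front

dt = 0.4 * dx / R                  # CFL-limited time step
t = 0.0
while t < 0.5:
    # Godunov upwind gradient for an expanding front (R > 0)
    dmx = np.zeros_like(phi); dpx = np.zeros_like(phi)
    dmy = np.zeros_like(phi); dpy = np.zeros_like(phi)
    dmx[1:, :] = (phi[1:, :] - phi[:-1, :]) / dx
    dpx[:-1, :] = (phi[1:, :] - phi[:-1, :]) / dx
    dmy[:, 1:] = (phi[:, 1:] - phi[:, :-1]) / dx
    dpy[:, :-1] = (phi[:, 1:] - phi[:, :-1]) / dx
    grad = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2
                   + np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
    phi -= dt * R * grad
    t += dt

burned = np.sum(phi < 0.0) * dx * dx      # area enclosed by the zero level set
exact = np.pi * (0.2 + R * t) ** 2        # circle expanding at rate R
print(abs(burned - exact) / exact)        # relative area error of the low-order scheme
```

The percent-level area error of this low-order front is exactly the kind of defect the abstract quantifies; swapping the one-sided differences for higher-order derivatives is the upgrade the WRF-Fire algorithm makes.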

  6. Level set methods for detonation shock dynamics using high-order finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Grogan, F. C.; Kolev, T. V.

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.

  7. Boosting standard order sets utilization through clinical decision support.

    PubMed

    Li, Haomin; Zhang, Yinsheng; Cheng, Haixia; Lu, Xudong; Duan, Huilong

    2013-01-01

    Well-designed standard order sets have the potential to integrate and coordinate care by communicating best practices through multiple disciplines, levels of care, and services. However, several challenges have limited the benefits expected from standard order sets. To boost standard order set utilization, a problem-oriented knowledge delivery solution was proposed in this study to facilitate access to standard order sets and evaluation of their treatment effect. In this solution, standard order sets were created along with diagnostic rule sets which can trigger a CDS-based reminder to help clinicians quickly discover hidden clinical problems and the corresponding standard order sets during ordering. These rule sets also provide indicators for targeted evaluation of standard order sets during treatment. A prototype system was developed based on this solution and will be presented at Medinfo 2013.
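
The rule-set triggering described above can be sketched as a simple matching step: each standard order set carries a diagnostic rule set, and a reminder fires when a patient's documented findings satisfy one. All condition, rule, and order names below are hypothetical placeholders, not content from the study:

```python
# Hypothetical order sets, each paired with a diagnostic rule set. A CDS
# reminder surfaces an order set when all of its rules match the findings.
ORDER_SETS = {
    "community_acquired_pneumonia": {
        "rules": {"fever", "infiltrate_on_cxr", "cough"},
        "orders": ["blood cultures", "empiric antibiotics", "O2 monitoring"],
    },
    "sepsis_bundle": {
        "rules": {"fever", "hypotension", "elevated_lactate"},
        "orders": ["IV fluids 30 mL/kg", "broad-spectrum antibiotics"],
    },
}

def triggered_order_sets(findings):
    """Return the order sets whose full diagnostic rule set is satisfied."""
    present = set(findings)
    return [name for name, spec in ORDER_SETS.items()
            if spec["rules"] <= present]

print(triggered_order_sets(["fever", "cough", "infiltrate_on_cxr"]))
# -> ['community_acquired_pneumonia']
```

The same rule sets double as evaluation hooks: because the trigger records which clinical problems fired, treatment effect can later be assessed per order set, as the abstract describes.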

  8. Some New Sets of Sequences of Fuzzy Numbers with Respect to the Partial Metric

    PubMed Central

    Ozluk, Muharrem

    2015-01-01

    In this paper, we essentially deal with Köthe-Toeplitz duals of fuzzy level sets defined using a partial metric. Since the utilization of Zadeh's extension principle is quite difficult in practice, we prefer the idea of level sets in order to construct some classical notions. In this paper, we present the sets of bounded, convergent, and null series and the set of sequences of bounded variation of fuzzy level sets, based on the partial metric. We examine the relationships between these sets and their classical forms and give some properties including definitions, propositions, and various kinds of partial metric spaces of fuzzy level sets. Furthermore, we study some of their properties like completeness and duality. Finally, we obtain the Köthe-Toeplitz duals of fuzzy level sets with respect to the partial metric based on a partial ordering. PMID:25695102
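
The partial metric underlying these constructions can be stated compactly. A sketch of the standard axioms (following Matthews' definition; the notation is ours, not taken from the paper):

```latex
% A partial metric on a set X is a map p : X \times X \to \mathbb{R}_{\ge 0}
% in which, unlike an ordinary metric, self-distances p(x,x) need not vanish.
\begin{align*}
&\text{(P1)}\quad x = y \iff p(x,x) = p(x,y) = p(y,y) \\
&\text{(P2)}\quad p(x,x) \le p(x,y) \\
&\text{(P3)}\quad p(x,y) = p(y,x) \\
&\text{(P4)}\quad p(x,z) \le p(x,y) + p(y,z) - p(y,y)
\end{align*}
% Example: p(x,y) = \max(x,y) on \mathbb{R}_{\ge 0} is a partial metric
% with self-distance p(x,x) = x.
```

The possibly nonzero self-distance is what distinguishes the sequence spaces built here from their classical metric counterparts.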

  9. Discovering variable fractional orders of advection-dispersion equations from field data using multi-fidelity Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em

    2017-11-01

    The fractional advection-dispersion equation (FADE) can accurately describe solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.

  10. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines a closest-point finding procedure with the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
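
The mapping at the heart of the method can be shown directly: a multi-region system stores one unsigned distance function plus a region indicator, and a conventional signed level set for any single region is recovered on the fly. A one-dimensional numpy sketch (the geometry and grid are illustrative; the paper's high-order reinitialization is not reproduced here):

```python
import numpy as np

# Regional level-set data: one unsigned distance psi >= 0 plus an integer
# region indicator chi. The signed level set of region i is recovered by
# phi_i = psi inside region i and -psi elsewhere.
x = np.linspace(0.0, 3.0, 301)
chi = np.minimum(np.floor(x).astype(int), 2)        # regions 0,1,2 tiling [0,3]
psi = np.minimum(np.abs(x - 1.0), np.abs(x - 2.0))  # distance to the interfaces

def signed_level_set(region):
    return np.where(chi == region, psi, -psi)

phi1 = signed_level_set(1)
print(phi1[0] < 0.0, phi1[np.argmin(np.abs(x - 1.5))] > 0.0)
# negative outside region 1, positive inside it
```

Because only one scalar field plus an indicator is stored regardless of how many regions exist, the cost is independent of the number of regions N, matching the scaling claimed in the abstract.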

  11. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
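
The weight function idea can be sketched directly: the level set enters only through a bump that is large where |phi| is small, so the condition-number relaxation draws cells toward the interface. The Gaussian form and its parameters below are illustrative stand-ins, not the paper's exact weight:

```python
import numpy as np

# Interface-clustering weight for condition number mesh relaxation: cells
# whose level-set value is near zero (i.e., on the interface) receive the
# largest weight. Amplitude and width are assumed tuning parameters.
def interface_weight(phi, amplitude=10.0, width=0.1):
    return 1.0 + amplitude * np.exp(-(phi / width) ** 2)

phi = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # signed distances of cell centers
w = interface_weight(phi)
print(int(w.argmax()), float(w.max()))  # the phi = 0 cell gets weight 11.0
```

A smooth weight matters here: the Taylor-series DG projection of w is differentiated during the optimization, so kinks in the weight would feed straight into the relaxation.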

  12. A discontinuous Galerkin conservative level set scheme for interface capturing in multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mfc86@cornell.edu; Desjardins, Olivier

    2013-09-15

    The accurate conservative level set (ACLS) method of Desjardins et al. [O. Desjardins, V. Moureau, H. Pitsch, An accurate conservative level set/ghost fluid method for simulating turbulent atomization, J. Comput. Phys. 227 (18) (2008) 8395–8416] is extended by using a discontinuous Galerkin (DG) discretization. DG allows for the scheme to have an arbitrarily high order of accuracy with the smallest possible computational stencil resulting in an accurate method with good parallel scaling. This work includes a DG implementation of the level set transport equation, which moves the level set with the flow field velocity, and a DG implementation of the reinitialization equation, which is used to maintain the shape of the level set profile to promote good mass conservation. A near second order converging interface curvature is obtained by following a height function methodology (common amongst volume of fluid schemes) in the context of the conservative level set. Various numerical experiments are conducted to test the properties of the method and show excellent results, even on coarse meshes. The tests include Zalesak’s disk, two-dimensional deformation of a circle, time evolution of a standing wave, and a study of the Kelvin–Helmholtz instability. Finally, this novel methodology is employed to simulate the break-up of a turbulent liquid jet.

  13. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  14. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
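
For contrast with the paper's doubly inductive construction, the usual naive sampler closes a random relation under reflexivity and transitivity; that closure step is precisely what skews such samplers away from uniform sampling. A small sketch (function names are ours):

```python
import random

def random_quasi_order(n, p=0.2, seed=0):
    """Naive sampler: random relation + reflexive-transitive (Warshall) closure.
    The closure is what makes the resulting distribution non-uniform."""
    rng = random.Random(seed)
    rel = [[i == j or rng.random() < p for j in range(n)] for i in range(n)]
    for k in range(n):                      # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
    return rel

def is_quasi_order(rel):
    """Check reflexivity and transitivity of a boolean relation matrix."""
    n = len(rel)
    reflexive = all(rel[i][i] for i in range(n))
    transitive = all(rel[i][j] or not (rel[i][k] and rel[k][j])
                     for i in range(n) for j in range(n) for k in range(n))
    return reflexive and transitive

q = random_quasi_order(6)
print(is_quasi_order(q))  # True: the closure always yields a quasi-order
```

Every output is a valid quasi-order, but dense relations are over-represented; the paper's inductive extension-and-correction scheme is designed to remove exactly this kind of bias.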

  16. Evaluation of Model Specification, Variable Selection, and Adjustment Methods in Relation to Propensity Scores and Prognostic Scores in Multilevel Data

    ERIC Educational Resources Information Center

    Yu, Bing; Hong, Guanglei

    2012-01-01

    This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…

  17. Improving geriatric prescribing in the ED: a qualitative study of facilitators and barriers to clinical decision support tool use.

    PubMed

    Vandenberg, Ann E; Vaughan, Camille P; Stevens, Melissa; Hastings, Susan N; Powers, James; Markland, Alayne; Hwang, Ula; Hung, William; Echt, Katharina V

    2017-02-01

    Clinical decision support (CDS) may improve prescribing for older adults in the Emergency Department (ED) if adopted by providers. Existing prescribing order entry processes were mapped at an initial Veterans Administration Medical Center site, demonstrating cognitive burden, effort and safety concerns. Geriatric order sets incorporating 2012 Beers guidelines and including geriatric prescribing advice and prepopulated order options were developed. Geriatric order sets were implemented at two sites as part of the multicomponent 'Enhancing Quality of Prescribing Practices for Older Veterans Discharged from the Emergency Department' quality improvement initiative. Facilitators and barriers to order sets use at the two sites were evaluated. Phone interviews were conducted with two provider groups (n = 20), those 'EQUiPPED' with the interventions (n = 10, 5 at each site) and Comparison providers who were only exposed to order sets through a clickable option on the ED order menu within the patient's medical record (n = 10, 5 at each site). All providers were asked about order set 'use' and 'usefulness'. Users (n = 11) were asked about 'usability'. Order set adopters described 'usefulness' in terms of 'safety' and 'efficiency', whereas order set consultants and order set non-users described 'usefulness' in terms of 'information' or 'training'. Provider 'autonomy', 'comfort' level with existing tools, and 'learning curve' were stated as barriers to use. Quantifying efficiency advantages and communicating safety benefit over preexisting practices and tools may improve adoption of CDS in ED and in other settings of care.

  18. Auxiliary basis sets for density-fitting second-order Møller-Plesset perturbation theory: weighted core-valence correlation consistent basis sets for the 4d elements Y-Pd.

    PubMed

    Hill, J Grant

    2013-09-30

    Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit.

  19. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study, the methods’ impact on service level and tardiness is compared to a constant provided capacity for a single- and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649
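
The comparison in the study can be miniaturized: simulate weekly stochastic capacity demand and compare a constant provided capacity against a rule that raises capacity toward the cumulated booked load within a flexibility band. All parameters below are illustrative, not the paper's experimental settings:

```python
import random

# Toy single-machine comparison: constant capacity vs. a periodical rule that
# sets capacity from the cumulated capacity demand, capped by the flexibility
# band [BASE, BASE + FLEX]. Hours and distributions are assumed values.
random.seed(1)
PERIODS = 52
BASE, FLEX = 40.0, 10.0            # contracted weekly hours and flexible range

served_const = served_flex = 0.0
backlog_c = backlog_f = 0.0
for _ in range(PERIODS):
    d = max(random.gauss(42.0, 8.0), 0.0)   # booked hours this period
    # constant provided capacity
    done = min(backlog_c + d, BASE)
    served_const += done
    backlog_c += d - done
    # demand-driven capacity within the flexibility band
    cap = min(BASE + FLEX, max(BASE, backlog_f + d))
    done = min(backlog_f + d, cap)
    served_flex += done
    backlog_f += d - done

print(served_flex >= served_const, backlog_f <= backlog_c)
# the flexible rule never serves less and never carries a larger backlog
```

This mirrors the paper's headline result in miniature: using information about the cumulated demand raises the amount served on time relative to a constant provided capacity.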

  20. Dynamic characteristics of a multi-wavelength Brillouin-Raman fiber laser assisted by multiple four-wave mixing processes in a ring cavity

    NASA Astrophysics Data System (ADS)

    Shirazi, M. R.; Mohamed Taib, J.; De La Rue, R. M.; Harun, S. W.; Ahmad, H.

    2015-03-01

    Dynamic characteristics of a multi-wavelength Brillouin-Raman fiber laser (MBRFL) assisted by four-wave mixing have been investigated through the development of Stokes and anti-Stokes lines under different combinations of Brillouin and Raman pump power levels and different Raman pumping schemes in a ring cavity. For a Stokes line of order higher than three, the threshold power was less than the saturation power of the preceding-order Stokes line. With increasing Brillouin pump power, the nth-order anti-Stokes and the (n+4)th-order Stokes power levels unexpectedly increased by almost the same amount before the Stokes line threshold power was reached. It was also found that the SBS threshold reduction (SBSTR) depended linearly on the gain factor for the 1st and 2nd Stokes lines, taken as the first set. For the 3rd and 4th Stokes lines, taken as the second set, this relation was almost linear with the same slope up to an SBSTR of -6 dB, and then approached the linear relation of the first set as the gain factor was increased to 50 dB. Therefore, the threshold power levels of the Stokes lines for a given Raman gain can be readily estimated simply by knowing the threshold power levels when there is no Raman amplification.

  1. Basis set and electron correlation effects on the polarizability and second hyperpolarizability of model open-shell π-conjugated systems

    NASA Astrophysics Data System (ADS)

    Champagne, Benoît; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi

    2005-03-01

    The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations show that the linear and nonlinear responses of the radical cation necessitate the use of a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8 whereas diffuse functions are compulsory for C5H7, in particular, p diffuse functions. In addition to the 6-31G*+pd basis set, basis sets resulting from removing unnecessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality as more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation, antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively. For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples level, whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. As regards the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality, while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining an accurate polarizability, whereas for the second hyperpolarizability the gradient corrections are large.
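
The central quantity, the static polarizability α, has a compact finite-field definition that underlies such basis-set studies: α = -d²E/dF², estimated from energies at fields ±F. In this sketch, the quadratic model energy is a stand-in for an actual ab initio calculation; mu and alpha are assumed values in atomic units:

```python
# Finite-field estimate of the static polarizability:
#   alpha = -(E(+F) - 2 E(0) + E(-F)) / F**2
# from the expansion E(F) = E0 - mu*F - 0.5*alpha*F**2.
def energy(field, mu=0.3, alpha=12.5):
    """Model energy surface (stand-in for an ab initio calculation)."""
    return -1.0 - mu * field - 0.5 * alpha * field ** 2

F = 1.0e-3
alpha_ff = -(energy(F) - 2.0 * energy(0.0) + energy(-F)) / F ** 2
print(round(alpha_ff, 6))  # recovers the model value 12.5
```

The second hyperpolarizability γ is the corresponding fourth derivative, so its finite-field estimate needs a five-point stencil and is far more sensitive to the field strength, which is one reason basis-set and correlation effects on γ are harder to converge than those on α.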

  2. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods have been used to compute the first- and second-order spatial derivatives of computed tomographic colonography images, which is the starting point of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect initial polyp candidates, and experiments showed that it benefits the CAD scheme in both detection sensitivity and specificity compared to our previous work.
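
    The underlying derivative-by-convolution idea (which LSAC adapts near the colon wall) can be sketched with plain central-difference kernels. This is a generic one-dimensional illustration on made-up data, not the paper's adaptive method:

    ```python
    def convolve1d(row, kernel):
        # correlation-style 'same' convolution with zero padding
        k = len(kernel) // 2
        out = []
        for i in range(len(row)):
            s = 0.0
            for j, w in enumerate(kernel):
                idx = i + j - k
                if 0 <= idx < len(row):
                    s += w * row[idx]
            out.append(s)
        return out

    # Central-difference kernels for first and second derivatives (unit spacing)
    D1 = [-0.5, 0.0, 0.5]   # f'(x)  ≈ (f[x+1] - f[x-1]) / 2
    D2 = [1.0, -2.0, 1.0]   # f''(x) ≈ f[x+1] - 2 f[x] + f[x-1]

    samples = [x * x for x in range(6)]   # f(x) = x², so f' = 2x and f'' = 2
    first = convolve1d(samples, D1)       # interior values ≈ 2x
    second = convolve1d(samples, D2)      # interior values ≈ 2
    ```

    In the 2D imaging case the same kernels are applied along each image axis; the adaptive variants additionally reshape the kernel support using the wall representation.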

  3. Computerized Provider Order Entry and Health Care Quality on Hospital Level among Pediatric Patients during 2006-2009

    ERIC Educational Resources Information Center

    Wang, Liya

    2016-01-01

    This study examined the association between Computerized Physician Order Entry (CPOE) application and healthcare quality in pediatric patients at hospital level. This was a retrospective study among 1,428 hospitals with pediatric setting in Healthcare Cost and Utilization Project (HCUP) Kid's Inpatient Database (KID) and Health Information and…

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broda, Jill Terese

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the automated solution of even a simplified version of this equation is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore, solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this may not be the most efficient approach. The one-dimensional, two-energy-group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set combines a second-order approximation in energy group one with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order spatial flux shape approximation results in considerable loss of accuracy for the pressurized water reactor modeled. However, the loss of accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of using the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.
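
    A much simpler cousin of the nodal two-group calculation described above is the one-group, one-dimensional finite-difference diffusion eigenvalue problem, solved by power iteration for k-effective. The sketch below is only an illustration of that basic scheme; the cross-section values are invented and it is not the dissertation's method:

    ```python
    # Solve -D phi'' + Siga*phi = (1/k) nuSigf*phi on a slab with zero-flux
    # boundaries, via finite differences plus power iteration on k.
    def k_effective(n=50, length=100.0, D=1.0, sig_a=0.01, nu_sig_f=0.012):
        h = length / (n + 1)              # mesh spacing (ghost boundary nodes)
        diag = 2.0 * D / h**2 + sig_a     # tridiagonal system coefficients
        off = D / h**2
        phi = [1.0] * n
        k = 1.0
        for _ in range(80):               # outer (power) iterations on k
            src = [nu_sig_f * p / k for p in phi]
            new = phi[:]                  # warm-start the inner solve
            for _ in range(200):          # Gauss-Seidel sweeps
                for i in range(n):
                    left = new[i - 1] if i > 0 else 0.0
                    right = new[i + 1] if i < n - 1 else 0.0
                    new[i] = (src[i] + off * (left + right)) / diag
            k *= sum(new) / sum(phi)      # update k from fission-source ratio
            phi = new
        return k

    # Analytic slab estimate: nuSigf / (Siga + D*(pi/L)^2) ≈ 1.09 for these data
    k = k_effective()
    ```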

  5. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, and (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to that of the semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  6. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. This model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method, as follows. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering the local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, and then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set methods.

  7. Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    2006-01-01

    Borrowing from techniques developed for conservation law equations, we have developed both monotone and higher order accurate numerical schemes which discretize the Hamilton-Jacobi and level set equations on triangulated domains. The use of unstructured meshes containing triangles (2D) and tetrahedra (3D) easily accommodates mesh adaptation to resolve disparate level set feature scales with a minimal number of solution unknowns. The minisymposium talk will discuss these algorithmic developments and present sample calculations using our adaptive triangulation algorithm applied to various moving interface problems such as etching, deposition, and curvature flow.

  8. Refined energetic ordering for sulphate-water (n = 3-6) clusters using high-level electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lambrecht, Daniel S.; McCaslin, Laura; Xantheas, Sotiris S.; Epifanovsky, Evgeny; Head-Gordon, Martin

    2012-10-01

    This work reports refinements of the energetic ordering of the known low-energy structures of sulphate-water clusters (n = 3-6) using high-level electronic structure methods. Coupled cluster singles and doubles with perturbative triples (CCSD(T)) is used in combination with an estimate of basis set effects up to the complete basis set limit using second-order Møller-Plesset theory. Harmonic zero-point energy (ZPE), included at the B3LYP/6-311++G(3df,3pd) level, was found to have a significant effect on the energetic ordering. In fact, we show that the energetic ordering is a result of a delicate balance between the electronic and vibrational energies. Limitations of the ZPE calculations, due both to electronic structure errors and to the use of the harmonic approximation, probably constitute the largest remaining errors. Due to the often small energy differences between cluster isomers, and the significant role of ZPE, deuteration can alter the relative energies of low-lying structures and, when applied in conjunction with calculated harmonic ZPEs, even alters the global minimum for n = 5. Experiments on deuterated clusters, as well as more sophisticated vibrational calculations, may therefore be quite interesting.
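
    The electronic-versus-vibrational balance described above can be illustrated with a toy re-ranking exercise. All numbers below are invented for the sketch, and the 1/√2 deuteration factor is only a crude X-H stretch estimate, not the paper's treatment:

    ```python
    import math

    # Hypothetical relative electronic energies and harmonic ZPEs (kcal/mol)
    # for three made-up isomers; illustrative only, not data from the paper.
    isomers = {
        "A": {"E_elec": 0.00, "zpe_H": 55.0},
        "B": {"E_elec": 0.30, "zpe_H": 54.6},
        "C": {"E_elec": 0.90, "zpe_H": 54.8},
    }

    def ranking(zpe_scale=1.0):
        # total = electronic energy + (scaled) harmonic ZPE
        total = {k: v["E_elec"] + zpe_scale * v["zpe_H"] for k, v in isomers.items()}
        return sorted(total, key=total.get)

    order_h = ranking(1.0)                 # protiated clusters
    order_d = ranking(1.0 / math.sqrt(2))  # crude deuterated estimate
    ```

    With these toy numbers the ZPE term reverses the electronic ordering of A and B, and scaling the ZPE (deuteration) flips the global minimum back, mirroring the kind of isotope-driven reordering reported for n = 5.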

  9. An original concurrent resolution setting forth the congressional budget for the United States Government for fiscal year 2011, revising the appropriate budgetary levels for fiscal year 2010, and setting forth the appropriate budgetary levels for fiscal years 2012 through 2015.

    THOMAS, 111th Congress

    Sen. Conrad, Kent [D-ND

    2010-04-26

    Senate - 04/26/2010 Placed on Senate Legislative Calendar under General Orders. Calendar No. 358.

  10. The Relation of Birth Order, Social Class, and Need Achievement to Independent Judgement

    ERIC Educational Resources Information Center

    Rhine, W. Ray

    1974-01-01

    This article reports an investigation in which birth order, social class, and level of achievement arousal are the variables considered when fifth- and sixth-grade girls make independent judgements in performing a set task. (JH)

  11. Depositional sequence stratigraphy and architecture of the cretaceous ferron sandstone: Implications for coal and coalbed methane resources - A field excursion

    USGS Publications Warehouse

    Garrison, J.R.; Van Den, Bergh; Barker, C.E.; Tabet, D.E.

    1997-01-01

    This Field Excursion will visit outcrops of the fluvial-deltaic Upper Cretaceous (Turonian) Ferron Sandstone Member of the Mancos Shale, known as the Last Chance delta or Upper Ferron Sandstone. This field guide and the field stops will outline the architecture and depositional sequence stratigraphy of the Upper Ferron Sandstone clastic wedge and explore the stratigraphic positions and compositions of major coal zones. The implications of the architecture and stratigraphy of the Ferron fluvial-deltaic complex for coal and coalbed methane resources will be discussed. Early works suggested that the southwesterly derived deltaic deposits of the upper Ferron Sandstone clastic wedge were a Type-2 third-order depositional sequence, informally called the Ferron Sequence. These works suggested that the Ferron Sequence is separated by a type-2 sequence boundary from the underlying 3rd-order Hyatti Sequence, which has its sediment source from the northwest. Within the 3rd-order depositional sequence, the deltaic events of the Ferron clastic wedge, recognized as parasequence sets, appear to be stacked into progradational, aggradational, and retrogradational patterns reflecting a generally decreasing sediment supply during an overall slow sea-level rise. The architecture of both near-marine facies and non-marine fluvial facies exhibits well-defined trends in response to this decrease in available sediment. Recent studies have concluded that, unless coincident with a depositional sequence boundary, regionally extensive coal zones occur at the tops of the parasequence sets within the Ferron clastic wedge. These coal zones consist of coal seams and their laterally equivalent fissile carbonaceous shales, mudstones, and siltstones, paleosols, and flood plain mudstones.
Although the compositions of coal zones vary along depositional dip, the presence of these laterally extensive stratigraphic horizons, above parasequence sets, provides a means of correlating and defining the tops of depositional parasequence sets in both near-marine and non-marine parts of fluvial-deltaic depositional sequences. Ongoing field studies, based on this concept of coal zone stratigraphy, and detailed stratigraphic mapping, have documented the existence of at least 12 parasequence sets within the Last Chance delta clastic wedge. These parasequence sets appear to form four high-frequency, 4th-order depositional sequences. The dramatic erosional unconformities associated with these 4th-order sequence boundaries indicate that there was up to 20-30 m of erosion, signifying locally substantial base-level drops. These base-level drops were accompanied by a basinward shift in paleo-shorelines by as much as 5-7 km. These 4th-order Upper Ferron Sequences are superimposed on the 3rd-order sea-level rise event and the 3rd-order, sediment supply/accommodation space driven, stratigraphic architecture of the Upper Ferron Sandstone. The fluvial-deltaic architecture shows little response to these 4th-order sea-level events. Coal zones generally thicken landward relative to the mean position of the landward pinch-out of the underlying parasequence set, but after some distance landward, they decrease in thickness. Coal zones also generally thin seaward relative to the mean position of the landward pinch-out of the underlying parasequence set. The coal is thickest in the region between this landward pinch-out and the position of maximum zone thickness. Data indicate that the proportion of coal in the coal zone decreases progressively landward from the landward pinch-out. The effects of differential compaction and differences in original pre-peat swamp topography add perturbations to the general trends.
These coal zone systematics have a major impact on approaches to exploration and production and on the resource assessment of both coal and coalbed methane.

  12. Accurate Methods for Large Molecular Systems (Preprint)

    DTIC Science & Technology

    2009-01-06

    tensor, EFP calculations are basis set dependent. The smallest recommended basis set is 6-31++G(d,p). The dependence of the computational cost of...and second order perturbation theory (MP2) levels with the 6-31G(d,p) basis set. Additional SFM tests are presented for a small set of alpha-helices using the 6-31++G(d,p) basis set. The larger 6-311++G(3df,2p) basis set is employed for creating all EFPs used for non-bonded interactions, since

  13. Development of a coupled level set and immersed boundary method for predicting dam break flows

    NASA Astrophysics Data System (ADS)

    Yu, C. H.; Sheu, Tony W. H.

    2017-12-01

    Dam-break flow over an immersed stationary object is investigated using a coupled level set (LS)/immersed boundary (IB) method developed on Cartesian grids. This approach adopts an improved interface-preserving level set method, which comprises three solution steps, and a differential-based interpolation immersed boundary method to treat the fluid-fluid and solid-fluid interfaces, respectively. In the first step of the level set method, the level set function ϕ is advected by a pure advection equation. The intermediate step obtains a new level set value through a new smoothed Heaviside function. In the final solution step, a mass correction term is added to the re-initialization equation to ensure that the new level set is a distance function and to conserve the mass bounded by the interface. For accurate calculation of the level set value, the four-point upwinding combined compact difference (UCCD) scheme, with a three-point boundary combined compact difference scheme, is applied to approximate the first-order derivative term in the level set equation. For the immersed boundary method, application of an artificial momentum forcing term at points in cells containing both fluid and solid allows imposition of a velocity condition to account for the presence of the solid object. The incompressible Navier-Stokes solutions are calculated using the projection method. Numerical results show that the coupled LS/IB method not only predicts the interface accurately but also preserves mass conservation excellently for the dam-break flow.
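
    The advection step of a level set scheme can be sketched with a first-order upwind discretization of ϕ_t + u ϕ_x = 0. The paper itself uses high-order UCCD schemes plus reinitialization, so this one-dimensional sketch on invented data is only a minimal illustration of the transport step:

    ```python
    # First-order upwind advection of a 1D level-set function phi.
    def advect(phi, u, dx, dt, steps):
        phi = phi[:]
        n = len(phi)
        for _ in range(steps):
            new = phi[:]
            for i in range(1, n - 1):       # boundary values held fixed
                if u > 0:
                    new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])
                else:
                    new[i] = phi[i] - u * dt / dx * (phi[i + 1] - phi[i])
            phi = new
        return phi

    dx, dt, u = 0.1, 0.05, 1.0                 # CFL number u*dt/dx = 0.5
    phi0 = [i * dx - 0.5 for i in range(21)]   # signed distance, zero at x = 0.5
    phi = advect(phi0, u, dx, dt, steps=4)     # interface moves by u*dt*4 = 0.2
    ```

    Because the initial data are linear, the upwind update transports the zero level set exactly here (from x = 0.5 to x = 0.7); reinitialization becomes necessary once the advected ϕ drifts away from a signed-distance profile.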

  14. Superconvergent second order Cartesian method for solving free boundary problem for invadopodia formation

    NASA Astrophysics Data System (ADS)

    Gallinato, Olivier; Poignard, Clair

    2017-06-01

    In this paper, we present a superconvergent second order Cartesian method to solve a free boundary problem with two harmonic phases coupled through the moving interface. The model recently proposed by the authors and colleagues describes the formation of cell protrusions. The moving interface is described by a level set function and is advected at the velocity given by the gradient of the inner phase. The finite differences method proposed in this paper consists of a new stabilized ghost fluid method and second order discretizations for the Laplace operator with the boundary conditions (Dirichlet, Neumann or Robin conditions). Interestingly, the method to solve the harmonic subproblems is superconvergent on two levels, in the sense that the first and second order derivatives of the numerical solutions are obtained with the second order of accuracy, similarly to the solution itself. We exhibit numerical criteria on the data accuracy to get such properties and numerical simulations corroborate these criteria. In addition to these properties, we propose an appropriate extension of the velocity of the level-set to avoid any loss of consistency, and to obtain the second order of accuracy of the complete free boundary problem. Interestingly, we highlight the transmission of the superconvergent properties for the static subproblems and their preservation by the dynamical scheme. Our method is also well suited for quasistatic Hele-Shaw-like or Muskat-like problems.

  15. The effect of diffuse basis functions on valence bond structural weights

    NASA Astrophysics Data System (ADS)

    Galbraith, John Morrison; James, Andrew M.; Nemes, Coleen T.

    2014-03-01

    Structural weights and bond dissociation energies have been determined for H-F, H-X, and F-X molecules (-X = -OH, -NH2, and -CH3) at the valence bond self-consistent field (VBSCF) and breathing orbital valence bond (BOVB) levels of theory with the aug-cc-pVDZ and 6-31++G(d,p) basis sets. At the BOVB level, the aug-cc-pVDZ basis set yields a counterintuitive ordering of ionic structural weights when the initial heavy atom s-type basis functions are included. For H-F, H-OH, and F-X, the ordering follows chemical intuition when these basis functions are not included. These counterintuitive weights are shown to be a result of the diffuse polarisation function on one VB fragment being spatially located, in part, on the other VB fragment. Except in the case of F-CH3, this problem is corrected with the 6-31++G(d,p) basis set. The initial heavy atom s-type functions are shown to make an important contribution to the VB orbitals and bond dissociation energies and, therefore, should not be excluded. It is recommended to not use diffuse basis sets in valence bond calculations unless absolutely necessary. If diffuse basis sets are needed, the 6-31++G(d,p) basis set should be used with caution and the structural weights checked against VBSCF values which have been shown to follow the expected ordering in all cases.

  16. Using rewards and penalties to obtain desired subject performance

    NASA Technical Reports Server (NTRS)

    Cook, M.; Jex, H. R.; Stein, A. C.; Allen, R. W.

    1981-01-01

    The use of operant conditioning procedures, specifically negative reinforcement, to achieve stable learning behavior is described. The critical tracking test (CTT), a method of detecting human operator impairment, was tested. A pass level is set for each subject, based on that subject's asymptotic skill level while sober. It is critical that complete training take place before the individualized pass level is set so that impairment can be detected. The results provide a more general basis for the application of reward/penalty structures in manual control research.

  17. Multi person detection and tracking based on hierarchical level-set method

    NASA Astrophysics Data System (ADS)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set; it is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.

  18. Mapping Topographic Structure in White Matter Pathways with Level Set Trees

    PubMed Central

    Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy

    2014-01-01

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and for an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673
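
    The level set tree idea, tracking how the connected components of the upper level set {x : f(x) ≥ λ} branch and vanish as λ rises, can be sketched in one dimension. The bimodal density below is invented purely for illustration:

    ```python
    import math

    def components_above(density, level):
        # count connected runs of grid cells whose density is >= level
        count, inside = 0, False
        for d in density:
            if d >= level and not inside:
                count, inside = count + 1, True
            elif d < level:
                inside = False
        return count

    # Toy bimodal density on a 1D grid: two Gaussian bumps of unequal height
    grid = [i * 0.1 for i in range(81)]
    density = [math.exp(-(x - 2) ** 2) + 0.8 * math.exp(-(x - 6) ** 2)
               for x in grid]

    # Skeleton of the level set tree: component count at rising levels
    levels = [0.01, 0.5, 0.9]
    branches = [components_above(density, lam) for lam in levels]
    ```

    Here the root of the tree (one component at a low level) splits into two branches as the level passes the saddle between the modes, and only the taller mode survives at the highest level.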

  19. The effects of an educational meeting and subsequent computer reminders on the ordering of laboratory tests by rheumatologists: an interrupted time series analysis.

    PubMed

    Lesuis, Nienke; den Broeder, Nathan; Boers, Nadine; Piek, Ester; Teerenstra, Steven; Hulscher, Marlies; van Vollenhoven, Ronald; den Broeder, Alfons A

    2017-01-01

    To examine the effects of an educational meeting and subsequent computer reminders on the number of ordered laboratory tests. Using interrupted time series analysis, we assessed whether trends in the number of laboratory tests ordered by rheumatologists between September 2012 and September 2015 at the Sint Maartenskliniek (the Netherlands) changed following an educational meeting (September 2013) and the introduction of computer reminders into the Computerised Physician Order Entry system (July 2014). The analyses were done for the set of tests on which both interventions had focussed (intervention tests: complement, cryoglobulins, immunoglobulins, myeloma protein) and a set of control tests unrelated to the interventions (alanine transferase, anti-cyclic citrullinated peptide, C-reactive protein, creatinine, haemoglobin, leukocytes, mean corpuscular volume, rheumatoid factor and thrombocytes). At the start of the study, 101 intervention tests and 7660 control tests were ordered per month by the rheumatologists. After the educational meeting, neither the level nor the trend of ordered intervention and control tests changed significantly. After implementation of the reminders, the level of ordered intervention tests decreased by 85.0 tests (95% CI -133.3 to -36.8, p<0.01), while the level of control tests did not change. In summary, an educational meeting alone was not effective in decreasing the number of ordered intervention tests, but the combination with computer reminders did result in a large decrease of those tests. We therefore recommend using computer reminders in addition to education when aiming to reduce inappropriate test use.
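
    The segmented-regression core of an interrupted time series analysis can be sketched by fitting separate level-and-trend lines before and after the intervention and reading off the level change at the intervention point. The monthly counts below are invented (they merely mimic the reported magnitudes), and the real analysis would also model trend changes and autocorrelation:

    ```python
    def fit_line(ts, ys):
        # ordinary least squares for y = a + b*t
        n = len(ts)
        tbar, ybar = sum(ts) / n, sum(ys) / n
        b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
        return ybar - b * tbar, b   # intercept a, slope b

    # Hypothetical monthly intervention-test counts: flat at 101 pre-reminder,
    # then an abrupt drop of 85 tests from month 12 onward.
    pre  = [(t, 101.0) for t in range(12)]
    post = [(t, 16.0) for t in range(12, 24)]

    a_pre, b_pre = fit_line(*zip(*pre))
    a_post, b_post = fit_line(*zip(*post))

    # Estimated level change at the intervention month (t = 12)
    level_change = (a_post + b_post * 12) - (a_pre + b_pre * 12)
    ```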

  20. A concurrent resolution setting forth the congressional budget for the United States Government for fiscal year 2013 and setting forth the appropriate budgetary levels for fiscal years 2014 through 2022.

    THOMAS, 112th Congress

    Sen. Lee, Mike [R-UT

    2012-07-19

    Senate - 07/19/2012 Placed on Senate Legislative Calendar under General Orders. Calendar No. 462.

  1. A concurrent resolution setting forth the congressional budget for the United States Government for fiscal year 2012 and setting forth the appropriate budgetary levels for fiscal years 2013 through 2021.

    THOMAS, 112th Congress

    Sen. Toomey, Pat [R-PA

    2011-05-19

    Senate - 05/19/2011 Placed on Senate Legislative Calendar under General Orders. Calendar No. 62.

  2. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. To overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes the capture settings into account. The method for calculating the colorimetric values of a measured image contains five main steps, including conversion of RGB values to their equivalents under the training settings, through factors based on an imaging-system model that bridges the different settings, and scaling factors in the preparation steps of the transformation mapping, which avoid errors arising from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.

  3. Ab initio calculation of reaction energies. III. Basis set dependence of relative energies on the FH2 and H2CO potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Frisch, Michael J.; Binkley, J. Stephen; Schaefer, Henry F., III

    1984-08-01

    The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol⁻¹ of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.

  4. Rank Order Entropy: why one metric is not enough

    PubMed Central

    McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.

    2011-01-01

    The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r², PRESS r², F-tests, etc.) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability.
Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. PMID:21875058
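
    Kendall's tau, the rank order metric used above, is straightforward to compute directly from concordant and discordant pairs. The score lists below are invented to show a perfect ranking and a partially scrambled one (the tie-free form of tau is assumed):

    ```python
    from itertools import combinations

    def kendall_tau(a, b):
        # Kendall rank correlation between two equal-length score lists
        n = len(a)
        concordant = discordant = 0
        for i, j in combinations(range(n), 2):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
        return (concordant - discordant) / (n * (n - 1) / 2)

    true_scores = [1.0, 2.0, 3.0, 4.0, 5.0]
    predicted   = [1.1, 2.2, 2.9, 4.3, 4.9]   # same ranking as true_scores
    scrambled   = [2.2, 1.1, 2.9, 4.9, 4.3]   # two adjacent swaps

    tau_good = kendall_tau(true_scores, predicted)   # perfect rank agreement
    tau_bad = kendall_tau(true_scores, scrambled)    # degraded agreement
    ```

    In a ROE-style evaluation, tau would be recomputed after each random truncation of the training set, and the spread of the resulting tau values feeds the Shannon entropy estimate of rank-order stability.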

  5. How Do Children Behave Regarding Their Birth Order in Dental Setting?

    PubMed

    Ghaderi, Faezeh; Fijan, Soleiman; Hamedani, Shahram

    2015-12-01

    Prediction of a child's cooperation level in the dental setting is an important issue for a dentist selecting the proper behavior management method. Many psychological studies have emphasized the effect of birth order on patient behavior and personality; however, only a few studies have evaluated the effect of birth order on children's behavior in the dental setting. This study was designed to evaluate the influence of children's ordinal position on their behavior in the dental setting. A total of 158 children with at least one primary mandibular molar needing a class I restoration were selected. Children were classified by ordinal position: first, middle, or last child, as well as single child. A blinded examiner recorded the pain perception of children during injection based on the Visual Analogue Scale (VAS) and the Sound, Eye and Movement (SEM) scale. To assess the children's anxiety, the questionnaire known as the "Dental Subscale of the Children's Fear Survey Schedule" (CFSS-DS) was employed. The results showed that single children were significantly less cooperative and more anxious than the other children (p<0.001). The middle children were significantly more cooperative than children in the other ordinal positions (p<0.001). Single children may behave less cooperatively in the dental setting. Birth order should therefore be considered when predicting a child's behavior for behavioral management.

  6. Critical Skill Sets of Entry-Level IT Professionals: An Empirical Examination of Perceptions from Field Personnel

    ERIC Educational Resources Information Center

    McMurtrey, Mark E.; Downey, James P.; Zeltmann, Steven M.; Friedman, William H.

    2008-01-01

    Understanding the skill sets required of IT personnel is a critical endeavor for both business organizations and academic or training institutions. Companies spend crucial resources training personnel, particularly new IT employees, and educational institutions must know what skills are essential in order to plan an effective curriculum. Rapid…

  7. A Study Comparing Fifth Grade Student Achievement in Mathematics in Departmentalized and Non-Departmentalized Settings

    ERIC Educational Resources Information Center

    Nelson, Karen Ann

    2014-01-01

    The purpose of this quantitative, causal-comparative study was to examine the application of the teaching and learning theory of social constructivism in order to determine if mathematics instruction provided in a departmentalized classroom setting at the fifth grade level resulted in a statistically significant difference in student achievement…

  8. Effect of different simulated altitudes on repeat-sprint performance in team-sport athletes.

    PubMed

    Goods P, S R; Dawson, Brian T; Landers, Grant J; Gore, Christopher J; Peeling, Peter

    2014-09-01

    This study aimed to assess the impact of 3 heights of simulated altitude exposure on repeat-sprint performance in team-sport athletes. Ten trained male team-sport athletes completed 3 sets of repeated sprints (9 × 4 s) on a nonmotorized treadmill at sea level and at simulated altitudes of 2000, 3000, and 4000 m. Participants completed 4 trials in a random order over 4 wk, with mean power output (MPO), peak power output (PPO), blood lactate concentration (BLa), and oxygen saturation (SaO2) recorded after each set. Each increase in simulated altitude corresponded with a significant decrease in SaO2. Total work across all sets was highest at sea level and correspondingly lower at each successive altitude (P < .05; sea level < 2000 m < 3000 m < 4000 m). In the first set, MPO was reduced only at 4000 m, but for subsequent sets, decreases in MPO were observed at all altitudes (P < .05; 2000 m < 3000 m < 4000 m). PPO was maintained in all sets except for set 3 at 4000 m (P < .05; vs sea level and 2000 m). BLa levels were highest at 4000 m and significantly greater (P < .05) than at sea level after all sets. These results suggest that "higher may not be better," as a simulated altitude of 4000 m may potentially blunt absolute training quality. Therefore, it is recommended that a moderate simulated altitude (2000-3000 m) be employed when implementing intermittent hypoxic repeat-sprint training for team-sport athletes.

  9. Order of Presentation of Dimensions Does Not Systematically Bias Utility Weights from a Discrete Choice Experiment.

    PubMed

    Norman, Richard; Kemmler, Georg; Viney, Rosalie; Pickard, A Simon; Gamper, Eva; Holzner, Bernhard; Nerich, Virginie; King, Madeleine

    2016-12-01

    Discrete choice experiments (DCEs) are increasingly used to value aspects of health. An issue with their adoption is that results may be sensitive to the order in which dimensions of health are presented in the valuation task, and findings in the literature regarding order effects are currently discordant. Our objective was to quantify the magnitude of any order effect of quality-of-life (QOL) dimensions within the context of a DCE designed to produce country-specific value sets for the EORTC Quality of Life Utility Measure-Core 10 dimensions (QLU-C10D), a new utility instrument derived from the widely used cancer-specific QOL questionnaire, the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. The DCE comprised 960 choice sets, divided into 60 versions of 16 choice sets, with each respondent assigned to one version. Within each version, the order of the QLU-C10D QOL dimensions was randomized, with life duration always in the last position. The DCE was completed online by 2053 individuals in France and Germany. We analyzed the data with a series of conditional logit models, adjusted for repeated choices within respondents, and used F tests to assess order effects, correcting for multiple hypothesis testing. Each F test failed to reject the null hypothesis of no position effect: 1) all QOL order positions considered jointly; 2) last QOL position only; 3) first QOL position only. Furthermore, the order coefficients were small relative to those of the QLU-C10D QOL dimension levels. The order of presentation of QOL dimensions within a DCE designed to provide utility weights for the QLU-C10D therefore had little effect on the level coefficients of those QOL dimensions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. Improvements to Level Set, Immersed Boundary methods for Interface Tracking

    NASA Astrophysics Data System (ADS)

    Vogl, Chris; Leveque, Randy

    2014-11-01

    Moving boundary problems under flow arise in many applications, and of particular interest are cases where the moving boundary exerts a curvature-dependent force on the liquid. Such a force arises when the boundary resists bending or has surface tension. Stable numerical computation of the curvature can be difficult, as it is often described in terms of high-order derivatives of either marker particle positions or of a level set function. To address this issue, the level set method is modified to track not only the position of the boundary but the curvature as well. The definition of the signed-distance function used to modify the level set method is also used to develop an interpolation-free, closest-point method. These improvements are used to simulate a bending-resistant, inextensible boundary under shear flow, highlighting area and volume conservation as well as stable curvature calculation. Funded by an NSF MSPRF grant.
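
    To see why curvature computation from a level set is delicate, the hedged sketch below (a generic illustration, not the authors' method) evaluates the standard curvature formula kappa = div(grad phi / |grad phi|), which involves second derivatives of phi, for the signed-distance function of a circle of radius R, where the exact interface curvature is 1/R.

```python
import numpy as np

# Signed-distance level set for a circle of radius R: phi = sqrt(x^2+y^2) - R.
# The curvature of the level contour through a point at radius r is 1/r,
# so on the zero contour kappa should equal 1/R.
n, R = 201, 0.5
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - R

# First and second derivatives by central differences.
px, py = np.gradient(phi, h)
pxx = np.gradient(px, h, axis=0)
pyy = np.gradient(py, h, axis=1)
pxy = np.gradient(px, h, axis=1)

# kappa = (pxx*py^2 - 2*px*py*pxy + pyy*px^2) / |grad phi|^3
denom = (px**2 + py**2) ** 1.5 + 1e-12
kappa = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / denom

# Average curvature over cells within one grid spacing of the zero contour.
band = np.abs(phi) < h
kappa_interface = float(kappa[band].mean())
```

    Because kappa involves second derivatives of phi, any noise in the level set field is amplified twice by differencing, which is the instability the curvature-tracking modification above is designed to avoid.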

  11. Application of the order-of-magnitude analysis to a fourth-order RANS closure for simulating a 2D boundary layer

    NASA Astrophysics Data System (ADS)

    Poroseva, Svetlana V.

    2013-11-01

    Simulations of turbulent boundary-layer flows are usually conducted using a set of simplified Reynolds-Averaged Navier-Stokes (RANS) equations obtained by order-of-magnitude analysis (OMA) of the original RANS equations. The resulting equations for the mean-velocity components are closed using the Boussinesq approximation for the Reynolds stresses. In this study, OMA is applied to the fourth-order RANS (FORANS) set of equations. The FORANS equations are chosen because they can be closed at the level of the fifth-order correlations without unknown model coefficients, i.e., no turbulent diffusion modeling is required. New models for the second-, third-, and fourth-order velocity-pressure gradient correlations are derived for the current FORANS equations. This set of FORANS equations and models is analyzed for the case of two-dimensional mean flow. The equations include familiar transport terms for the mean-velocity components along with algebraic expressions for velocity correlations of different orders specific to the FORANS approach. Flat-plate DNS data (Spalart, 1988) are used to verify these expressions and the areas of OMA applicability within the boundary layer. The material is based upon work supported by NASA under award NNX12AJ61A.

  12. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu

    2017-11-01

    This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep-learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth the HSI; this strategy effectively attenuates noise and restores texture information. High-quality spectral-spatial features can then be extracted from the HSI by considering geometric closeness and photometric similarity among pixels simultaneously. Second, higher order statistics techniques are introduced into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectral shapes. Feature-level fusion is then applied to the extracted spectral-spatial features along with the higher order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
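
    A minimal sketch of the feature-level fusion step, under loudly stated assumptions: simple box smoothing stands in for the wave-atom bilateral filter, per-spectrum skewness and kurtosis stand in for the higher-order-statistics features, and the multifractal features are omitted. The fused matrix is what would feed the stacked sparse autoencoder.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)

# Toy hyperspectral cube: 10 x 10 pixels, 50 spectral bands.
cube = rng.normal(size=(10, 10, 50))
pixels = cube.reshape(-1, 50)            # one spectrum per row

# Spatial smoothing of each band stands in for the bilateral-filter stage.
smoothed = uniform_filter(cube, size=(3, 3, 1)).reshape(-1, 50)

# Higher-order statistics per spectrum: skewness and kurtosis summarize
# the non-Gaussian shape of each spectral curve.
hos = np.column_stack([skew(pixels, axis=1), kurtosis(pixels, axis=1)])

# Feature-level fusion = concatenation of the feature sets; the fused
# matrix would then feed the autoencoder and random forest classifier.
fused = np.hstack([smoothed, hos])
```

    The essential design point is that fusion happens at the feature level (one wide matrix per pixel) rather than at the decision level (combining separate classifier outputs).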

  13. 7 CFR 1599.4 - Application process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the activities in order to foster local capacity building and leadership; (6) A budget that details... statistics on poverty levels, food deficits, literacy rates, and any other required items set forth on the...

  14. Peer pressure and public reporting within healthcare setting: improving accountability and health care quality in hospitals.

    PubMed

    Specchia, Maria Lucia; Veneziano, Maria Assunta; Cadeddu, Chiara; Ferriero, Anna Maria; Capizzi, Silvio; Ricciardi, Walter

    2012-01-01

    In the last few years, the need for public reporting of health outcomes has acquired great importance. The public release of performance results could be a tool for improving health care quality, and many attempts have been made to introduce public reporting programs within the health care context at different levels. It would be necessary to promote the introduction of a standardized set of outcome and performance measures in order to improve the quality of health care services and to make health care providers aware of the importance of transparency and accountability.

  15. Integrating Compact Constraint and Distance Regularization with Level Set for Hepatocellular Carcinoma (HCC) Segmentation on Computed Tomography (CT) Images

    NASA Astrophysics Data System (ADS)

    Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping

    2017-01-01

    This paper presents a variational level set approach to segmenting lesions with compact shapes on medical images. In this study, we address the segmentation of hepatocellular carcinomas, which usually have various shapes, variable intensities, and weak boundaries. An efficient compactness constraint, the isoperimetric constraint, is applied in this method. In addition, to ensure precise segmentation and stable movement of the level set, a distance regularization is also incorporated in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
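
    The isoperimetric compactness idea can be illustrated in a few lines (a hypothetical sketch, not the paper's implementation): for a region of area A and perimeter P, the ratio 4*pi*A/P^2 equals 1 for a perfect disk and drops for elongated or ragged shapes. On a pixel grid the crude face-count perimeter overestimates the true perimeter, so even a digital disk scores below 1, but the ordering compact > elongated is preserved, which is all a compactness prior needs.

```python
import numpy as np

# Isoperimetric compactness of a binary region: C = 4*pi*Area / Perimeter^2.
# Continuous maximum is 1 (a disk); the face-count perimeter used here biases
# digital values downward, but still ranks compact shapes above elongated ones.
def compactness(mask):
    m = mask.astype(int)
    area = m.sum()
    # crude perimeter estimate: count sign changes between neighboring pixels
    perim = (np.abs(np.diff(m, axis=0)).sum()
             + np.abs(np.diff(m, axis=1)).sum())
    return 4 * np.pi * area / perim**2

n = 401
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
disk = x**2 + y**2 < 0.5**2                      # compact region
bar = (np.abs(x) < 0.9) & (np.abs(y) < 0.05)     # elongated region

c_disk = compactness(disk)
c_bar = compactness(bar)
```

    A segmentation energy can penalize low compactness to discourage contours from leaking along weak boundaries into thin, ragged extensions.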

  16. A new medical image segmentation model based on fractional order differentiation and level set

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Huang, Shan; Xie, Feifei; Li, Lihong; Chen, Wensheng; Liang, Zhengrong

    2018-03-01

    Segmenting medical images is still a challenging task for both traditional local and global methods because of image intensity inhomogeneity. This paper makes two contributions: (i) a new hybrid model is proposed for medical image segmentation, built on fractional order differentiation, a level set description, and curve evolution; and (ii) three popular definitions of fractional order differentiation, Fourier-domain, Grünwald-Letnikov (G-L), and Riemann-Liouville (R-L), are investigated and compared through experimental results. Because fractional order differentiation enhances high-frequency features of images while preserving low-frequency features in a nonlinear manner, one of these definitions is used in the hybrid model to segment inhomogeneous images. The proposed hybrid model also integrates fractional order differentiation, fractional order gradient magnitude, and difference image information. The widely used Dice similarity coefficient is employed to evaluate the segmentation results quantitatively. First, experimental results demonstrated that only a slight difference exists among the Fourier-domain, G-L, and R-L definitions, which supports our selection of one of the three in the hybrid model. Second, further experiments compared our hybrid segmentation model with other existing segmentation models, and a noticeable gain was seen by the hybrid model in segmenting intensity-inhomogeneous images.
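
    The Grünwald-Letnikov definition is the easiest of the three to discretize directly. The sketch below (a generic 1D illustration, not the paper's 2D segmentation code) applies the G-L weight recurrence w_0 = 1, w_k = w_{k-1}(k - 1 - alpha)/k to a sampled signal; with alpha = 1 it reduces to a backward difference, and for f(x) = x the half-derivative should approach the analytic value 2*sqrt(x/pi).

```python
import numpy as np

# Grünwald-Letnikov (G-L) fractional derivative of order alpha for a
# uniformly sampled 1D signal: D^alpha f(x) ~ h^(-alpha) * sum_k w_k f(x - k h),
# with weights w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
def gl_fractional_derivative(f, alpha, h):
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):
        # convolution of the weights with the signal history up to index i
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return out

h = 0.01
x = np.arange(0, 1 + h / 2, h)
f = x.copy()

d_half = gl_fractional_derivative(f, 0.5, h)  # half-derivative of f(x) = x
d_one = gl_fractional_derivative(f, 1.0, h)   # should reduce to f'(x) = 1
```

    Intermediate orders (0 < alpha < 1) boost high frequencies less aggressively than a full derivative while attenuating low frequencies less than a full integral, which is the nonlinear frequency behavior the segmentation model exploits.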

  17. Ultrasonographic evaluation of equine tendons: a quantitative in vitro study of the effects of amplifier gain level, transducer-tilt, and transducer-displacement.

    PubMed

    van Schie, J T; Bakker, E M; van Weeren, P R

    1999-01-01

    The objective of the in vitro experiments described in this paper was to quantify the effects of several instrumental variables on the quantitative evaluation, by means of first-order gray-level statistics, of ultrasonographic images of equine tendons. The experiments were done on three isolated equine superficial digital flexor tendons mounted in a frame and submerged in a waterbath. Sections with normal tendon tissue, an acute lesion, or a chronic scar were selected. In these sections, the following experiments were done: 1) a gradual increase of total amplifier gain output subdivided into 12 equal steps; 2) a transducer tilt of plus or minus 3 degrees from perpendicular, in steps of 1 degree; and 3) a transducer displacement along, and perpendicular to, the tendon long axis, with 16 steps of 0.25 mm each. Transverse ultrasonographic images were collected, and in the regions of interest (ROI) first-order gray-level statistics were calculated to quantify the effects of each experiment. Some important observations were: 1) the total amplifier gain output has a substantial influence on the ultrasonographic image; for example, in the case of an acute lesion, a low gain setting results in an almost completely black image, whereas higher gain settings produce a marked "filling in" of the lesion; 2) the relative effects of tilting the transducer are substantial in normal tendon tissue (18%) and chronic scar (12%), whereas in an acute lesion the effect on the mean gray level is dramatic (40%); and 3) the relative effects of displacing the transducer are small in normal tendon tissue, but the mean gray level changes 7% in chronic scar and as much as 20% in an acute lesion. In general, slight variations in scanner settings and transducer handling can have considerable effects on the gray levels of the ultrasonographic image.
Furthermore, there is a strong indication that this quantitative method, insofar as it is based exclusively on first-order gray-level statistics, may not be discriminative enough to accurately assess the integrity of the tendon. The value of a quantitative evaluation based on first-order gray-level statistics for assessing the integrity of the equine tendon is therefore questionable.
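
    First-order gray-level statistics are simply statistics of the ROI's gray-level histogram, ignoring all spatial texture. The sketch below is a hypothetical illustration on synthetic patches, not the study's data: a dark acute-lesion-like ROI and a brighter normal-tissue-like ROI are each summarized by mean, standard deviation, skewness, and excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 8-bit "ultrasound" ROIs: a hypoechoic (dark) acute-lesion-like patch
# versus a brighter normal-tissue-like patch. Values are clipped to [0, 255].
lesion_roi = np.clip(rng.normal(30, 10, size=(64, 64)), 0, 255)
normal_roi = np.clip(rng.normal(120, 25, size=(64, 64)), 0, 255)

def first_order_stats(roi):
    # statistics of the gray-level distribution only; spatial layout is ignored
    g = roi.ravel()
    mean = g.mean()
    std = g.std()
    skewness = ((g - mean) ** 3).mean() / std**3
    kurt = ((g - mean) ** 4).mean() / std**4 - 3.0   # excess kurtosis
    return {"mean": mean, "std": std, "skew": skewness, "kurtosis": kurt}

lesion_stats = first_order_stats(lesion_roi)
normal_stats = first_order_stats(normal_roi)
```

    Because these numbers depend only on the histogram, a gain change that shifts or compresses gray levels moves every statistic at once, which is exactly the sensitivity to scanner settings the study documents.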

  18. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells that describes the interface is defined, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated on a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods; we observe a 2- to 6-fold efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface-interaction methods, shows about a 10-fold efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
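
    For contrast with the single-step approach, the sketch below implements the standard iterative reinitialization that such schemes replace (a generic textbook scheme, not the authors' algorithm): the PDE d(phi)/dtau = sign(phi0)(1 - |grad phi|) is marched to pseudo-steady state with Godunov upwinding, restoring |grad phi| = 1 while leaving the zero contour approximately fixed. The many sweeps this requires, each needing neighbor data, are exactly the repeated block-boundary exchanges the proposed forward-tracing algorithm avoids.

```python
import numpy as np

# Godunov upwind approximation of |grad phi|, switching one-sided
# differences according to the sign of the initial field.
def godunov_grad(phi, sgn, h):
    pad = np.pad(phi, 1, mode="edge")
    a_x = (phi - pad[:-2, 1:-1]) / h            # backward differences
    b_x = (pad[2:, 1:-1] - phi) / h             # forward differences
    a_y = (phi - pad[1:-1, :-2]) / h
    b_y = (pad[1:-1, 2:] - phi) / h
    pos = np.sqrt(np.maximum(np.maximum(a_x, 0) ** 2, np.minimum(b_x, 0) ** 2)
                  + np.maximum(np.maximum(a_y, 0) ** 2, np.minimum(b_y, 0) ** 2))
    neg = np.sqrt(np.maximum(np.minimum(a_x, 0) ** 2, np.maximum(b_x, 0) ** 2)
                  + np.maximum(np.minimum(a_y, 0) ** 2, np.maximum(b_y, 0) ** 2))
    return np.where(sgn > 0, pos, neg)

n = 101
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

phi0 = 3.0 * (X**2 + Y**2 - 0.25)      # zero contour: circle of radius 0.5,
phi = phi0.copy()                      # but not a signed-distance function
sgn = phi0 / np.sqrt(phi0**2 + h**2)   # smoothed sign of the initial field

# March d(phi)/dtau = sgn * (1 - |grad phi|) to pseudo-steady state.
dt = 0.5 * h
for _ in range(400):
    phi = phi + dt * sgn * (1.0 - godunov_grad(phi, sgn, h))

# Check: |grad phi| ~ 1 in the interior, zero contour still near radius 0.5.
interior = (np.abs(X) < 0.7) & (np.abs(Y) < 0.7)
mean_grad = float(godunov_grad(phi, sgn, h)[interior].mean())
interface_value = float(phi[75, 50])   # grid point at (x, y) = (0.5, 0.0)
```

    The 400 explicit pseudo-time sweeps above illustrate the cost profile: each sweep touches the whole band, whereas the single-step scheme reconstructs distances directly from the minimal set of interface cells.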

  19. Visualizing the Future: Technology Competency Development in Clinical Medicine, and Implications for Medical Education

    ERIC Educational Resources Information Center

    Srinivasan, Malathi; Keenan, Craig R.; Yager, Joel

    2006-01-01

    Objective: In this article, the authors ask three questions. First, what will physicians need to know in order to be effective in the future? Second, what role will technology play in achieving that high level of effectiveness? Third, what specific skill sets will physicians need to master in order to become effective? Method: Through three case…

  20. How Do Children Behave Regarding Their Birth Order in Dental Setting?

    PubMed Central

    Ghaderi, Faezeh; Fijan, Soleiman; Hamedani, Shahram

    2015-01-01

    Statement of the Problem: Prediction of a child's cooperation level in the dental setting is an important issue for a dentist selecting the proper behavior management method. Many psychological studies have emphasized the effect of birth order on patient behavior and personality; however, only a few studies have evaluated the effect of birth order on children's behavior in the dental setting. Purpose: This study was designed to evaluate the influence of children's ordinal position on their behavior in the dental setting. Materials and Method: A total of 158 children with at least one primary mandibular molar needing a class I restoration were selected. Children were classified by ordinal position: first, middle, or last child, as well as single child. A blinded examiner recorded the pain perception of children during injection based on the Visual Analogue Scale (VAS) and the Sound, Eye and Movement (SEM) scale. To assess the children's anxiety, the questionnaire known as the “Dental Subscale of the Children's Fear Survey Schedule” (CFSS-DS) was employed. Results: Single children were significantly less cooperative and more anxious than the other children (p<0.001). The middle children were significantly more cooperative than children in the other ordinal positions (p<0.001). Conclusion: Single children may behave less cooperatively in the dental setting. Birth order should therefore be considered when predicting a child's behavior for behavioral management. PMID:26636121

  1. [The Danish debate on priority setting in medicine - characteristics and results].

    PubMed

    Pornak, S; Meyer, T; Raspe, H

    2011-10-01

    Priority setting in medicine helps to achieve a fair and transparent distribution of health-care resources. The German discussion about priority setting is still in its infancy and may benefit from other countries' experiences. This paper analyses the Danish priority setting debate in order to stimulate the German discussion. The methods used were a literature analysis, a document analysis, and expert interviews. The Danish debate about priority setting in medicine began in the 1970s, when a government committee was constituted to evaluate health-care priorities at the national level. In the 1980s a broader debate arose in politics, ethics, medicine, and health economics. The discussion reached a climax in the 1990s, when many local activities, always involving the public, were initiated. Some Danish counties tried to implement priority setting in the daily routine of health care. The Council of Ethics was a major player in the debate of the 1990s and published a detailed statement on priority setting in 1996. With the new century the debate about priority setting seemed to have come to an end, but in 2006 the Technology Council and the Danish Regions resumed the discussion, and in 2009 the Medical Association called for a broad debate in order to achieve equity among all patients. The long-lasting Danish debate on priority setting has had only very limited practical consequences for health care. The main problems seem to have been the missing effort to bundle the various local initiatives at a national level and the lack of powerful players to put the results of the discussion into practice. Nevertheless, today the attitude towards priority setting is predominantly positive and even politicians talk freely about it. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.

    PubMed

    Ciofolo, Cybèle; Barillot, Christian

    2009-06-01

    We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.

  3. Small Engine Technology (SET) Task 24 Business and Regional Aircraft System Studies

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth

    2003-01-01

    This final report has been prepared by Honeywell Engines & Systems, Phoenix, Arizona, a unit of Honeywell International Inc., documenting work performed during the period June 1999 through December 1999 for the National Aeronautics and Space Administration (NASA) Glenn Research Center, Cleveland, Ohio, under the Small Engine Technology (SET) Program, Contract No. NAS3-27483, Task Order 24, Business and Regional Aircraft System Studies. The work performed under SET Task 24 consisted of evaluating the noise reduction benefits compared to the baseline noise levels of representative 1992 technology aircraft, obtained by applying different combinations of noise reduction technologies to five business and regional aircraft configurations. This report focuses on the selection of the aircraft configurations and noise reduction technologies, the prediction of noise levels for those aircraft, and the comparison of the noise levels with those of the baseline aircraft.

  4. State-dependent impulsive models of integrated pest management (IPM) strategies and their dynamic consequences.

    PubMed

    Tang, Sanyi; Cheke, Robert A

    2005-03-01

    A state-dependent impulsive model is proposed for integrated pest management (IPM). IPM combines biological, mechanical, and chemical tactics to reduce pest numbers to tolerable levels after a pest population has reached its economic threshold (ET). The complete expression of an orbitally asymptotically stable periodic solution to the model, with a maximum value no larger than the given ET, is presented; its existence implies that pests can be controlled at or below their ET levels. We also prove that there is no periodic solution of order greater than or equal to three, except in one special case, by using the properties of the Lambert W function and the Poincaré map. Moreover, we show that the existence of an order two periodic solution implies the existence of an order one periodic solution. Various positive invariant sets and attractors of this impulsive semi-dynamical system are described and discussed. In particular, several horseshoe-like attractors, whose interiors can simultaneously contain stable order 1 periodic solutions and order 2 periodic solutions, are found, and the interior structure of the horseshoe-like attractors is discussed. Finally, the largest invariant set and sufficient conditions which guarantee the global orbital and asymptotic stability of the order 1 periodic solution in the meaningful domain for the system are given using a Lyapunov function. Our results show that, in theory, a pest can be controlled such that its population size is no larger than its ET by applying control measures impulsively once, twice, or at most a finite number of times, or according to a periodic regime. Moreover, our theoretical work suggests how IPM strategies could be used to alter the levels of the ET in the farmers' favour.
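
    The flavor of an order 1 periodic solution can be reproduced with a drastically simplified sketch (logistic growth with a proportional cull, not the paper's model): each time the pest population reaches the ET, an impulsive intervention removes a fixed fraction, and after the first intervention the interval between interventions settles to a constant, here ln(3.5)/r for the chosen parameters.

```python
import numpy as np

# State-dependent impulsive control sketch: logistic pest growth with an
# impulsive proportional cull applied whenever the population hits the ET.
r, K = 1.0, 100.0       # logistic growth rate and carrying capacity
ET = 60.0               # economic threshold
kill = 0.5              # fraction of pests removed per intervention
dt = 0.001

x, t = 10.0, 0.0
spray_times = []
while len(spray_times) < 6 and t < 100:
    x += dt * r * x * (1 - x / K)   # explicit Euler step of logistic growth
    t += dt
    if x >= ET:
        spray_times.append(t)
        x *= (1 - kill)             # impulsive cull back below the ET

# Intervals between interventions; after the first spray the state always
# restarts at (1 - kill) * ET, so the orbit is order 1 periodic.
intervals = np.diff(spray_times)
```

    Analytically, the period is the logistic travel time from 30 back to 60: (1/r) * ln((60/40) / (30/70)) = ln(3.5), which the simulated intervals reproduce to within the Euler step error.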

  5. A Review of Lightweight Thread Approaches for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Adrian; Pena, Antonio J.; Seo, Sangmin

    High-level, directive-based solutions are becoming the programming models (PMs) of choice for multi-/many-core architectures. Several solutions relying on operating system (OS) threads work well with a moderate number of cores. However, exascale systems will spawn hundreds of thousands of threads in order to exploit their massively parallel architectures, and conventional OS threads are too heavy for that purpose. Several lightweight thread (LWT) libraries have recently appeared, offering lighter mechanisms to tackle massive concurrency. In order to examine the suitability of LWTs in high-level runtimes, we develop a set of microbenchmarks consisting of commonly found patterns in current parallel codes. Moreover, we study the semantics offered by several LWT libraries in order to expose the similarities between different LWT application programming interfaces. This study reveals that a reduced set of LWT functions can be sufficient to cover the common parallel code patterns, and that those LWT libraries perform better than OS-thread-based solutions in cases of task and nested parallelism, which are becoming more popular with new architectures.

  6. Spherical Harmonics Analysis of the ECMWF Global Wind Fields at the 10-Meter Height Level During 1985: A Collection of Figures Illustrating Results

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Nishihama, Masahiro

    1997-01-01

    Half-daily global wind speeds in the east-west (u) and north-south (v) directions at the 10-meter height level were obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) data set of global analyses. The data set covered the period January 1985 to January 1995. A spherical harmonic expansion to degree and order 50 was used to perform harmonic analysis of the east-west (u) and north-south (v) velocity field components. The resulting wind field is displayed, together with the residual of the fit, at a particular time, and the contribution of particular coefficients is shown. The time variability of the coefficients up to degree and order 3 is presented, with corresponding power spectrum plots. Time series analyses were also applied to the power associated with degrees 0-10; the results are included.

  7. Separation of Benign and Malicious Network Events for Accurate Malware Family Classification

    DTIC Science & Technology

    2015-09-28

    use Kullback-Leibler (KL) divergence [15] to measure the information ... related work in an important aspect concerning the order of events. We use n-grams to capture the order of events, which exposes richer information about ... DISCUSSION: Using n-grams on higher-level network events helps understand the underlying operation of the malware, and provides a good feature set

  8. Establishing the budget for the United States Government for fiscal year 2014 and setting forth appropriate budgetary levels for fiscal years 2015 through 2023.

    THOMAS, 113th Congress

    Rep. Ryan, Paul [R-WI-1]

    2013-03-15

    Senate - 10/16/2013 Ordered held at desk by unanimous consent, pursuant to the order of 10/16/2013. Notes: Provisions of this budget resolution were included in H.J.RES.59. Tracker: This bill has the status Resolving Differences.

  9. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    NASA Astrophysics Data System (ADS)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.
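
    The simplest sign-based discretisation can be sketched as follows (a generic illustration, not one of the four schemes analyzed in the paper): sample a smooth stationary Gaussian field on a grid with mesh-size well below the correlation length, and mark every lattice edge whose endpoints have opposite signs as crossing the nodal set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
n, sigma = 256, 8.0

# Smooth stationary Gaussian field: Gaussian-filtered white noise, so the
# correlation length is of order sigma pixels, well above the unit mesh-size.
field = gaussian_filter(rng.normal(size=(n, n)), sigma)
field -= field.mean()

# Mark lattice edges whose endpoints have opposite signs: these edges
# discretise the nodal lines of the field.
signs = np.sign(field)
crossings_x = signs[1:, :] != signs[:-1, :]
crossings_y = signs[:, 1:] != signs[:, :-1]
n_crossing_edges = int(crossings_x.sum() + crossings_y.sum())

# Fraction of positive excursion area; near 1/2 by sign symmetry.
pos_fraction = float((field > 0).mean())
```

    The mesh-size requirement in the paper is visible here: if the grid spacing approached the correlation length, sign changes could be missed between samples and the discretised nodal set would no longer faithfully represent the true one.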

  10. The outcome of trauma patients with do-not-resuscitate orders.

    PubMed

    Matsushima, Kazuhide; Schaefer, Eric W; Won, Eugene J; Armen, Scott B

    2016-02-01

    Institutional variation in outcome of patients with do-not-resuscitate (DNR) orders has not been well described in the setting of trauma. The purpose of this study was to assess the impact of trauma center designation on outcome of patients with DNR orders. A statewide trauma database (Pennsylvania Trauma Outcome Study) was used for the analysis. Characteristics of patients with DNR orders were compared between state-designated level 1 and 2 trauma centers. Inhospital mortality and major complication rates were compared using hierarchical logistic regression models that included a random effect for trauma centers. We adjusted for a number of potential confounders and allowed for nonlinearity in injury severity score and age in these models. A total of 106,291 patients (14 level 1 and 11 level 2 trauma centers) were identified in the Pennsylvania Trauma Outcome Study database between 2007 and 2011. We included 5953 patients with DNR orders (5.6%). Although more severely injured patients with comorbid disease were made DNR in level 1 trauma centers, trauma center designation level was not a significant factor for inhospital mortality of patients with DNR orders (odds ratio, 1.33; 95% confidence interval, 0.81-2.18; P = 0.26). Level 1 trauma centers were significantly associated with a higher rate of major complications (odds ratio, 1.75; 95% confidence interval, 1.11-2.75; P = 0.016). Inhospital mortality of patients with DNR orders was not significantly associated with trauma designation level after adjusting for case mix. More aggressive treatment or other unknown factors may have resulted in a significantly higher complication rate at level 1 trauma centers. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
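
The mid-point insertion strategy described above lends itself to a compact sketch. The function below is illustrative only; the names and the acceptance-rate threshold are assumptions, not the authors' code. It scans each temperature gap and inserts a mid-point wherever the measured replica-exchange acceptance rate is too low:

```python
def insert_midpoints(temps, acc_rates, threshold=0.2):
    """Insert a mid-point temperature into every gap whose measured
    replica-exchange acceptance rate falls below `threshold`.

    temps     -- sorted list of n temperatures
    acc_rates -- acceptance rate of the swap between temps[i] and
                 temps[i+1], list of length n-1
    Returns the new (still sorted) temperature set.
    """
    new_temps = [temps[0]]
    for t_lo, t_hi, rate in zip(temps, temps[1:], acc_rates):
        if rate < threshold:
            new_temps.append(0.5 * (t_lo + t_hi))  # mid-point insertion
        new_temps.append(t_hi)
    return new_temps
```

In the adaptive scheme the acceptance rates would be re-measured after each insertion round, and the extra replicas rebalanced across GPUs.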

  12. Using the level set method in slab detachment modeling

    NASA Astrophysics Data System (ADS)

    Hillebrand, B.; Geenen, T.; Spakman, W.; van den Berg, A. P.

    2012-04-01

    Slab detachment plays an important role in the dynamics of several regions in the world such as the Mediterranean-Carpathian region and the Anatolia-Aegean Region. It is therefore important to gain better insights in the various aspects of this process by further modeling of this phenomenon. In this study we model slab detachment using a visco-plastic composite rheology consisting of diffusion, dislocation and Peierls creep. In order to gain more control over this visco-plastic composite rheology, as well as some deterministic advantages, the models presented in this study make use of the level set method (Osher and Sethian J. Comp. Phys., 1988). The level set method is a computational method to track interfaces. It works by creating a signed distance function which is zero at the interface of interest which is then advected by the flow field. This does not only allow one to track the interface but also to determine on which side of the interface a certain point is located since the level set function is determined in the entire domain and not just on the interface. The level set method is used in a wide variety of scientific fields including geophysics. In this study we use the level set method to keep track of the interface between the slab and the mantle. This allows us to determine more precisely the moment and depth of slab detachment. It also allows us to clearly distinguish the mantle from the slab and have therefore more control over their different rheologies. We focus on the role of Peierls creep in the slab detachment process and on the use of the level set method in modeling this process.
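
As the abstract notes, a level set is a signed distance function that is zero on the interface and is advected by the flow field; its sign tells which side of the interface a point lies on. A minimal 2D sketch of these two ingredients (illustrative only, not the authors' solver), using a circular interface and first-order upwind advection with periodic boundaries:

```python
import numpy as np

def circle_level_set(X, Y, cx, cy, r):
    # Signed distance to a circle: negative inside the interface,
    # zero on it, positive outside.
    return np.sqrt((X - cx)**2 + (Y - cy)**2) - r

def advect_upwind(phi, u, v, dx, dt):
    # One first-order upwind step of  d(phi)/dt + u . grad(phi) = 0.
    # np.roll gives periodic boundaries; a real solver would use
    # proper boundary conditions and a higher-order scheme.
    dpx = np.where(u > 0, (phi - np.roll(phi, 1, axis=1)) / dx,
                          (np.roll(phi, -1, axis=1) - phi) / dx)
    dpy = np.where(v > 0, (phi - np.roll(phi, 1, axis=0)) / dx,
                          (np.roll(phi, -1, axis=0) - phi) / dx)
    return phi - dt * (u * dpx + v * dpy)

# phi < 0 means "inside" (e.g. in the slab), phi > 0 "in the mantle".
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x)
phi = circle_level_set(X, Y, 0.0, 0.0, 0.5)
```

Because phi is defined over the whole domain, checking which material occupies a point reduces to checking the sign of phi there, which is what gives the method its deterministic advantages for tracking the slab-mantle interface.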

  13. Performance assessment for the disposal of low-level waste in the 200 West Area Burial Grounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M.I.; Khaleel, R.; Rittmann, P.D.

    1995-06-01

    This document reports the findings of a performance assessment (PA) analysis for the disposal of solid low-level radioactive waste (LLW) in the 200 West Area Low-Level Waste Burial Grounds (LLBG) in the northwest corner of the 200 West Area of the Hanford Site. This PA analysis is required by US Department of Energy (DOE) Order 5820.2A (DOE 1988a) to demonstrate that a given disposal practice is in compliance with a set of performance objectives quantified in the order. These performance objectives are applicable to the disposal of DOE-generated LLW at any DOE-operated site after the finalization of the order in September 1988. At the Hanford Site, DOE, Richland Operations Office (RL) has issued a site-specific supplement to DOE Order 5820.2A, DOE-RL 5820.2A (DOE 1993), which provides additional performance objectives that must be satisfied.

  14. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  15. 78 FR 56249 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-12

    ... executing certain trades that either add liquidity to or remove liquidity from the Exchange's order book in... adopted its fees, it set its fee levels appropriate to the start-up nature of the Exchange's new equities... to perform at a high level of responsiveness and efficiency. \\5\\ Securities Exchange Act Release No...

  16. Terminology modeling for an enterprise laboratory orders catalog.

    PubMed

    Zhou, Li; Goldberg, Howard; Pabbathi, Deepika; Wright, Adam; Goldman, Debora S; Van Putten, Cheryl; Barley, Amanda; Rocha, Roberto A

    2009-11-14

    Laboratory test orders are used in a variety of clinical information systems at Partners HealthCare. At present, each site at Partners manages its own set of laboratory orders with locally defined codes. Our current plan is to implement an enterprise catalog, where laboratory test orders are mapped to reference terminologies and codes from different sites are mapped to each other. This paper describes the terminology modeling effort that preceded the implementation of the enterprise laboratory orders catalog. In particular, we present our experience in adapting HL7's "Common Terminology Services 2 - Upper Level Class Model" as a terminology metamodel for guiding the development of fully specified laboratory orders and related services.

  17. Terminology Modeling for an Enterprise Laboratory Orders Catalog

    PubMed Central

    Zhou, Li; Goldberg, Howard; Pabbathi, Deepika; Wright, Adam; Goldman, Debora S.; Van Putten, Cheryl; Barley, Amanda; Rocha, Roberto A.

    2009-01-01

    Laboratory test orders are used in a variety of clinical information systems at Partners HealthCare. At present, each site at Partners manages its own set of laboratory orders with locally defined codes. Our current plan is to implement an enterprise catalog, where laboratory test orders are mapped to reference terminologies and codes from different sites are mapped to each other. This paper describes the terminology modeling effort that preceded the implementation of the enterprise laboratory orders catalog. In particular, we present our experience in adapting HL7’s “Common Terminology Services 2 – Upper Level Class Model” as a terminology metamodel for guiding the development of fully specified laboratory orders and related services. PMID:20351950

  18. Using lean methodology to improve efficiency of electronic order set maintenance in the hospital.

    PubMed

    Idemoto, Lori; Williams, Barbara; Blackmore, Craig

    2016-01-01

    Order sets, a series of orders focused around a diagnosis, condition, or treatment, can reinforce best practice, help eliminate outdated practice, and provide clinical guidance. However, order sets require regular updates as evidence and care processes change. We undertook a quality improvement intervention applying lean methodology to create a systematic process for order set review and maintenance. Root cause analysis revealed challenges with unclear prioritization of requests, lack of coordination between teams, and lack of communication between producers and requestors of order sets. In March of 2014, we implemented a systematic, cyclical order set review process, with a set schedule, defined responsibilities for various stakeholders, formal meetings and communication between stakeholders, and transparency of the process. We first identified and deactivated 89 order sets which were infrequently used. Between March and August 2014, 142 order sets went through the new review process. Processing time for the build duration of order sets decreased from a mean of 79.6 to 43.2 days (p<.001, CI=22.1, 50.7). Applying Lean production principles to the order set review process resulted in significant improvement in processing time and increased quality of orders. As use of order sets and other forms of clinical decision support increase, regular evidence and process updates become more critical.

  19. dParFit: A computer program for fitting diatomic molecule spectral data to parameterized level energy expressions

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.

    2017-01-01

    This paper describes FORTRAN program dParFit, which performs least-squares fits of diatomic molecule spectroscopic data involving one or more electronic states and one or more isotopologues, to parameterized expressions for the level energies. The data may consist of any combination of microwave, infrared or electronic vibrotational bands, fluorescence series or binding energies (from photo-association spectroscopy). The level energies for each electronic state may be described by one of: (i) band constants {Gv, Bv, Dv, …} for each vibrational level, (ii) generalized Dunham expansions, (iii) pure near-dissociation expansions (NDEs), (iv) mixed Dunham/NDE expressions, or (v) individual term values for each distinct level of each isotopologue. Different representations may be used for different electronic states and/or for different types of constants in a given fit (e.g., Gv and Bv may be represented one way and centrifugal distortion constants another). The effect of Λ-doubling or 2Σ splittings may be represented either by band constants (qvB or γvB, qvD or γvD, etc.) for each vibrational level of each isotopologue, or by using power series expansions in (v + 1/2) to represent those constants. Fits to Dunham or NDE expressions automatically incorporate normal first-order semiclassical mass scaling to allow combined analyses of multi-isotopologue data. In addition, dParFit may fit to determine atomic-mass-dependent terms required to account for breakdown of the Born-Oppenheimer and first-order semiclassical approximations. In any of these types of fits, one or more subsets of these parameters for one or more of the electronic states may be held fixed, while a limited parameter set is varied. The program can also use a set of read-in constants to make predictions and calculate deviations [ycalc − yobs] for any chosen input data set, or to generate predictions of arbitrary data sets.
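
The band-constant representation (i) corresponds to the standard term-value expansion E(v, J) = Gv + Bv·J(J+1) − Dv·[J(J+1)]², truncated here after the first centrifugal distortion constant. A one-function sketch of that expression (not part of dParFit itself):

```python
def level_energy(Gv, Bv, Dv, J):
    """Rovibrational term value from band constants:

        E(v, J) = Gv + Bv*J(J+1) - Dv*[J(J+1)]**2

    Higher-order distortion constants (Hv, Lv, ...) are truncated for
    brevity; all quantities are in the same energy units (e.g. cm^-1).
    """
    x = J * (J + 1)
    return Gv + Bv * x - Dv * x * x
```

A fit in this representation simply adjusts {Gv, Bv, Dv} for each vibrational level to minimize the squared deviations between such predicted term values and the observed transition energies.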

  20. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
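
The point about set-point ordering can be illustrated with a toy simulation: under a linear systematic drift, running all low-level points first and all high-level points last (the fastest sequence) biases the estimated factor effect, while re-ordering the same set points breaks the confounding. The numbers below (true effect 1.0, drift 0.05 per run) are assumed purely for illustration:

```python
import random

def estimate_effect(order, drift_rate=0.05):
    """Estimate the effect of a two-level factor (true effect = 1.0 in
    this toy model) when a linear systematic drift of `drift_rate` per
    run is superimposed on the measurements."""
    runs = [level * 1.0 + drift_rate * t for t, level in enumerate(order)]
    hi = [y for y, lv in zip(runs, order) if lv == 1]
    lo = [y for y, lv in zip(runs, order) if lv == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Fastest-to-run sequence: all low set points first, then all high.
# The drift is fully confounded with the factor (estimate biased high).
sequential = [0] * 10 + [1] * 10

# Randomizing the run order of the same set points largely decouples
# the drift from the factor effect.
random.seed(0)
randomized = sequential[:]
random.shuffle(randomized)
```

With these assumed numbers the sequential order yields an estimate of 1.5 (a 50% bias), whereas an order that interleaves the levels, such as alternating them, brings the estimate close to the true value of 1.0.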

  1. Ordering of the O-O stretching vibrational frequencies in ozone

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III

    1989-01-01

    The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high-level methods. The first systematic electron correlation method based on a single reference configuration to solve this problem is the coupled-cluster single and double excitation (CCSD) method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.

  2. Your Place or Mine? Evaluating the Perspectives of the Practical Legal Training Work Experience Placement through the Eyes of the Supervisors and the Students

    ERIC Educational Resources Information Center

    Spencer, Rachel

    2007-01-01

    In order to qualify as a lawyer in Australia, each law graduate must complete a recognised practical qualification. In 2002, the Australasian Professional Legal Education Council (APLEC) published a recommended set of competency standards which all entry level lawyers should meet in order to be eligible to be admitted as a legal practitioner. Upon…

  3. Accurate segmentation framework for the left ventricle wall from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.

    2013-10-01

    We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of LV wall to track the evolution of the parametric deformable models control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets in 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and AD value of 2.16±0.60 compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC; and 2.86±1.35 and 5.72±4.70 for AD, respectively.

  4. Obsessive, compulsive, and conscientious? The relationship between OCPD and personality traits.

    PubMed

    Mike, Anissa; King, Hannah; Oltmanns, Thomas F; Jackson, Joshua J

    2017-12-22

    Obsessive-compulsive personality disorder (OCPD) is defined as being overly controlling, rigid, orderly, and perfectionistic. At a definitional level, OCPD would appear to be highly related to the trait of Conscientiousness. The current study attempts to disentangle this relationship by examining the relationship at a facet level using multiple forms of OCPD assessment and using multiple reports of OCPD and personality. In addition, the relationship between OCPD and each Big Five trait was examined. The study relied on a sample of 1,630 adults who completed self-reports of personality and OCPD. Informants and interviewers also completed reports on the targets. Bifactor models were constructed in order to disentangle variance attributable to each facet and its general factors. Across four sets of analyses, individuals who scored higher on OCPD tended to be more orderly and achievement striving, and more set in their ways, but less generally conscientious. OCPD was also related to select facets under each Big Five trait. Notably, findings indicated that OCPD has a strong interpersonal component and that OCPD tendencies may interfere with one's relationships with others. Findings suggest that OCPD's relationship with personality can be more precisely explained through its relationships with specific tendencies rather than general, higher-order traits. © 2017 Wiley Periodicals, Inc.

  5. Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.

    PubMed

    Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland

    2009-06-09

    The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
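
The machinery underneath such schemes is an incomplete pivoted Cholesky decomposition with a drop threshold: lowering the threshold enlarges the factor and tightens the approximation, which is what produces the hierarchy of auxiliary basis sets described above. A generic numerical sketch of thresholded pivoted Cholesky (not the authors' implementation, and applied here to a plain matrix rather than to electron repulsion integrals):

```python
import numpy as np

def pivoted_cholesky(M, tau=1e-8):
    """Incomplete pivoted Cholesky decomposition of a symmetric
    positive semi-definite matrix M.  Vectors are accumulated until the
    largest remaining diagonal element of the residual falls below the
    threshold tau, giving M ≈ L @ L.T.  A smaller tau yields a larger,
    more accurate factor."""
    R = M.astype(float).copy()      # residual matrix
    n = R.shape[0]
    cols = []
    for _ in range(n):
        d = np.diag(R)
        p = int(np.argmax(d))       # pivot: largest remaining diagonal
        if d[p] <= tau:
            break                   # converged to the requested accuracy
        l = R[:, p] / np.sqrt(d[p])
        cols.append(l)
        R = R - np.outer(l, l)      # deflate the residual
    return np.array(cols).T if cols else np.zeros((n, 0))
```

For a full-rank matrix with a tiny threshold this reproduces an exact Cholesky-type factorization; for a (numerically) low-rank matrix it stops early, returning fewer columns, which is the analogue of a compact auxiliary basis.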

  6. Optimizing Artillery Fires at the Brigade Level

    DTIC Science & Technology

    2017-06-09

    order to determine changes needed to meet the demands placed upon units based on their mission set. The structured progression of increased readiness in...Brigade Combat Team (BCT). Through the construct of Doctrine, Organizational, Training, Material, Leadership and Education, Personnel, and Facilities...

  7. A supplier selection and order allocation problem with stochastic demands

    NASA Astrophysics Data System (ADS)

    Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua

    2011-08-01

    We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.

  8. Use of Order Sets in Inpatient Computerized Provider Order Entry Systems: A Comparative Analysis of Usage Patterns at Seven Sites

    PubMed Central

    Wright, Adam; Feblowitz, Joshua C.; Pang, Justine E.; Carpenter, James D.; Krall, Michael A.; Middleton, Blackford; Sittig, Dean F.

    2012-01-01

    Background Many computerized provider order entry (CPOE) systems include the ability to create electronic order sets: collections of clinically-related orders grouped by purpose. Order sets promise to make CPOE systems more efficient, improve care quality and increase adherence to evidence-based guidelines. However, the development and implementation of order sets can be expensive and time-consuming and limited literature exists about their utilization. Methods Based on analysis of order set usage logs from a diverse purposive sample of seven sites with commercially- and internally-developed inpatient CPOE systems, we developed an original order set classification system. Order sets were categorized across seven non-mutually exclusive axes: admission/discharge/transfer (ADT), perioperative, condition-specific, task-specific, service-specific, convenience, and personal. In addition, 731 unique subtypes were identified within five axes: four in ADT, three in perioperative, 144 in condition-specific, 513 in task-specific, and 67 in service-specific. Results Order sets (n=1,914) were used a total of 676,142 times at the participating sites during a one-year period. ADT and perioperative order sets accounted for 27.6% and 24.2% of usage respectively. Peripartum/labor, chest pain/Acute Coronary Syndrome/Myocardial Infarction and diabetes order sets accounted for 51.6% of condition-specific usage. Insulin, angiography/angioplasty and arthroplasty order sets accounted for 19.4% of task-specific usage. Emergency/trauma, Obstetrics/Gynecology/Labor Delivery and anesthesia accounted for 32.4% of service-specific usage. Overall, the top 20% of order sets accounted for 90.1% of all usage. Additional salient patterns are identified and described.
Vendors and institutional developers should identify high-value order set types through concrete data analysis in order to optimize the resources devoted to development and implementation. PMID:22819199
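
The "top 20% of order sets accounted for 90.1% of usage" observation is a simple concentration measure that any site can compute from its own usage logs. A minimal sketch (hypothetical helper name, illustrative counts):

```python
def top_share(usage_counts, fraction=0.2):
    """Fraction of total usage attributable to the top `fraction` of
    order sets -- a simple Pareto-style concentration measure."""
    counts = sorted(usage_counts, reverse=True)
    k = max(1, int(len(counts) * fraction))
    return sum(counts[:k]) / sum(counts)
```

Sorting each order set's annual usage count and summing the top slice is enough to see whether one's own catalog shows the same heavy concentration, which in turn indicates where maintenance effort is best spent.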

  9. The development of a contract quality assurance program within the Virginia Department of Highways.

    DOT National Transportation Integrated Search

    1989-01-01

    In order to assure the quality of construction products and processes, the Virginia Department of Transportation has established three levels of construction control. First, contractors themselves provide oversight and quality control as set out in t...

  10. An efficient mass-preserving interface-correction level set/ghost fluid method for droplet suspensions under depletion forces

    NASA Astrophysics Data System (ADS)

    Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca

    2018-01-01

    Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method; global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation, which is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.

  11. The torsional energy profile of 1,2-diphenylethane: an ab initio study

    NASA Astrophysics Data System (ADS)

    Ivanov, Petko M.

    1997-08-01

    Ab initio molecular orbital calculations were carried out for the antiperiplanar (ap), the synclinal (sc), phenyl/phenyl eclipsed (syn barrier), and phenyl/H eclipsed (ap/sc barrier) conformations of 1,2-diphenylethane, and the energy ordering of conformations thus obtained was compared with the torsional energy profile estimated with the MM2 and MM3 molecular mechanics force fields. The basis set effect on the results was studied at the restricted Hartree-Fock (RHF) self-consistent field (SCF) level of theory, and the electron correlation energies were corrected by the second-order (MP2) Møller-Plesset perturbation treatment using the 6-31G* basis set. The performance of a DFT model (Becke-style three-parameter hybrid method using the correlation functional of Lee, Yang and Parr, B3LYP) was also tested to assess relative energies of the conformations using two basis sets, 6-31G* and 6-311G**. The RHF and B3LYP results are qualitatively the same, while the MP2 calculations produced significant differences in the geometries and reversed the order of preference for the antiperiplanar and the synclinal conformations.

  12. A fuzzy multi-objective model for capacity allocation and pricing policy of provider in data communication service with different QoS levels

    NASA Astrophysics Data System (ADS)

    Pan, Wei; Wang, Xianjia; Zhong, Yong-guang; Yu, Lean; Jie, Cao; Ran, Lun; Qiao, Han; Wang, Shouyang; Xu, Xianhao

    2012-06-01

    Data communication service has an important influence on e-commerce. The key challenge for users is, ultimately, to select a suitable provider. In this article, however, we focus not on that aspect but on the viewpoint and decision-making of providers for order allocation and pricing policy when orders exceed service capacity. This is a multiple-criteria decision-making problem involving criteria such as profit and cancellation ratio. In realistic situations, moreover, much of the input information is uncertain, which makes the problem very complex in a real-life environment. In this situation, fuzzy set theory is the best tool for solving the problem. Our fuzzy model is formulated in such a way as to simultaneously consider the imprecision of information, price-sensitive demand, stochastic variables, cancellation fees and the general membership function. For solving the problem, a new fuzzy programming approach is developed. Finally, a numerical example is presented to illustrate the proposed method. The results show that it is effective for determining the suitable order set and pricing policy of the provider in data communication service with different quality of service (QoS) levels.

  13. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
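
One widely used extrapolation of the kind discussed here is the two-point inverse-cubic formula for the correlation energy, E(n) = E_CBS + A/n³, where n is the cardinal number of the basis set. The sketch below implements that common (Helgaker-style) scheme; it is illustrative and not necessarily the specific protocol advocated in the review:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point inverse-cubic extrapolation of the correlation energy.

    Assumes E(n) = E_CBS + A / n**3 and eliminates A from two
    calculations with basis-set cardinal numbers x and y
    (e.g. x = 3 for triple-zeta, y = 4 for quadruple-zeta):

        E_CBS = (x**3 * E_x - y**3 * E_y) / (x**3 - y**3)
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)
```

Given, say, cc-pVTZ and cc-pVQZ correlation energies, `cbs_two_point(e_tz, 3, e_qz, 4)` returns the estimated complete-basis-set limit; the single-level schemes examined in the review go one step further and extrapolate from only one raw energy.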

  14. High-level ab initio enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan.

    PubMed

    Feller, David; Simmie, John M

    2012-11-29

    A high-level ab initio thermochemical technique, known as the Feller-Petersen-Dixon method, is used to calculate the total atomization energies and hence the enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan itself as a means of rationalizing significant discrepancies in the literature. In order to avoid extremely large standard coupled cluster theory calculations, the explicitly correlated CCSD(T)-F12b variation was used with basis sets up to cc-pVQZ-F12. After extrapolating to the complete basis set limit and applying corrections for core/valence, scalar relativistic, and higher order effects, the final Δ(f)H° (298.15 K) values, with the available experimental values in parentheses are furan -34.8 ± 3 (-34.7 ± 0.8), 2-methylfuran -80.3 ± 5 (-76.4 ± 1.2), and 2,5-dimethylfuran -124.6 ± 6 (-128.1 ± 1.1) kJ mol(-1). The theoretical results exhibit a compelling internal consistency.

  15. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    PubMed

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  16. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klippenstein, Stephen J.; Harding, Lawrence B.; Ruscic, Branko

    Here, the fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core–valence, relativistic, and diagonal Born–Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0–1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  17. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species

    DOE PAGES

    Klippenstein, Stephen J.; Harding, Lawrence B.; Ruscic, Branko

    2017-07-31

    Here, the fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core–valence, relativistic, and diagonal Born–Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0–1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  18. Prioritizing and optimizing sustainable measures for food waste prevention and management.

    PubMed

    Cristóbal, Jorge; Castellani, Valentina; Manfredi, Simone; Sala, Serenella

    2018-02-01

    Food waste has gained prominence in the European political debate thanks to the recent Circular Economy package. To date, the waste hierarchy introduced by the Waste Framework Directive has been the rule followed to prioritize food waste prevention and management measures according to environmental criteria. But when other criteria, such as economic ones, are considered alongside the environmental one, other tools are needed for prioritization and optimization. This paper addresses the situation in which a decision-maker has to design a food waste prevention programme, given limited economic resources, in order to achieve the highest environmental impact prevention along the whole food life cycle. A methodology using Life Cycle Assessment and mathematical programming is proposed, and its capabilities are shown through a case study. Results show that the order established in the waste hierarchy is generally followed. The proposed methodology proved especially helpful in identifying "quick wins" - measures that should always be prioritized since they avoid a high environmental impact at a low cost. In addition, in order to aggregate the environmental scores related to a variety of impact categories, different weighting sets were proposed. In general, results show that the relevance of the weighting set in the prioritization of the measures appears to be limited. Finally, the correlation between reducing food waste generation and reducing environmental impact along the Food Supply Chain has been studied. Results highlight that when planning food waste prevention strategies, it is important to set targets at the level of environmental impact rather than at the level of avoided food waste generation (in mass).

  19. An assessment of priority setting process and its implication on availability of emergency obstetric care services in Malindi District, Kenya.

    PubMed

    Nyandieka, Lilian Nyamusi; Kombe, Yeri; Ng'ang'a, Zipporah; Byskov, Jens; Njeru, Mercy Karimi

    2015-01-01

    In spite of the critical role of Emergency Obstetric Care (EmOC) in treating complications arising from pregnancy and childbirth, very few facilities in Kenya are equipped to offer this service. In Malindi, availability of EmOC services does not meet the UN recommended levels of at least one comprehensive and four basic EmOC facilities per 500,000 population. This study was conducted to assess the priority setting process and its implications for the availability, access, and use of EmOC services at the district level. A qualitative study was conducted at both health facility and community levels. Triangulation of data sources and methods was employed, with document reviews, in-depth interviews, and focus group discussions conducted with health personnel, facility committee members, stakeholders who offer and/or support maternal health services and programmes, and community members as end users. Data were thematically analysed. Limitations were observed in the extent to which priorities regarding maternal health services can be set at the district level. The priority setting process was greatly restricted by guidelines and limited resources from the national level. Relevant stakeholders, including community members, are not involved in the priority setting process, thereby denying them the opportunity to contribute. The findings indicate that consideration of all local plans in national planning and budgeting, as well as the involvement of all relevant stakeholders in the priority setting exercise, is essential in order to achieve consensus on the provision of emergency obstetric care services among other health service priorities.

  20. Benchmarking Hydrogen and Carbon NMR Chemical Shifts at HF, DFT, and MP2 Levels.

    PubMed

    Flaig, Denis; Maurer, Marina; Hanni, Matti; Braunger, Katharina; Kick, Leonhard; Thubauville, Matthias; Ochsenfeld, Christian

    2014-02-11

    An extensive study of error distributions for calculating hydrogen and carbon NMR chemical shifts at Hartree-Fock (HF), density functional theory (DFT), and Møller-Plesset second-order perturbation theory (MP2) levels is presented. Our investigation employs accurate CCSD(T)/cc-pVQZ calculations for providing reference data for 48 hydrogen and 40 carbon nuclei within an extended set of chemical compounds covering a broad range of the NMR scale with high relevance to chemical applications, especially in organic chemistry. Besides the approximations of HF, a variety of DFT functionals, and conventional MP2, we also present results with respect to a spin component-scaled MP2 (GIAO-SCS-MP2) approach. For each method, the accuracy is analyzed in detail for various basis sets, allowing identification of efficient combinations of method and basis set approximations.
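    The error analysis behind such a benchmark reduces to comparing each approximate method's shifts against the CCSD(T)/cc-pVQZ reference nuclei and summarizing the error distribution. A minimal sketch, with purely illustrative nuclei labels and shift values rather than data from the study:

```python
import statistics

# Compare a cheap method's isotropic chemical shifts (ppm) against
# CCSD(T)/cc-pVQZ reference values and summarize the error distribution.
# Nuclei labels and all shift values are illustrative, not study data.

reference = {"H_a": 7.26, "H_b": 2.10, "C_a": 128.5, "C_b": 21.3}
candidate = {"H_a": 7.41, "H_b": 2.02, "C_a": 131.9, "C_b": 19.8}

errors = [candidate[n] - reference[n] for n in reference]
mae = statistics.mean(abs(e) for e in errors)      # mean absolute error
max_err = max(abs(e) for e in errors)              # worst-case deviation
print(round(mae, 4), round(max_err, 1))            # 1.2825 3.4
```

    In practice hydrogen and carbon errors are tabulated separately, since the two nuclei span very different shift ranges.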

  1. Numerical Simulation of Hydrodynamics of a Heavy Liquid Drop Covered by Vapor Film in a Water Pool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, W.M.; Yang, Z.L.; Giri, A.

    2002-07-01

    A numerical study on the hydrodynamics of a droplet covered by a vapor film in a water pool is carried out. Two level set functions are used to implicitly capture the interfaces among three immiscible fluids (melt drop, vapor, and coolant). This approach leaves only one set of conservation equations for the three phases. A high-order Navier-Stokes solver, called the Cubic-Interpolated Pseudo-Particle (CIP) algorithm, is employed in combination with the level set approach, which allows large density ratios (up to 1000), surface tension, and jumps in viscosity. By this calculation, the hydrodynamic behavior of a melt droplet falling into a volatile coolant is simulated, which is of great significance in revealing the mechanism of steam explosion during a hypothetical severe reactor accident. (authors)
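    The two-level-set bookkeeping can be illustrated with a small sketch: two signed functions jointly tag the three fluids. The sign convention below is an assumption chosen for illustration, not necessarily the one used by the authors:

```python
import numpy as np

# Two level set functions phi1, phi2 tagging three immiscible phases:
#   phi1 > 0               -> melt drop
#   phi1 <= 0, phi2 > 0    -> vapor film
#   phi1 <= 0, phi2 <= 0   -> coolant
# (illustrative sign convention, not necessarily the paper's)

def classify_phases(phi1, phi2):
    phase = np.full(phi1.shape, 2)        # default: coolant
    phase[(phi1 <= 0) & (phi2 > 0)] = 1   # vapor film
    phase[phi1 > 0] = 0                   # melt drop
    return phase

# Drop of radius 0.2 wrapped in a vapor film of outer radius 0.25:
x, y = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
r = np.sqrt(x**2 + y**2)
phase = classify_phases(0.2 - r, 0.25 - r)
print(phase[50, 50], phase[50, 61], phase[0, 0])   # 0 1 2
```

    Because the phase map is derived from the two functions rather than stored, a single set of conservation equations with phase-dependent material properties suffices, as the abstract describes.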

  2. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardisty, M.; Gordon, L.; Agarwal, P.

    2007-08-15

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  3. Roles of chromatin insulator proteins in higher-order chromatin organization and transcription regulation

    PubMed Central

    Vogelmann, Jutta; Valeri, Alessandro; Guillou, Emmanuelle; Cuvier, Olivier; Nollmann, Marcelo

    2013-01-01

    Eukaryotic chromosomes are condensed into several hierarchical levels of complexity: DNA is wrapped around core histones to form nucleosomes, nucleosomes form a higher-order structure called chromatin, and chromatin is subsequently compartmentalized in part by the combination of multiple specific or unspecific long-range contacts. The conformation of chromatin at these three levels greatly influences DNA metabolism and transcription. One class of chromatin regulatory proteins called insulator factors may organize chromatin both locally, by setting up barriers between heterochromatin and euchromatin, and globally by establishing platforms for long-range interactions. Here, we review recent data revealing a global role of insulator proteins in the regulation of transcription through the formation of clusters of long-range interactions that impact different levels of chromatin organization. PMID:21983085

  4. Adaptation and implementation of standardized order sets in a network of multi-hospital corporations in rural Ontario.

    PubMed

    Meleskie, Jessica; Eby, Don

    2009-01-01

    Standardized, preprinted or computer-generated physician orders are an attractive project for organizations that wish to improve the quality of patient care. The successful development and maintenance of order sets is a major undertaking. This article recounts the collaborative experience of the Grey Bruce Health Network in adapting and implementing an existing set of physician orders for use in its three hospital corporations. An Order Set Committee composed of primarily front-line staff was given authority over the order set development, approval and implementation processes. This arrangement bypassed the traditional approval process and facilitated the rapid implementation of a large number of order sets in a short time period.

  5. Fixed-Order Mixed Norm Designs for Building Vibration Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    2000-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  6. TOOLS FOR DIAGNOSING IMPAIRMENT BY NUTRIENTS TO AQUATIC SYSTEMS

    EPA Science Inventory

    The main goal of this project is to provide information needed by States to set nutrient criteria at a level appropriately protective of their water bodies' aquatic life uses. The information that would be generated by this study is critically needed in order for states to use i...

  7. 29 CFR 1960.6 - Designation of agency safety and health officials.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... responsibility to represent effectively the interest and support of the agency head in the management and... Order 12196, and this part; (2) An organization, including provision for the designation of safety and... safety and health program at all operational levels; (3) A set of procedures that ensures effective...

  8. Mental Skills Training Experience of NCAA Division II Softball Catchers

    ERIC Educational Resources Information Center

    Norman, Shannon

    2012-01-01

    Athletes competing at all levels of sport are constantly working on ways to enhance their physical performance. Sport psychology research insists there are higher performance results among athletes who incorporate mental skills training into their practice and competition settings. In order to use the mental skills strategies effectively, athletes…

  9. Establishing the budget for the United States Government for fiscal year 2015 and setting forth appropriate budgetary levels for fiscal years 2016 through 2024.

    THOMAS, 113th Congress

    Rep. Ryan, Paul [R-WI-1

    2014-04-04

    Senate - 04/11/2014 Placed on Senate Legislative Calendar under General Orders. Calendar No. 365. Status: Agreed to in House.

  10. A Comparison of Recent Organic and Inorganic Carbon Isotope Records: Why Do They Covary in Some Settings and Not In Others?

    NASA Astrophysics Data System (ADS)

    Oehlert, A. M.; Swart, P. K.

    2013-12-01

    Covariance between inorganic and organic δ13C records has been used to determine whether a deposit has been altered by diagenesis, how the dynamics of the global carbon cycle changed during the production of the sediments in the deposit, and also for chronostratigraphic correlations. Although covariant records are observed in the ancient geologic record in a variety of depositional environments, such comparisons are not widely applied to modern deposits where definitive data regarding sediment producers, sea level fluctuations, and changes in the global carbon cycle are available. This study uses paired δ13C records from cores collected by the Ocean Drilling Program from three modern periplatform settings (the Great Bahama Bank, the Great Australian Bight, and the Great Barrier Reef), and two pelagic settings (the Walvis Ridge, and the Madingley Rise). These sites were selected in order to assess the influence of several different environmental factors, including sediment and organic matter producers, sediment mineralogy, margin architecture, sea level oscillations, and sediment transport pathways. In the three periplatform settings, multiple cores arranged in a margin to basin transect were analyzed in order to provide insights into the effects of downslope sediment transport. The preliminary results of this study suggest that sea level oscillations and margin architecture may artificially generate a covarying relationship in periplatform sediments that is unrelated to changes in the global carbon cycle. Furthermore, preliminary results from the Walvis Ridge and the Madingley Rise sediments suggest that the relationship between inorganic and organic δ13C records may not always exhibit a positive covariance as is currently assumed for pelagic carbonates.

  11. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    PubMed

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which makes it easy to handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on an NVIDIA graphics processing unit to take full advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable detection of objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
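    For orientation, the level set equation that the LBM solver targets can be sketched with a plain finite-difference update. This is a generic, hedged illustration of LSE evolution under a constant normal speed F, not the paper's LBM scheme or its image-derived body force:

```python
import numpy as np

# Minimal explicit evolution of the level set equation
#     d(phi)/dt + F |grad phi| = 0
# on a uniform grid; the paper replaces this kind of solver with a
# highly parallel lattice Boltzmann scheme driven by a body force.

def evolve(phi, F, dt, steps, h):
    for _ in range(steps):
        gy, gx = np.gradient(phi, h)                  # central differences
        phi = phi - dt * F * np.sqrt(gx**2 + gy**2)
    return phi

# Signed distance to a circle of radius 0.3; F > 0 moves the zero
# contour outward along the normal.
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
phi0 = np.sqrt(x**2 + y**2) - 0.3
phi = evolve(phi0, F=1.0, dt=0.005, steps=10, h=0.01)  # contour near r = 0.35
```

    A spatially varying F derived from edge, region, and histogram terms is what steers such an evolution toward object boundaries in segmentation.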

  12. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

    An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining the sparse matrix transform (SMT), a training model, and a localized region based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% ± 2.3% and 83.6% ± 7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  13. Anharmonic Vibrational Spectroscopy of the F-(H2O)n Complexes, n = 1, 2

    NASA Technical Reports Server (NTRS)

    Chaban, Galina M.; Xantheas, Sotiris; Gerber, R. Benny; Kwak, Dochan (Technical Monitor)

    2003-01-01

    We report anharmonic vibrational spectra (fundamentals, first overtones) for the F-(H2O) and F-(H2O)2 clusters computed at the MP2 and CCSD(T) levels of theory with basis sets of triple zeta quality. Anharmonic corrections were estimated via the correlation-corrected vibrational self-consistent field (CC-VSCF) method. The CC-VSCF anharmonic spectra obtained on the potential energy surfaces evaluated at the CCSD(T) level of theory are the first ones reported at a correlated level beyond MP2. We have found that the average basis set effect (TZP vs. aug-cc-pVTZ) is on the order of 30-40 cm⁻¹, whereas the effects of different levels of electron correlation [MP2 vs. CCSD(T)] are smaller, 20-30 cm⁻¹. However, the basis set effect is much larger in the case of the H-bonded O-H stretch of the F-(H2O) cluster, amounting to 100 cm⁻¹ for the fundamentals and 200 cm⁻¹ for the first overtones. Our calculations are in agreement with the limited available set of experimental data for the F-(H2O) and F-(H2O)2 systems and provide additional information that can guide further experimental studies.

  14. XZP + 1d and XZP + 1d-DKH basis sets for second-row elements: application to CCSD(T) zero-point vibrational energy and atomization energy calculations.

    PubMed

    Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A

    2012-09-01

    Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol⁻¹) was found for SiH4.

  15. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.

  16. Interval-Valued Rank in Finite Ordered Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff; Pogel, Alex; Purvine, Emilie

    We consider the concept of rank as a measure of the vertical levels and positions of elements of partially ordered sets (posets). We are motivated by the need for algorithmic measures on large, real-world hierarchically-structured data objects like the semantic hierarchies of ontological databases. These rarely satisfy the strong property of gradedness, which is required for traditional rank functions to exist. Representing such semantic hierarchies as finite, bounded posets, we recognize the duality of ordered structures to motivate rank functions which respect verticality both from the bottom and from the top. Our rank functions are thus interval-valued, and always exist, even for non-graded posets, providing order homomorphisms to an interval order on the interval-valued ranks. The concept of rank width arises naturally, allowing us to identify the poset region with point-valued width as its longest graded portion (which we call the "spindle"). A standard interval rank function is naturally motivated both in terms of its extremality and on pragmatic grounds. Its properties are examined, including the relationship to traditional grading and rank functions, and methods to assess comparisons of standard interval-valued ranks.
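    One concrete way to read the bottom/top duality: give every element the interval [height(x), H - depth(x)], where height counts the longest chain below x, depth the longest chain above it, and H is the longest chain in the poset. The sketch below follows that reading; the paper's precise "standard interval rank" may differ in detail:

```python
# Interval-valued rank sketch for a small non-graded poset given by its
# covering relation (x -> elements that cover x). Illustrates the
# bottom/top duality from the abstract; the paper's standard interval
# rank function may differ in detail.

covers = {
    "a": ["b", "c"],   # a < b, a < c
    "b": ["d"],        # chain a < b < d has length 2...
    "c": ["e"],
    "e": ["d"],        # ...but a < c < e < d has length 3: not graded
    "d": [],
}
lower = {x: [y for y in covers if x in covers[y]] for x in covers}

def height(x):   # longest chain (in edges) from a minimal element up to x
    return max((height(y) + 1 for y in lower[x]), default=0)

def depth(x):    # longest chain from x up to a maximal element
    return max((depth(y) + 1 for y in covers[x]), default=0)

H = max(height(x) + depth(x) for x in covers)        # longest chain: 3
rank = {x: (height(x), H - depth(x)) for x in covers}
print(rank["b"], rank["a"])   # (1, 2) (0, 0)
```

    Here "b" gets the interval (1, 2): its vertical position is ambiguous between two levels, which is exactly the nonzero rank width that graded posets lack, while "a", "c", "e", and "d" have point-valued ranks.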

  17. Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2016-01-01

    Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.

  18. Geometric Energy Derivatives at the Complete Basis Set Limit: Application to the Equilibrium Structure and Molecular Force Field of Formaldehyde.

    PubMed

    Morgan, W James; Matthews, Devin A; Ringholm, Magnus; Agarwal, Jay; Gong, Justin Z; Ruud, Kenneth; Allen, Wesley D; Stanton, John F; Schaefer, Henry F

    2018-03-13

    Geometric energy derivatives which rely on core-corrected focal-point energies extrapolated to the complete basis set (CBS) limit of coupled cluster theory with iterative and noniterative quadruple excitations, CCSDTQ and CCSDT(Q), are used as elements of molecular gradients and, in the case of CCSDT(Q), expansion coefficients of an anharmonic force field. These gradients are used to determine the CCSDTQ/CBS and CCSDT(Q)/CBS equilibrium structure of the S0 ground state of H2CO, where excellent agreement is observed with previous work and experimentally derived results. A fourth-order expansion about this CCSDT(Q)/CBS reference geometry using the same level of theory produces an exceptional level of agreement with spectroscopically observed vibrational band origins, with a mean absolute error (MAE) of 0.57 cm⁻¹. Second-order vibrational perturbation theory (VPT2) and variational discrete variable representation (DVR) results are contrasted and discussed. Vibration-rotation, anharmonicity, and centrifugal distortion constants from the VPT2 analysis are reported and compared to previous work. Additionally, an initial application of a sum-over-states fourth-order vibrational perturbation theory (VPT4) formalism is employed herein, utilizing quintic and sextic derivatives obtained with a recursive algorithmic approach for response theory.

  19. Depositional framework and sequence stratigraphic aspects of the Coniacian Santonian mixed siliciclastic/carbonate Matulla sediments in Nezzazat and Ekma blocks, Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    El-Azabi, M. H.; El-Araby, A.

    2007-04-01

    Superb outcrops of mixed siliciclastic/carbonate rocks mark the Coniacian-Santonian Matulla Formation exposed in the Nezzazat and Ekma blocks, west central Sinai. They are built up of various lithofacies that reflect minor fluctuations in relative sea level from lower intertidal to slightly deep subtidal settings. Relying on the facies characteristics and stratal geometries, the siliciclastic rocks are divided into seven depositional facies, including beach foreshore laminated sands, upper shoreface cross-bedded sandstone, lower shoreface massive bioturbated and wave-rippled sandstones, shallow subtidal siltstone and deep subtidal shale/claystone. The carbonate rocks comprise lower intertidal lime-mudstone, floatstone and dolostone, shallow subtidal skeletal shoal of oyster rudstone/bioclastic grainstone, and shoal margin packstone. Oolitic grain-ironstones and ferribands partially intervene between the facies types. Deposition took place under varied conditions of restricted to partly open marine circulation, low to high wave energy, and normal to raised salinity, during alternating periods of abundant and interrupted clastic supply. The facies types are arranged into asymmetric upward-shallowing cycles that record multiple small-scale transgressive-regressive events. Lime-mudstone and sandstone normally terminate the regressive events. Four sequence boundaries marking regional relative sea-level falls divide the Matulla Formation into three stratigraphic units. These boundaries are the Turonian/Coniacian, intra-Coniacian, Coniacian/Santonian and Santonian/Campanian. They do not match the sequence boundaries proposed in the global eustatic curves of Haq et al. (1988), except for the sea-level fall associated with the intra-Coniacian boundary. The other sequence boundaries resulted from the regional tectonic impact of the Syrian Arc Fold System, which was initiated in northern Egypt during the latest Turonian-Coniacian. These boundaries enclose three well-defined 3rd-order depositional sequences; their constituent shallowing-upward cycles (i.e. parasequences) record 4th-order relative sea-level fluctuations. A total of 34 and 20 parasequence sets, on the order of a few meters to tens of meters thick, mark the Matulla sequences in the Nezzazat and Ekma blocks, respectively. Each sequence shows an initial phase of rapid sea-level rise with retrogradational sets, followed by sea-level lowering and progradation/aggradation of the parasequence sets. The transgressive deposits display a predominance of deep subtidal lagoonal facies, while the highstand deposits show an increase in siliciclastic and carbonate facies with a progressive decrease in lagoonal facies. The sedimentary patterns and environments suggest that regional, partly eustatic sea-level changes (i.e. intra-Coniacian) controlled the overall architecture of the sequence distribution, whereas changes in clastic input controlled the variations in facies associations within each depositional sequence.

  20. On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-07

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
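    The counterpoise idea underlying both schemes can be stated for the simplest case, a dimer AB, as the Boys-Bernardi correction; the function name and energies below are illustrative placeholders, not the paper's data.

```python
# Standard Boys-Bernardi counterpoise correction for a dimer AB, the building
# block that site-site and Valiron-Mayer schemes generalise to n-body clusters.
# Energies are illustrative placeholders, in hartree.

def cp_interaction_energy(e_ab_in_ab, e_a_in_ab, e_b_in_ab):
    # Each monomer energy is evaluated in the full dimer basis ("ghost"
    # functions on the partner's sites), so the artificial lowering of the
    # monomer energies -- the BSSE -- cancels in the difference.
    return e_ab_in_ab - e_a_in_ab - e_b_in_ab

# Illustrative numbers for a hydrogen-bonded pair of water-like monomers:
e_int = cp_interaction_energy(-152.0712, -76.0331, -76.0302)
print(f"{e_int:.4f} hartree")  # -0.0079 hartree
```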

  1. Paving the COWpath: data-driven design of pediatric order sets

    PubMed Central

    Zhang, Yiye; Padman, Rema; Levin, James E

    2014-01-01

    Objective Evidence indicates that users incur significant physical and cognitive costs in the use of order sets, a core feature of computerized provider order entry systems. This paper develops data-driven approaches for automating the construction of order sets that match closely with user preferences and workflow while minimizing physical and cognitive workload. Materials and methods We developed and tested optimization-based models embedded with clustering techniques using physical and cognitive click cost criteria. By judiciously learning from users’ actual actions, our methods identify items for constituting order sets that are relevant according to historical ordering data and grouped on the basis of order similarity and ordering time. We evaluated performance of the methods using 47 099 orders from the year 2011 for asthma, appendectomy and pneumonia management in a pediatric inpatient setting. Results In comparison with existing order sets, those developed using the new approach significantly reduce the physical and cognitive workload associated with usage by 14–52%. This approach is also capable of accommodating variations in clinical conditions that affect order set usage and development. Discussion There is a critical need to investigate the cognitive complexity imposed on users by complex clinical information systems, and to design their features according to ‘human factors’ best practices. Optimizing order set generation using cognitive cost criteria introduces a new approach that can potentially improve ordering efficiency, reduce unintended variations in order placement, and enhance patient safety. Conclusions We demonstrate that data-driven methods offer a promising approach for designing order sets that are generalizable, data-driven, condition-based, and up to date with current best practices. PMID:24674844
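    The abstract does not give the optimization model, so the sketch below only illustrates the general idea of deriving an order set from historical ordering data with a simple frequency filter. All names, items, and thresholds are hypothetical; the paper's actual method embeds clustering in an optimization model with click-cost criteria.

```python
# Hedged sketch (not the paper's model): build a candidate order set by keeping
# items whose usage frequency across historical ordering sessions exceeds a
# support threshold. Including a frequent item saves a manual search ("click
# cost") each time it is needed; a rare item adds scanning cost on every use.
from collections import Counter

def build_order_set(sessions, min_support=0.5):
    # sessions: list of sets of ordered item names for one clinical condition
    counts = Counter(item for s in sessions for item in s)
    n = len(sessions)
    return sorted(item for item, c in counts.items() if c / n >= min_support)

# Hypothetical asthma-management sessions:
sessions = [
    {"albuterol", "prednisone", "o2_sat_monitor"},
    {"albuterol", "prednisone"},
    {"albuterol", "ipratropium", "o2_sat_monitor"},
    {"albuterol", "prednisone", "ipratropium"},
]
print(build_order_set(sessions, min_support=0.7))  # ['albuterol', 'prednisone']
```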

  2. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
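    As a concrete illustration of approaching a basis set limit, a common two-point extrapolation assumes the correlation energy converges as E(X) = E_CBS + A·X⁻³ in the basis cardinal number X. This is a standard Helgaker-style scheme, not necessarily the procedure used for periodic LiH in the paper, and the energies below are placeholders.

```python
# Hedged sketch of a common two-point complete-basis-set (CBS) extrapolation
# for correlation energies: E(X) = E_CBS + A * X**-3, where X is the cardinal
# number of the basis (3 for triple-zeta, 4 for quadruple-zeta, ...).

def cbs_two_point(e_x, x, e_y, y):
    # Solve E(X) = E_CBS + A X^-3 and E(Y) = E_CBS + A Y^-3 for E_CBS.
    return (e_x * x**3 - e_y * y**3) / (x**3 - y**3)

# Illustrative MP2 correlation energies (hartree) at triple- and quadruple-zeta:
print(round(cbs_two_point(-0.300, 3, -0.320, 4), 4))  # -0.3346
```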

  3. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with a real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., smaller actuator requirements). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each controller structure handling the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
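    The fractional-order operators inside such controllers are commonly discretized with the Grünwald-Letnikov definition; a minimal sketch follows, using an illustrative signal rather than anything from the paper.

```python
# Hedged sketch: discrete Grunwald-Letnikov approximation of a fractional-order
# derivative D^alpha, the kind of operator appearing in FO fuzzy PID designs.
# The weights follow the recursion w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k),
# i.e. the signed binomial coefficients (-1)^k C(alpha, k).

def gl_fractional_derivative(samples, alpha, dt):
    # samples: signal values e(t_0), e(t_1), ... at uniform spacing dt
    n = len(samples)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    # D^alpha e(t_n) ~ dt**-alpha * sum_k w_k * e(t_{n-k})
    return dt ** (-alpha) * sum(w[k] * samples[-1 - k] for k in range(n))

# Sanity check: alpha = 1 reduces to a backward first difference.
vals = [0.0, 1.0, 2.0, 3.0]
print(gl_fractional_derivative(vals, 1.0, 0.5))  # (3 - 2) / 0.5 = 2.0
```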

  4. Silicon compilation: From the circuit to the system

    NASA Astrophysics Data System (ADS)

    Obrien, Keven

    The methodology used for the compilation of silicon from a behavioral level to a system level is presented. The aim was to link the heretofore unrelated areas of high-level synthesis and system-level design. This link will play an important role in the development of future design automation tools, as it will allow hardware/software co-designs to be synthesized. A design methodology is presented that allows, through the use of an intermediate representation (SOLAR), a system-level design language (SDL) to be combined with a hardware description language (VHDL). Two main steps are required to transform this specification into a synthesizable one. First, a system-level synthesis step, including partitioning and communication synthesis, splits the model into a set of interconnected subsystems, each of which is then processed by a high-level synthesis tool. For this latter step, AMICAL is used; it provides powerful scheduling techniques that accept very abstract descriptions of control-flow-dominated circuits as input and generates interconnected RTL blocks that may feed existing logic-level synthesis tools.

  5. Direct observation of vibrational energy dispersal via methyl torsions.

    PubMed

    Gardner, Adrian M; Tuttle, William D; Whalley, Laura E; Wright, Timothy G

    2018-02-28

    Explicit evidence for the role of methyl rotor levels in promoting energy dispersal is reported. A set of coupled zero-order vibration/vibration-torsion (vibtor) levels in the S1 state of para-fluorotoluene (pFT) is investigated. Two-dimensional laser-induced fluorescence (2D-LIF) and two-dimensional zero-kinetic-energy (2D-ZEKE) spectra are reported, and the assignment of the main features in both sets of spectra reveals that the methyl torsion is instrumental in providing a route for coupling between vibrational levels of different symmetry classes. We find that there is very localized, and selective, dissipation of energy via doorway states, and that, in addition to an increase in the density of states, a critical role of the methyl group is a relaxation of symmetry constraints compared to direct vibrational coupling.

  6. Facilitators and barriers to the use of standing orders for vaccination in obstetrics and gynecology settings.

    PubMed

    Barnard, Juliana G; Dempsey, Amanda F; Brewer, Sarah E; Pyrzanowski, Jennifer; Mazzoni, Sara E; O'Leary, Sean T

    2017-01-01

    Many young and middle-aged women receive their primary health care from their obstetrician-gynecologists. A recent change to vaccination recommendations during pregnancy has forced the integration of new clinical processes at obstetrician-gynecology practices. Evidence-based best practices for vaccination delivery include the establishment of vaccination standing orders. As part of an intervention to increase adoption of evidence-based vaccination strategies for women in safety-net and private obstetrician-gynecology settings, we conducted a qualitative study to identify the facilitators and barriers experienced by obstetrician-gynecology sites when establishing vaccination standing orders. At 6 safety-net and private obstetrician-gynecology practices, 51 semistructured interviews were completed by trained qualitative researchers over 2 years with clinical staff and vaccination program personnel. Standardized qualitative research methods were used during data collection and team-based data analysis to identify major themes and subthemes within the interview data. All study practices achieved partial to full implementation of vaccine standing orders for human papillomavirus, tetanus-diphtheria-pertussis, and influenza vaccines. Facilitating factors for vaccine standing order adoption included process standardization, acceptance of a continual modification process, and staff training. Barriers to vaccine standing order adoption included practice- and staff-level competing demands, pregnant women's preference for medical providers to discuss vaccine information with them, and staff hesitation in determining HPV vaccine eligibility. With guidance and commitment to the integration of new processes, obstetrician-gynecology practices are able to establish vaccine standing orders for pregnant and nonpregnant women. Attention to these process barriers can aid the adoption of processes that support vaccination delivery in obstetrician-gynecology practice settings and provide access to preventive health care for many women. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Quantum-chemical investigation of the structures and electronic spectra of the nucleic acid bases at the coupled cluster CC2 level.

    PubMed

    Fleig, Timo; Knecht, Stefan; Hättig, Christof

    2007-06-28

    We study the ground-state structures and singlet- and triplet-excited states of the nucleic acid bases by applying the coupled cluster model CC2 in combination with a resolution-of-the-identity approximation for electron interaction integrals. Both basis set effects and the influence of dynamic electron correlation on the molecular structures are elucidated; the latter by comparing CC2 with Hartree-Fock and Møller-Plesset perturbation theory to second order. Furthermore, we investigate basis set and electron correlation effects on the vertical excitation energies and compare our highest-level results with experiment and other theoretical approaches. It is shown that small basis sets are insufficient for obtaining accurate results for excited states of these molecules and that the CC2 approach to dynamic electron correlation is a reliable and efficient tool for electronic structure calculations on medium-sized molecules.

  8. Sequence stratigraphy and reservoir architecture of the J18/20 and J15 sequences in PM-9, Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, R.A.; Said, Md.J.; Bedingfield, J.R.

    1994-07-01

    The group J stratigraphic interval is lower Miocene (18.5-21 Ma) in age and was deposited during the early sag phase of the Malay Basin structural development. Reduction in depositional relief and the first evidence of widespread marine influence characterize the transition into this interval. Twelve group J sequences have been identified. Reservoirs consist of progradational to aggradational tidally-dominated paralic to shallow marine sands deposited in the lowstand systems tract. Transgressive and highstand deposits are dominantly offshore shales. In PM-9, the original rift-related depocenters, coupled with changes in relative sea level, have strongly influenced group J unit thickness and the distribution of reservoir and seal facies. Two important reservoir intervals in PM-9 are the J18/20 and J15 sands. The reservoirs in these intervals are contained within the lowstand systems tracts of fourth-order sequences. These fourth-order sequences stack to form sequence sets in response to a third-order change in relative sea level. The sequences of the J18/20 interval stack to form part of a lowstand sequence set, whereas the J15 interval forms part of the transgressive sequence set. Reservoir facies range from tidal bars and subtidal shoals in the J18/20 interval to lower shoreface sands in the J15. Reservoir quality and continuity in group J reservoirs are dependent on depositional facies. An understanding of the controls on the distribution of facies types is crucial to the success of the current phase of field development and exploration programs in PM-9.

  9. Propagation of solutions to the Fisher-KPP equation with slowly decaying initial data

    NASA Astrophysics Data System (ADS)

    Henderson, Christopher

    2016-10-01

    The Fisher-KPP equation is a model for population dynamics that has generated a huge amount of interest since its introduction in 1937. The speed with which a population spreads has been computed quite precisely when the initial data, u0, decays exponentially. More recently, though, the case when the initial data decays more slowly has been studied. In Hamel F and Roques L (2010 J. Differ. Equ. 249 1726-45), the authors show that the level sets of height m of u move super-linearly and may be bounded above and below by expressions of the form u0^{-1}(c_m e^{-t}) when u0 decays algebraically of a small enough order. The constants c_m for the upper and lower bounds that they obtain are not explicit and do not match. In this paper, we improve their precision for a broader class of initial data and for a broader class of equations. In particular, our approach yields the explicit highest-order term in the location of the level sets, which in the most basic setting is given by u0^{-1}(m e^{-t}/(1-m)) as long as u0 decays more slowly than e^{-sqrt(x)}. We generalize this to the previously unstudied setting when the nonlinearity is periodic in space. In addition, for large times, we characterize the profile of the solution in terms of a generalized logistic equation.
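    The closed-form level-set location can be evaluated numerically for a concrete slowly decaying profile. The profile u0(x) = (1 + x)^(-a) below is an illustrative choice, not data from the paper.

```python
# Hedged numerical sketch of the leading-order level-set location from the
# abstract: for slowly decaying initial data, the level set {u = m} sits at
#   x_m(t) = u0^{-1}( m e^{-t} / (1 - m) ).
# Here u0(x) = (1 + x)**-a is an illustrative algebraically decaying profile.
import math

def level_set_location(m, t, a):
    target = m * math.exp(-t) / (1.0 - m)   # value u0 must take at x_m(t)
    return target ** (-1.0 / a) - 1.0       # invert u0(x) = (1 + x)**-a

# With m = 1/2 and a = 2 the front position grows like e^{t/2}: super-linear
# spreading, in contrast to the constant-speed exponential-data case.
for t in (5.0, 10.0):
    print(round(level_set_location(0.5, t, 2.0), 2))  # 11.18, then 147.41
```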

  10. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes

    NASA Astrophysics Data System (ADS)

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-01

    Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, prompting us to explore in this paper the performance of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. 
    At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviations from the computational reference of less than 1 kcal mol⁻¹. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).
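    The additivity assumption in step (3) is simple arithmetic, written out below with placeholder interaction energies (illustrative values, not results from the study).

```python
# The additivity assumption in step (3): the CCSD(T)/CBS estimate combines the
# MP2 interaction energy at the CBS limit with the higher-order correlation
# shift Delta E_CC-MP evaluated in a finite (aug-cc-pVTZ) basis.
# Numbers are illustrative placeholders in kcal/mol, not values from the study.

def ccsdt_cbs_estimate(e_mp2_cbs, e_ccsdt_avtz, e_mp2_avtz):
    delta_cc_mp = e_ccsdt_avtz - e_mp2_avtz   # higher-order correlation effect
    return e_mp2_cbs + delta_cc_mp

print(round(ccsdt_cbs_estimate(-4.10, -4.05, -4.00), 2))  # -4.15
```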

  11. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes.

    PubMed

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-28

    Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, prompting us to explore in this paper the performance of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. 
    At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviations from the computational reference of less than 1 kcal mol⁻¹. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).

  12. Entanglement spectrum as a generalization of entanglement entropy: identification of topological order in non-Abelian fractional quantum Hall effect states.

    PubMed

    Li, Hui; Haldane, F D M

    2008-07-04

    We study the "entanglement spectrum" (a presentation of the Schmidt decomposition analogous to a set of "energy levels") of a many-body state, and compare the Moore-Read model wave function for the ν=5/2 fractional quantum Hall state with a generic 5/2 state obtained by finite-size diagonalization of the second-Landau-level-projected Coulomb interactions. Their spectra share a common "gapless" structure, related to conformal field theory. In the model state, these are the only levels, while in the "generic" case, they are separated from the rest of the spectrum by a clear "entanglement gap", which appears to remain finite in the thermodynamic limit. We propose that the low-lying entanglement spectrum can be used as a "fingerprint" to identify topological order.
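    For a small pure state, the entanglement spectrum can be computed from a Schmidt (singular value) decomposition. A minimal sketch follows, using the two-qubit singlet as an illustrative state; the states studied in the paper are of course far larger.

```python
# Hedged sketch: the "entanglement spectrum" for a bipartition A|B of a pure
# state. Reshape the amplitude vector into a matrix psi[a, b]; the Schmidt
# values are its singular values s_i, and the spectral "levels" are
# xi_i = -2 ln s_i, so that e^{-xi_i} are the reduced-density-matrix eigenvalues.
import numpy as np

def entanglement_spectrum(psi, dim_a):
    m = psi.reshape(dim_a, -1)                 # bipartition A | B
    s = np.linalg.svd(m, compute_uv=False)     # Schmidt values
    s = s[s > 1e-12]                           # drop numerically zero values
    return -2.0 * np.log(s)

# Two-qubit singlet: two equal Schmidt values 1/sqrt(2), so both levels sit
# at ln 2, i.e. about 0.693 -- a maximally "flat" two-level spectrum.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(entanglement_spectrum(singlet, 2))
```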

  13. Does user-centred design affect the efficiency, usability and safety of CPOE order sets?

    PubMed Central

    Chan, Julie; Shojania, Kaveh G; Easty, Anthony C

    2011-01-01

    Background Application of user-centred design principles to computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. Objective We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design, UCD), and existing pre-printed paper order sets (Paper). Participants 27 staff physicians, residents and medical students. Setting Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Methods Participants completed four simulated order set tasks with three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). Order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Main Measures Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). Results 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared to the UCD format), and CPOE Test format 637 s (p<0.0001 compared to the UCD format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in the number of errors between formats. Conclusions The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation. PMID:21486886

  14. Evaluation research of small and medium-sized enterprise informatization on big data

    NASA Astrophysics Data System (ADS)

    Yang, Na

    2017-09-01

    Against the background of big data, improving the informatization level of small and medium-sized enterprises is a key task; the cost of informatization construction is large, but such investment can bring benefits to these enterprises. This paper establishes an informatization evaluation system for small and medium-sized enterprises covering hardware and software security, information organization, information technology application and profitability, and information capability. Rough set theory is used to reduce the evaluation indexes, and the evaluation is then carried out with a support vector machine (SVM) model. Finally, a worked example verifies the effectiveness of the method.

  15. Low frequency vibrational spectra and the nature of metal-oxygen bond of alkaline earth metal acetylacetonates

    NASA Astrophysics Data System (ADS)

    Fakheri, Hamideh; Tayyari, Sayyed Faramarz; Heravi, Mohammad Momen; Morsali, Ali

    2017-12-01

    Theoretical quantum chemistry calculations were used to assign the observed vibrational band frequencies of Be, Mg, Ca, Sr, and Ba acetylacetonate complexes. Density functional theory (DFT) calculations have been carried out at the B3LYP level, using LanL2DZ, def2SVP, and mixed GenECP (def2SVP for metal ions and 6-311++G** for all other atoms) basis sets. The B3LYP level with mixed basis sets was utilized for calculations of vibrational frequencies, IR intensities, and Raman activities. Analysis of the vibrational spectra indicates that several bands can be assigned mainly to metal-oxygen vibrations. The strongest Raman band in this region can be used as a measure of the stability of the complex. The effects of the central metal on the bond orders and charge distributions in alkaline earth metal acetylacetonates were studied by the Natural Bond Orbital (NBO) method for fully optimized compounds. Optimizations were performed at the B3LYP/6-311++G** level for the lighter alkaline earth metal complexes (Be, Mg, and Ca acetylacetonates), while the B3LYP level with LanL2DZ (with extra d and f functions on oxygen and metal atoms), def2SVP, and mixed (def2SVP on metal ions and 6-311++G** for all other atoms) basis sets was used for all of the complexes under study. Calculations indicate that the covalent character of the metal-oxygen bonds decreases considerably from the Be to the Ba complex. The nature of the metal-oxygen bond was further studied using Atoms in Molecules (AIM) analysis. The topological parameters, Wiberg bond orders, natural charges of O atoms and metal ions, and some vibrational band frequencies were correlated with the stability constants of the complexes under study.

  16. Survey of Library and Information Manpower Needs in the Caribbean. (Preliminary Version).

    ERIC Educational Resources Information Center

    Moore, Nick

    In order to provide a base for national information planning and the restructuring of existing training institutions, a detailed study was conducted of manpower needs--at professional, paraprofessional, and technician levels--for information systems and services in the Caribbean region. A paper setting out the basic principles underlying manpower…

  17. Building Capacity through International Student Affairs Exchange

    ERIC Educational Resources Information Center

    Roberts, Dennis C.; Roberts, Darbi L.

    2012-01-01

    In order to build local capacity in an international higher education setting, the Qatar Study Tour and Young Professionals Institute (QST and YPI) was created as an inquiry-based learning experience shared among diverse participants and designed to enhance learning at both the local and international levels. The intent of the QST and YPI model…

  18. Massachusetts Study of Teacher Supply and Demand: Trends and Projections

    ERIC Educational Resources Information Center

    Levin, Jesse; Berg-Jacobson, Alex; Atchison, Drew; Lee, Katelyn; Vontsolos, Emily

    2015-01-01

    In April 2015, the Massachusetts Department of Elementary and Secondary Education (ESE) commissioned American Institutes for Research (AIR) to develop a comprehensive set of 10-year projections of teacher supply and demand in order to inform planning for future workforce needs. This included state-level projections both in the aggregate, as well…

  19. Campus Schools: The Search for Safe and Orderly Environment in Large School Settings

    ERIC Educational Resources Information Center

    Ortiz, Monica

    2012-01-01

    Establishing "new small schools" is a major focus of school improvement, especially at the high school level, with the hopes of increasing academic success and reducing violence. Key arguments for small schools are the personalization of schooling and increased academic performance. The structures and process of small schools are…

  20. Student Perspectives on Quality Teaching: Words and Images

    ERIC Educational Resources Information Center

    Bell, Athene; Ewaida, Marriam; Lynch, Megan R.; Zenkov, Kristien

    2011-01-01

    This article reports on the findings of a photography and literacy project ("Through Students' Eyes") the authors conducted with middle level English language learners and alternative high school youth from a mid-Atlantic (US) ex-urban area. In order to bridge middle and high school settings, the authors used multimodal and photo…

  1. Exploring the Classroom: Teaching Science in Early Childhood

    ERIC Educational Resources Information Center

    Dejonckheere, Peter J. N.; de Wit, Nele; van de Keere, Kristof; Vervaet, Stephanie

    2016-01-01

    This study tested and integrated the effects of an inquiry-based didactic method for preschool science in a real practical classroom setting. Four preschool classrooms participated in the experiment (N = 57) and the children were 4-6 years old. In order to assess children's attention for causal events and their understanding at the level of…

  3. Comparisons of Air Radiation Model with Shock Tube Measurements

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; McCorkle, Evan; Bogdanoff, David W.; Allen, Gary A., Jr.

    2009-01-01

    This paper presents an assessment of the predictive capability of a shock layer radiation model appropriate for NASA's Orion Crew Exploration Vehicle lunar return entry. A detailed set of spectrally resolved radiation intensity comparisons is made with recently conducted tests in the Electric Arc Shock Tube (EAST) facility at NASA Ames Research Center. The spectral range spanned from a vacuum ultraviolet wavelength of 115 nm to an infrared wavelength of 1400 nm. The analysis is done for 9.5-10.5 km/s shocks passing through room-temperature synthetic air at 0.2, 0.3, and 0.7 Torr. The comparisons between model and measurements show discrepancies in the level of background continuum radiation and in the intensities of atomic lines. Impurities in the EAST facility in the form of carbon-bearing species are also modeled to estimate the level of contaminants and their impact on the comparisons. The discrepancies, although large in some cases, exhibit order and consistency. A set of test and analysis improvements is proposed as a forward work plan in order to confirm or reject the various proposed explanations for the observed discrepancies.

  4. Toward Defining, Measuring, and Evaluating LGBT Cultural Competence for Psychologists

    PubMed Central

    Boroughs, Michael S.; Andres Bedoya, C.; O'Cleirigh, Conall; Safren, Steven A.

    2015-01-01

    A central part of providing evidence-based practice is appropriate cultural competence to facilitate psychological assessment and intervention with diverse clients. At a minimum, cultural competence with lesbian, gay, bisexual, and transgender (LGBT) people involves adequate scientific and supervised practical training, with increasing depth and complexity across training levels. In order to further this goal, we offer 28 recommendations of minimum standards moving toward ideal training for LGBT-specific cultural competence. We review and synthesize the relevant literature to achieve and assess competence across the various levels of training (doctoral, internship, post-doctoral, and beyond) in order to guide the field towards best practices. These recommendations are aligned with educational and practice guidelines set forth by the field and informed by other allied professions in order to provide a roadmap for programs, faculty, and trainees in improving the training of psychologists to work with LGBT individuals. PMID:26279609

  5. Nothing to it: Precursors to a Zero Concept in Preschoolers

    PubMed Central

    Merritt, Dustin J.; Brannon, Elizabeth M.

    2013-01-01

    Do young children understand the numerical value of empty sets prior to developing a concept of symbolic zero? Are empty sets represented as mental magnitudes? In order to investigate these questions, we tested 4-year-old children and adults with a numerical ordering task in which the goal was to select two stimuli in ascending numerical order, with occasional empty set stimuli. Both children and adults showed distance effects for empty sets. Children who were unable to order the symbol zero (e.g., 0 < 1), but who successfully ordered countable integers (e.g., 2 < 4), nevertheless showed distance effects with empty sets. These results suggest that empty sets are represented on the same numerical continuum as non-empty sets and that children represent empty sets numerically prior to understanding symbolic zero. PMID:23219980

  6. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented Level-Set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level; ii) discontinuous treatment of thermophysical properties (except for μ); and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits and disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS than with the standard level set method. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall, the additional computational costs associated with GALS are almost the same as those of the standard level set technique.
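
    The reinitialization step that both GALS and the standard level set method rely on can be illustrated with a minimal 1D sketch (a hypothetical example, not the authors' solver): relax the level set function toward a signed distance function, |dφ/dx| = 1, using a Godunov upwind scheme, while the smoothed sign function holds the zero crossing in place.

```python
import numpy as np

# Hypothetical 1D illustration of PDE-based level set reinitialization:
# relax phi toward a signed distance function (|d phi/dx| = 1) while
# keeping its zero crossing (the "interface") approximately fixed.
def reinitialize(phi, dx, iterations=300):
    sign = phi / np.sqrt(phi**2 + dx**2)   # smoothed sign(phi)
    dtau = 0.5 * dx                        # pseudo-time step (CFL-limited)
    for _ in range(iterations):
        dmx = np.diff(phi, prepend=phi[0]) / dx    # backward difference
        dpx = np.diff(phi, append=phi[-1]) / dx    # forward difference
        # Godunov upwind Hamiltonian for |d phi/dx|
        a_p, a_m = np.maximum(dmx, 0.0), np.minimum(dmx, 0.0)
        b_p, b_m = np.maximum(dpx, 0.0), np.minimum(dpx, 0.0)
        grad = np.where(sign > 0,
                        np.sqrt(np.maximum(a_p**2, b_m**2)),
                        np.sqrt(np.maximum(a_m**2, b_p**2)))
        phi = phi - dtau * sign * (grad - 1.0)
    return phi

x = np.linspace(-1.0, 1.0, 201)
phi0 = 0.5 * (x - 0.2)            # distorted level set, zero at x = 0.2
phi = reinitialize(phi0, x[1] - x[0])
# after reinitialization the slope magnitude is close to 1 in the interior
# and the zero crossing remains near x = 0.2
```

    The same relaxation, with the gradients carried as additional unknowns, is what the GALS strategy augments.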

  7. On a production system using default reasoning for pattern classification

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Lowe, Carlyle M.

    1990-01-01

    This paper addresses an unconventional application of a production system to a problem involving belief specialization. The production system reduces a large quantity of low-level descriptions into just a few higher-level descriptions that encompass the problem space in a more tractable fashion. This classification process utilizes a set of descriptions generated by combining the component hierarchy of a physical system with the semantics of the terminology employed in its operation. The paper describes an application of this process in a program, constructed in C and CLIPS, that classifies signatures of electromechanical system configurations. The program compares two independent classifications, describing the actual and expected system configurations, in order to generate a set of contradictions between the two.

  8. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels by using sound absorbers. A room needs acoustic treatment, with absorbers installed, in order to decrease reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no system for locating the optimum number and placement of sound absorbers. It would be wasteful to over-treat a room with absorbers, while treating it with insufficient absorbers leaves it improperly treated. This study aims to determine the number of sound absorbers needed and the optimum locations for their placement in order to reduce the overall sound pressure level in a specified room, using the ANSYS APDL software. The required absorber area is found to be 11 m² using the Sabine equation, and different sets of absorbers, each with the same total area, are applied to the walls to investigate the best configuration. All three sets (a single absorber, 11 absorbers, and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction, 24.2 dB, was achieved with 44 absorbers evenly distributed around the walls; the least effective configuration was the single absorber, which reduced the overall sound pressure level by 18.4 dB.

  9. Construction of Weak and Strong Similarity Measures for Ordered Sets of Documents Using Fuzzy Set Techniques.

    ERIC Educational Resources Information Center

    Egghe, L.; Michel, C.

    2003-01-01

    Ordered sets (OS) of documents are encountered more and more in information distribution systems, such as information retrieval systems. Classical similarity measures for ordinary sets of documents need to be extended to these ordered sets. This is done in this article using fuzzy set techniques. The practical usability of the OS-measures is…
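
    One simple way to extend a set-overlap similarity to ranked document lists, in the fuzzy-set spirit of the article, is to treat rank as a membership degree. The sketch below is a hypothetical illustration (a rank-weighted Jaccard measure), not the OS-measures constructed by Egghe and Michel:

```python
# Hypothetical illustration: extend set overlap to ranked lists by giving
# each document a fuzzy membership degree that decreases with rank, then
# compare with a min/max (fuzzy Jaccard) ratio. NOT the article's OS-measure.
def rank_weights(ranked_docs):
    n = len(ranked_docs)
    return {doc: (n - i) / n for i, doc in enumerate(ranked_docs)}

def weighted_jaccard(list_a, list_b):
    wa, wb = rank_weights(list_a), rank_weights(list_b)
    docs = set(wa) | set(wb)
    num = sum(min(wa.get(d, 0.0), wb.get(d, 0.0)) for d in docs)
    den = sum(max(wa.get(d, 0.0), wb.get(d, 0.0)) for d in docs)
    return num / den if den else 1.0

a = ["d1", "d2", "d3", "d4"]
b = ["d2", "d1", "d3", "d5"]
print(round(weighted_jaccard(a, b), 3))  # lists agree strongly at top ranks
```

    Unlike plain Jaccard on the underlying sets, this measure rewards agreement near the top of the ranking more than agreement near the bottom.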

  10. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    NASA Astrophysics Data System (ADS)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent wave-breaking phenomena, maintaining a sharp interface without any need to smear the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves, and plunging events. The computational results include convergence of the trim angle, sinkage, and resistance under grid refinement; high-quality experimental data are used for validation, allowing comparison of the hydrodynamic forces and the attitudes assumed at different velocities. The very good agreement between numerical and experimental results demonstrates the reliability of the single-phase level set approach for the prediction of high Froude number flows.

  11. Priority setting in health care: on the relation between reasonable choices on the micro-level and the macro-level.

    PubMed

    Baerøe, Kristine

    2008-01-01

    There has been much discussion about how to obtain legitimacy at macro-level priority setting in health care by use of fair procedures, but how should we consider priority setting by individual clinicians or health workers at the micro-level? Despite the fact that just health care totally hinges upon their decisions, surprisingly little attention seems to be paid to the legitimacy of these decisions. This paper addresses the following question: what conditions have to be met in order to ensure that individual claims on health care are well aligned with an overall concept of just health care? Drawing upon a distinction between individual and aggregated needs, I argue that even though we assume the legitimacy of macro-level guidelines, this legitimacy is not directly transferable to decisions at the micro-level simply by adherence to the guidelines' recommendations. Further, I argue that individual claims are subject to the formal principle of equality and the demands of vertical and horizontal equity in a way that gives context- and patient-related equity concerns precedence over equity concerns captured at the macro-level. I conclude that if we aim to achieve just health care, we need to develop a complementary framework for legitimising individual judgment of patients' claims on health care resources. Moreover, I suggest the basic structure of such a framework.

  12. A large-scale, long-term study of scale drift: The micro view and the macro view

    NASA Astrophysics Data System (ADS)

    He, W.; Li, S.; Kingsbury, G. G.

    2016-11-01

    The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level analysing change in item parameter estimates that have occurred since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
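
    The Rasch scales whose drift is examined here model each response with a single difficulty parameter per item; drift appears as a change in that calibrated difficulty over the years. A minimal sketch of the model (generic Rasch; the numbers are hypothetical, not from the study):

```python
import math

# The Rasch model: probability that a person with ability theta answers an
# item of difficulty b correctly (both on the same logit scale). Scale
# drift shows up as a change in the calibrated b of the same item across
# calibration years. Numbers below are hypothetical.
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# a person exactly matched to the item (theta == b) succeeds half the time
p_matched = rasch_p(0.5, 0.5)
# if the item drifts 0.3 logits harder, the same person's chances drop
p_drifted = rasch_p(0.5, 0.8)
print(p_matched, round(p_drifted, 3))
```

    Comparing original and re-estimated difficulties item by item is the "micro view" of the abstract; aggregating the resulting probability shifts into test scores is the "macro view."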

  13. Work management plan for data systems and analysis directorate

    NASA Technical Reports Server (NTRS)

    Nichols, L. R.

    1979-01-01

    A contract with the Data Systems and Analysis Directorate contains a specified level of resources related to a specific set of work in support of three divisions within the directorate: the Institutional Data Systems Division, the Ground Data Systems Division, and the Mission Planning and Analysis Division. The statement of work defines, at a functional requirements level, the type of support to be provided to the three divisions. The contract provides for further technical direction to the contractor through the issuance of job orders. The job order is the prime method of further defining the work to be done, allocating a portion of the total resources in the contract to the defined tasks, and further delegating technical responsibility.

  14. Gaussian basis sets for use in correlated molecular calculations. XI. Pseudopotential-based and all-electron relativistic basis sets for alkali metal (K-Fr) and alkaline earth (Ca-Ra) elements

    NASA Astrophysics Data System (ADS)

    Hill, J. Grant; Peterson, Kirk A.

    2017-12-01

    New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.

  15. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
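
    The shift-based separation can be illustrated with a minimal 1D sketch (hypothetical, not the authors' implementation): shifting the level-set function down by ε removes the zero crossings of structures thinner than ε, and re-expanding what survives splits the interface into resolvable and sub-grid parts, in the manner of a morphological opening.

```python
import numpy as np

# Hypothetical 1D sketch of scale separation by level-set shifting
# (phi > 0 marks the liquid phase). Shifting phi down by eps removes the
# zero crossings of structures thinner than ~eps; re-expanding the
# surviving region by eps recovers the resolvable structures, and the
# remainder is flagged as the non-resolvable (sub-grid) part.
def separate_scales(phi, x, eps):
    keep = (phi - eps) > 0                 # survives the downward shift
    resolved = np.zeros_like(phi, dtype=bool)
    if keep.any():
        # distance from every point to the nearest surviving point
        dist = np.min(np.abs(x[:, None] - x[keep][None, :]), axis=1)
        resolved = (phi > 0) & (dist <= eps)
    unresolved = (phi > 0) & ~resolved
    return resolved, unresolved

x = np.linspace(0.0, 1.0, 501)
# two "droplets": a wide one (half-width 0.15) and a thin one (half-width 0.01)
phi = np.maximum(0.15 - np.abs(x - 0.3), 0.01 - np.abs(x - 0.8))
resolved, unresolved = separate_scales(phi, x, eps=0.03)
# the wide droplet survives as resolvable; the thin droplet is sub-grid
```

    In the paper's setting, the sub-grid part would then be handed to a mixing or Lagrangian-particle model so that its mass is not simply lost.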

  16. Does user-centred design affect the efficiency, usability and safety of CPOE order sets?

    PubMed

    Chan, Julie; Shojania, Kaveh G; Easty, Anthony C; Etchells, Edward E

    2011-05-01

    Application of user-centred design principles to computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. Objective: We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design), and existing pre-printed paper order sets (Paper). Participants: 27 staff physicians, residents and medical students. Setting: Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Methods: Participants completed four simulated order set tasks with the three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). The order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Main outcome measures: Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). Results: The 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared with the User Centred Design format), and CPOE Test format 637 s (p<0.0001 compared with the User Centred Design format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in the number of errors between formats. Conclusions: The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation.

  17. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschamps, T; Schwartz, P; Trebotich, D

    In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood flow inside the extracted surface without losing any complicated details and without building additional grids.

  18. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth, in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and parametric uncertainty. Finally, a set of mixed H2/mu compensators is designed, optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  19. Comparison of display enhancement with intelligent decision-aiding

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex; Markert, Wendy J.; Kossack, Merrick

    1992-01-01

    Currently, two main approaches exist for improving the human-machine interface component of a system in order to improve overall system performance: display enhancement and intelligent decision aiding. Each of these two approaches has its own set of advantages and disadvantages, and each introduces its own set of additional performance problems. These characteristics should help identify which types of problem situations and domains are better aided by which type of strategy. The characteristic issues of these two decision-aiding strategies are described. Then differences in expert and novice decision making are described in order to help determine whether a particular strategy may be better for a particular type of user. Finally, research is outlined to compare and contrast the two technologies, as well as to examine the interaction effects introduced by different skill levels and different methods for training operators.

  20. Thermal Noise Limit in Frequency Stabilization of Lasers with Rigid Cavities

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Kemery, Amy; Camp, Jordan

    2005-01-01

    We evaluated thermal noise (Brownian motion) in a rigid reference cavity used for frequency stabilization of lasers, based on the mechanical loss of the cavity materials and a numerical analysis of the mirror-spacer mechanics with direct application of the fluctuation-dissipation theorem. This noise sets a fundamental limit for the frequency stability achieved with a rigid frequency-reference cavity, of order 1 Hz/rtHz at 10 mHz at room temperature. This level coincides with the best stabilization results reported to date.
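
    The connection between cavity length noise and frequency noise follows from the resonance condition: a fractional length change maps one-to-one onto a fractional frequency change, dν/ν = dL/L. The numbers below are hypothetical (a 1064 nm laser on a 20 cm spacer), chosen only to show how a sub-femtometre length noise reproduces the ~1 Hz/rtHz order of magnitude quoted in the abstract:

```python
# Fractional length changes of a rigid cavity map one-to-one onto
# fractional frequency changes of the stabilized laser: dnu/nu = dL/L.
# Hypothetical numbers: 1064 nm laser, 20 cm spacer.
C = 299_792_458.0  # speed of light, m/s

def frequency_noise(length_noise, cavity_length, wavelength):
    """Convert cavity length noise (m/rtHz) to frequency noise (Hz/rtHz)."""
    nu = C / wavelength              # optical carrier frequency, Hz
    return nu * length_noise / cavity_length

# a thermal length noise of ~7e-16 m/rtHz gives roughly 1 Hz/rtHz
noise = frequency_noise(7e-16, 0.20, 1064e-9)
print(round(noise, 2))
```

    The scaling shows why longer spacers and lower-loss materials directly lower the thermal-noise-limited frequency instability.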

  1. Feasibility Assessment of a Structurally Closable, Automatable Technique, for the Deglycerolization of Frozen RBC (Human). Phase 2

    DTIC Science & Technology

    1996-04-01

    to biocompatible levels. associated with collection, testing, inventory and trans-... Thawed cells are processed in order to reduce both the glycerol... tested. The instrument is capable of prediluting and washing up to two units of blood per tube set. Our results show that when the performance of our... concentration of glycerol has to be reduced to biocompatible levels (from 1.57 M to less than 0.1 M). Freezing and thawing red blood cells leads to

  2. Identifying U.S. Marine Corps Recruit Characteristics That Correspond to Success in Specific Occupational Fields

    DTIC Science & Technology

    2016-06-01

    Reserve Affairs MAGTF Marine Air Ground Task Force MC Mechanical Comprehension MCMAP Marine Corps Martial Arts Program MCO Marine Corps Order MCRC... Martial Arts Program (MCMAP) belt level. Setting the MCMAP belt level to "NOT TRAINED" is required to maintain the MCMAP records for future analysis...BELT MARTIAL ARTS INSTRUCTOR 60 MMG BROWN BELT MARTIAL ARTS INSTRUCTOR 70 MMJ BLACK BELT, 1ST DEGREE MARTIAL ARTS INSTRUCTOR 80 MMK BLACK BELT, 1ST

  3. On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1993-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.

  4. Standardized network order sets in rural Ontario: a follow-up report on successes and sustainability.

    PubMed

    Rawn, Andrea; Wilson, Katrina

    2011-01-01

    Unifying, implementing and sustaining a large order set project requires strategic placement of key organizational professionals to provide ongoing user education, communication and support. This article outlines the successful strategies implemented by the Grey Bruce Health Network's Evidence-Based Care Program to reduce length of stay, increase patient satisfaction and increase the use of best practices, resulting in quality outcomes, safer practice and better allocation of resources, by using standardized order sets within a network of 11 hospital sites. Audits conducted in 2007 and again in 2008 revealed a reduction in length of stay of 0.96 in-patient days when order sets were used on admission; readmission for the same or a related diagnosis within one month decreased from 5.5% without order sets to 3.5% with order sets.

  5. 76 FR 60572 - Self-Regulatory Organizations; The Options Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-29

    ..., 2008), 73 FR 42646 (July 22, 2008) (SR-OCC-2007-20)); give itself time to prepare updated comparative...\\ Proposed Interpretation and Policy .01 to OCC Rule 1001. The new formula is designed to more directly take...\\ \\10\\ Note the comparative data described in this paragraph was obtained using confidence levels set at...

  6. Understanding Primary Science: Ideas, Concepts and Explanations. Second Edition

    ERIC Educational Resources Information Center

    Wenham, Martin

    2005-01-01

    This book has been written to help teachers develop the background knowledge and understanding needed to teach science effectively at primary level. It is intended principally as a resource in attempting to set out facts, develop concepts, and explain theories which primary teachers may find it useful to know and understand in order to plan…

  7. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... identified in its State's assessment policy. (c) Post-test. A local eligible provider must— (1) Administer a post-test to measure a student's educational functioning level after a set time period or number of instructional hours; (2) Administer the post-test to students at a uniform time, according to its State's...

  8. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... identified in its State's assessment policy. (c) Post-test. A local eligible provider must— (1) Administer a post-test to measure a student's educational functioning level after a set time period or number of instructional hours; (2) Administer the post-test to students at a uniform time, according to its State's...

  9. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... identified in its State's assessment policy. (c) Post-test. A local eligible provider must— (1) Administer a post-test to measure a student's educational functioning level after a set time period or number of instructional hours; (2) Administer the post-test to students at a uniform time, according to its State's...

  10. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... identified in its State's assessment policy. (c) Post-test. A local eligible provider must— (1) Administer a post-test to measure a student's educational functioning level after a set time period or number of instructional hours; (2) Administer the post-test to students at a uniform time, according to its State's...

  11. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... identified in its State's assessment policy. (c) Post-test. A local eligible provider must— (1) Administer a post-test to measure a student's educational functioning level after a set time period or number of instructional hours; (2) Administer the post-test to students at a uniform time, according to its State's...

  12. Examining Education Leadership Communication Practices around Basic and Advanced Skill Sets: A Multiple Case Study

    ERIC Educational Resources Information Center

    Minger, Leslie

    2017-01-01

    The purpose of this multiple case study was to explore and describe the leadership communication practices of school principals in Southern California schools with demonstrated high levels of academic performance in order to identify practices that might be replicated in other schools. Communication practices were studied in relation to two…

  13. Multidisciplinary Case Study on Higher Education: An Innovative Experience in the Business Management Degree

    ERIC Educational Resources Information Center

    Martínez-Cañas, Ricardo; del Pozo-Rubio, Raúl; Mondéjar-Jiménez, José; Ruiz-Palomino, Pablo

    2012-01-01

    Higher education is constantly changing and looking for innovative educational solutions in order to increase the level of the student's knowledge and skills. As an important part of this set of educational policies, a new process is emerging for the ideation, planning and implementation of multidisciplinary case studies for students with the aim…

  14. The Relationship of High School Students in Inclusive Settings: Emotional Health and Academic Achievement

    ERIC Educational Resources Information Center

    Wilson, Carolyn H.; Stith-Russell, Lafawndra S.

    2010-01-01

    Academic success has become increasingly important in determining future quality of life. Many educational programs and institutions at various levels stress the need for students to score well on standardized tests and other methods of evaluation, in order to demonstrate their knowledge of various concepts and skills. The relationship between…

  15. Distribution of Penicillin G Residues in Culled Dairy Cow Muscles: Implications for Residue Monitoring

    USDA-ARS?s Scientific Manuscript database

    The U.S. Food and Drug Administration sets tolerances for veterinary drug residues in muscle, but does not specify which type of muscle should be analyzed. In order to determine if antibiotic residue levels are dependent on muscle type, 7 culled dairy cows were dosed with Penicillin G (Pen G) from ...

  16. Producing Distance Learning Materials: Cash and Other Constraints.

    ERIC Educational Resources Information Center

    Whitehead, Don J.

    In order to develop a financial plan for and identify constraints on the production of distance learning materials, a total human resources development (HRD) plan must be produced, and endorsed by the highest level of management. The HRD plan sets out the human resources needed to secure the organization's future in terms of people and their…

  17. Cognitive workload reduction in hospital information systems : Decision support for order set optimization.

    PubMed

    Gartner, Daniel; Zhang, Yiye; Padman, Rema

    2018-06-01

    Order sets, a critical component of hospital information systems, are expected to substantially reduce physicians' physical and cognitive workload and improve patient safety. Order sets represent time interval-clustered order items, such as medications prescribed at hospital admission, that are administered to patients during their hospital stay. In this paper, we develop a mathematical programming model and an exact and a heuristic solution procedure with the objective of minimizing physicians' cognitive workload associated with prescribing order sets. Furthermore, we provide structural insights into the problem which lead us to a valid lower bound on the order set size. In a case study using order data on asthma patients with moderate complexity from a major pediatric hospital, we compare the hospital's current solution with the exact and heuristic solutions on a variety of performance metrics. Our computational results confirm our lower bound and reveal that using a time interval decomposition approach substantially reduces computation times for the mathematical program, as does a K-means clustering based decomposition approach which, however, does not guarantee optimality because it violates the lower bound. Comparing the mathematical program with the hospital's current order set configuration indicates that cognitive workload can be reduced by about 20.2% when 1 to 5 order sets are allowed. The comparison of the K-means based decomposition with the hospital's current configuration reveals a cognitive workload reduction of about 19.5%, again allowing 1 to 5 order sets. Finally, we provide a decision support system to help practitioners analyze the current order set configuration, the results of the mathematical program and the heuristic approach.
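A K-means-based decomposition of this kind groups order items by their usage patterns across patients before candidate order sets are formed. A minimal sketch, assuming binary item-usage vectors (one row per patient) and a toy Lloyd's-algorithm implementation; the data and function names are hypothetical, not the paper's model:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny K-means (Lloyd's algorithm) on lists of floats."""
    random.seed(seed)
    centers = [p[:] for p in random.sample(points, k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its members
        for c, members in enumerate(clusters):
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, clusters

# Hypothetical usage vectors: rows = patients, columns = order items (1 = prescribed)
usage = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
centers, clusters = kmeans([[float(v) for v in row] for row in usage], k=2)
```

Each resulting cluster suggests a candidate order set; as the abstract notes, such a heuristic decomposition speeds up computation but does not guarantee optimality.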

  18. An ab initio benchmark study of the H + CO --> HCO reaction

    NASA Technical Reports Server (NTRS)

    Woon, D. E.

    1996-01-01

    The H + CO --> HCO reaction has been characterized with correlation consistent basis sets at five levels of theory in order to benchmark the sensitivities of the barrier height and reaction ergicity to the one-electron and n-electron expansions of the electronic wave function. Single and multireference methods are compared and contrasted. The coupled cluster method RCCSD(T) was found to be in very good agreement with Davidson-corrected internally-contracted multireference configuration interaction (MRCI+Q). Second-order Moller-Plesset perturbation theory (MP2) was also employed. The estimated complete basis set (CBS) limits for the barrier height (in kcal/mol) for the five methods, including harmonic zero-point energy corrections, are MP2, 4.66; RCCSD, 4.78; RCCSD(T), 4.15; MRCI, 5.10; and MRCI+Q, 4.07. Similarly, the estimated CBS limits for the ergicity of the reaction are: MP2, -17.99; RCCSD, -13.34; RCCSD(T), -13.79; MRCI, -11.46; and MRCI+Q, -13.70. Additional basis set explorations for the RCCSD(T) method demonstrate that aug-cc-pVTZ sets, even with some functions removed, are sufficient to reproduce the CBS limits to within 0.1-0.3 kcal/mol.
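The CBS limits quoted above are obtained by extrapolating energies computed with systematically larger correlation-consistent basis sets. The abstract does not state which extrapolation formula was used; one common choice is a three-point geometric (exponential) fit in the basis-set cardinal number, sketched here with synthetic numbers:

```python
def cbs_extrapolate(e2, e3, e4):
    """Three-point geometric (exponential) extrapolation to the complete
    basis set limit, assuming E_n = E_cbs + A * r**n for three consecutive
    cardinal numbers n (e.g. cc-pVDZ/TZ/QZ energies)."""
    d1, d2 = e3 - e2, e4 - e3
    return e4 + d2 * d2 / (d1 - d2)

# Synthetic energies generated from E_cbs = -113.30, A = 0.5, r = 0.4,
# purely to illustrate that the formula recovers the assumed limit
vals = [-113.30 + 0.5 * 0.4 ** n for n in (2, 3, 4)]
e_cbs = cbs_extrapolate(*vals)
```

The closed form follows from summing the remaining geometric tail of the energy increments; on exactly geometric data it reproduces the assumed limit.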

  19. Promoting Evidence-Based Practice at a Primary Stroke Center: A Nurse Education Strategy.

    PubMed

    Case, Christina Anne

    Promoting a culture of evidence-based practice within a health care facility is a priority for health care leaders and nursing professionals; however, tangible methods to promote translation of evidence to bedside practice are lacking. The purpose of this quality improvement project was to design and implement a nursing education intervention demonstrating to the bedside nurse how current evidence-based guidelines are used when creating standardized stroke order sets at a primary stroke center, thereby increasing confidence in the use of standardized order sets at the point of care and supporting evidence-based culture within the health care facility. This educational intervention took place at a 286-bed community hospital certified by the Joint Commission as a primary stroke center. Bedside registered nurse (RN) staff from 4 units received a poster presentation linking the American Heart Association's and American Stroke Association's current evidence-based clinical practice guidelines to standardized stroke order sets and bedside nursing care. The 90-second oral poster presentation was delivered by a graduate nursing student during preshift huddle. The poster and supplemental materials remained in the unit break room for 1 week for RN viewing. After the pilot unit, a pdf of the poster was also delivered via an e-mail attachment to all RNs on the participating unit. A preintervention online survey measured nurses' self-perceived likelihood of performing an ordered intervention based on whether they were confident the order was evidence based. The preintervention survey also measured nurses' self-reported confidence in their ability to explain how the standardized order sets are derived from current evidence. The postintervention online survey again measured nurses' self-reported confidence level. 
However, the postintervention survey was modified midway through data collection, allowing for the final 20 survey respondents to retrospectively rate their confidence before and after the educational intervention. This modification ensured that the responses for each individual participant in this group were matched. Registered nurses reported a significant increase in perceived confidence in ability to explain how standardized stroke order sets reflect current evidence after the intervention (n = 20, P < .001). This sample was matched for each individual respondent. No significant change was shown in unmatched group mean self-reported confidence ratings overall after the intervention or separately by unit for the progressive care unit, critical care unit, or intensive care unit (n = 89 preintervention, n = 43 postintervention). However, the emergency department demonstrated a significant increase in group mean perceived confidence scores (n = 20 preintervention, n = 11 postintervention, P = .020). Registered nurses reported a significantly higher self-perceived likelihood of performing an ordered nursing intervention when they were confident that the order was evidence based compared with if they were unsure the order was evidence based (n = 88, P < .001). This nurse education strategy increased RNs' confidence in ability to explain the path from evidence to bedside nursing care by demonstrating how evidence-based clinical practice guidelines provide current evidence used to create standardized order sets. Although further evaluation of the intervention's effectiveness is needed, this educational intervention has the potential for generalization to different types of standardized order sets to increase nurse confidence in utilization of evidence-based practice.

  20. How important are autonomy and work setting to nurse practitioners' job satisfaction?

    PubMed

    Athey, Erin K; Leslie, Mayri Sagady; Briggs, Linda A; Park, Jeongyoung; Falk, Nancy L; Pericak, Arlene; El-Banna, Majeda M; Greene, Jessica

    2016-06-01

    Nurse practitioners (NPs) have reported aspects of their jobs that they are more and less satisfied with. However, few studies have examined the factors that predict overall job satisfaction. This study uses a large national sample to examine the extent to which autonomy and work setting predict job satisfaction. The 2012 National Sample Survey of Nurse Practitioners (n = 8311) was used to examine bivariate and multivariate relationships between work setting and three autonomy variables (independent billing practices, having one's NP skills fully utilized, and relationship with physician), and job satisfaction. NPs working in primary care reported the highest levels of autonomy across all three autonomy measures, while those working in hospital surgical settings reported the lowest levels. Autonomy, specifically feeling one's NP skills were fully utilized, was the factor most predictive of satisfaction. In multivariate analyses, those who strongly agreed their skills were being fully utilized had satisfaction scores almost one point higher than those who strongly disagreed. Work setting was only marginally related to job satisfaction. In order to attract and retain NPs in the future, healthcare organizations should ensure that NPs' skills are being fully utilized. ©2015 American Association of Nurse Practitioners.

  1. Comments on X. Yin, A. Wen, Y. Chen, and T. Wang, `Studies in an optical millimeter-wave generation scheme via two parallel dual-parallel Mach-Zehnder modulators', Journal of Modern Optics, 58(8), 2011, pp. 665-673

    NASA Astrophysics Data System (ADS)

    Hasan, Mehedi; Maldonado-Basilio, Ramón; Hall, Trevor J.

    2015-04-01

    Yin et al. have described an innovative filter-less optical millimeter-wave generation scheme for octotupling of a 10 GHz RF oscillator, or sedecimtupling of a 5 GHz RF oscillator using two parallel dual-parallel Mach-Zehnder modulators (DP-MZMs). The great merit of their design is the suppression of all harmonics except those of order ? (octotupling) or all harmonics except those of order ? (sedecimtupling), where ? is an integer. A demerit of their scheme is the requirement to set a precise RF signal modulation index in order to suppress the zeroth order optical carrier. The purpose of this comment is to show that, in the case of the octotupling function, all harmonics may be suppressed except those of order ?, where ? is an odd integer, by the simple addition of an optical π phase shift between the two DP-MZMs and an adjustment of the RF drive phases. Since the carrier is suppressed in the modified architecture, the octotupling circuit is thereby released of the strict requirement to set the drive level to a precise value without any significant increase in circuit complexity.

  2. Formal verification of a microcoded VIPER microprocessor using HOL

    NASA Technical Reports Server (NTRS)

    Levitt, Karl; Arora, Tejkumar; Leung, Tony; Kalvala, Sara; Schubert, E. Thomas; Windley, Philip; Heckman, Mark; Cohen, Gerald C.

    1993-01-01

    The Royal Signals and Radar Establishment (RSRE) and members of the Hardware Verification Group at Cambridge University conducted a joint effort to prove the correspondence between the electronic block model and the top level specification of Viper. Unfortunately, the proof became too complex and unmanageable within the given time and funding constraints, and is thus incomplete as of the date of this report. This report describes an independent attempt to use the HOL (Cambridge Higher Order Logic) mechanical verifier to verify Viper. Deriving from recent results in hardware verification research at UC Davis, the approach has been to redesign the electronic block model to make it microcoded and to structure the proof in a series of decreasingly abstract interpreter levels, the lowest being the electronic block level. The highest level is the RSRE Viper instruction set. Owing to the new approach and some results on the proof of generic interpreters as applied to simple microprocessors, this attempt required an effort approximately an order of magnitude less than the previous one.

  3. Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.

    2015-01-01

    In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.

  4. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
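The core step in such schemes advances the level-set function with the velocity derived from the shape derivative. As a minimal illustration, not the paper's reservoir-scale implementation, a 1-D explicit step of the level-set equation φ_t + V|∇φ| = 0 might look like:

```python
def levelset_step(phi, V, dx, dt):
    """One explicit time step of phi_t + V * |grad phi| = 0 in 1-D,
    using central differences for the gradient magnitude (sketch only;
    a production scheme would use upwind or WENO differencing)."""
    n = len(phi)
    new = phi[:]  # boundary values held fixed
    for i in range(1, n - 1):
        grad = abs((phi[i + 1] - phi[i - 1]) / (2 * dx))
        new[i] = phi[i] - dt * V[i] * grad
    return new

# Signed-distance-like initial profile; a uniform velocity moves the interface
phi0 = [abs(x) - 2.0 for x in range(-5, 6)]
phi1 = levelset_step(phi0, [0.5] * 11, dx=1.0, dt=0.1)
```

In the paper's setting, V would come from the GB or LM shape derivative at each iteration, chosen so that the updated shape decreases the misfit functional.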

  5. NCAR Earth Observing Laboratory's Data Tracking System

    NASA Astrophysics Data System (ADS)

    Cully, L. E.; Williams, S. F.

    2014-12-01

    The NCAR Earth Observing Laboratory (EOL) maintains an extensive collection of complex, multi-disciplinary datasets from national and international, current and historical projects accessible through field project web pages (https://www.eol.ucar.edu/all-field-projects-and-deployments). Data orders are processed through the EOL Metadata Database and Cyberinfrastructure (EMDAC) system. Behind the scenes is the institutionally created EOL Computing, Data, and Software/Data Management Group (CDS/DMG) Data Tracking System (DTS) tool. The DTS is used to track the complete life cycle (from ingest to long term stewardship) of the data, metadata, and provenance for hundreds of projects and thousands of data sets. The DTS is an internal EOL tool consisting of three subsystems: Data Loading Notes (DLN), Processing Inventory Tool (IVEN), and Project Metrics (STATS). The DLN is used to track and maintain every dataset that comes to the CDS/DMG. The DLN captures general information such as title, physical locations, responsible parties, high level issues, and correspondence. When the CDS/DMG processes a data set, IVEN is used to track the processing status while collecting sufficient information to ensure reproducibility. This includes detailed "How To" documentation, processing software (with direct links to the EOL Subversion software repository), and descriptions of issues and resolutions. The STATS subsystem generates current project metrics such as archive size, data set order counts, "Top 10" most ordered data sets, and general information on who has ordered these data. The DTS was developed over many years to meet the specific needs of the CDS/DMG, and it has been successfully used to coordinate field project data management efforts for the past 15 years. This paper will describe the EOL CDS/DMG Data Tracking System including its basic functionality, the provenance maintained within the system, lessons learned, potential improvements, and future developments.

  6. Estimating relative sea-level rise and submergence potential at a coastal wetland

    USGS Publications Warehouse

    Cahoon, Donald R.

    2015-01-01

    A tide gauge records a combined signal of the vertical change (positive or negative) in the level of both the sea and the land to which the gauge is affixed, that is, relative sea-level change, typically referred to as relative sea-level rise (RSLR). Complicating this situation, coastal wetlands exhibit dynamic surface elevation change (both positive and negative), as revealed by surface elevation table (SET) measurements, that is not recorded at tide gauges. Because the usefulness of RSLR is in the ability to tie the change in sea level to the local topography, it is important that RSLR be calculated at a wetland that reflects these local dynamic surface elevation changes in order to better estimate wetland submergence potential. A rationale is described for calculating wetland RSLR (RSLRwet) by subtracting the SET wetland elevation change from the tide gauge RSLR. The calculation is possible because the SET and tide gauge independently measure vertical land motion in different portions of the substrate. For 89 wetlands where RSLRwet was evaluated, wetland elevation change differed significantly from zero for 80% of them, indicating that RSLRwet at these wetlands differed from the local tide gauge RSLR. When compared to tide gauge RSLR, about 39% of wetlands experienced an elevation rate surplus and 58% an elevation rate deficit (i.e., sea level becoming lower and higher, respectively, relative to the wetland surface). These proportions were consistent across saltmarsh, mangrove, and freshwater wetland types. Comparison of wetland elevation change and RSLR is confounded by high levels of temporal and spatial variability, and would be improved by co-locating tide gauge and SET stations near each other and obtaining long-term records for both.
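The rationale described reduces to a simple subtraction of rates; a sketch, with hypothetical values in mm/yr:

```python
def wetland_rslr(tide_gauge_rslr_mm_yr, wetland_elev_change_mm_yr):
    """Wetland relative sea-level rise (RSLRwet, mm/yr): tide-gauge RSLR
    minus the SET-measured wetland surface elevation change. Positive
    means an elevation rate deficit (sea level gaining on the wetland
    surface); negative means an elevation rate surplus."""
    return tide_gauge_rslr_mm_yr - wetland_elev_change_mm_yr

# E.g. 3.0 mm/yr gauge RSLR against 4.5 mm/yr accretion-driven elevation gain
rslr_wet = wetland_rslr(3.0, 4.5)  # negative: elevation surplus
```

The sign convention here mirrors the abstract's surplus/deficit framing; the function name and example rates are illustrative only.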

  7. Evaluation and implementation of chemotherapy regimen validation in an electronic health record.

    PubMed

    Diaz, Amber H; Bubalo, Joseph S

    2014-12-01

    Computerized provider order entry of chemotherapy regimens is quickly becoming the standard for prescribing chemotherapy in both inpatient and ambulatory settings. One of the difficulties with implementation of chemotherapy regimen computerized provider order entry lies in verifying the accuracy and completeness of all regimens built in the system library. Our goal was to develop, implement, and evaluate a process for validating chemotherapy regimens in an electronic health record. We describe our experience developing and implementing a process for validating chemotherapy regimens in the setting of a standard, commercially available computerized provider order entry system. The pilot project focused on validating chemotherapy regimens in the adult inpatient oncology setting and adult ambulatory hematologic malignancy setting. A chemotherapy regimen validation process was defined as a result of the pilot project. Over a 27-week pilot period, 32 chemotherapy regimens were validated using the process we developed. Results of the study suggest that by validating chemotherapy regimens, the amount of time spent by pharmacists in daily chemotherapy review was decreased. In addition, the number of pharmacist modifications required to make regimens complete and accurate were decreased. Both physician and pharmacy disciplines showed improved satisfaction and confidence levels with chemotherapy regimens after implementation of the validation system. Chemotherapy regimen validation required a considerable amount of planning and time but resulted in increased pharmacist efficiency and improved provider confidence and satisfaction. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Implementation of an evidence-based order set to impact initial antibiotic time intervals in adult febrile neutropenia.

    PubMed

    Best, Janie T; Frith, Karen; Anderson, Faye; Rapp, Carla Gene; Rioux, Lisa; Ciccarello, Christina

    2011-11-01

    To evaluate the impact of the implementation of a standardized order set on the time interval in initiation of antibiotic therapy for adult patients with cancer and febrile neutropenia. Practice change. The oncology unit of an urban hospital in the south-eastern United States. Adult patients with cancer and febrile neutropenia admitted six months prior to (n = 30) or during the three months following (n = 23) implementation of the order set. Literature regarding febrile neutropenia, use of order sets, and change process was reviewed. In addition, a retrospective and concurrent chart review was conducted for adult patients admitted with febrile neutropenia. Time intervals were analyzed using SPSS® software, version 18. Initial antibiotic times, order-set use, and length of stay. An overall reduction in time intervals for initiation of antibiotic therapy was observed for presentation (t = 2.25; degrees of freedom [df] = 37; p = 0.031) and order (t = 2.67; df = 40.17; p = 0.012) to antibiotic administration, with an order-set usage of 31% in the inpatient unit and 71% in the emergency department. Findings in the presence of low order-set usage suggest that staff education and placement of the order-set antibiotics in unit-based medication dispensing machines helped reduce time intervals for initial antibiotic therapy. The use of an evidence-based approach to nursing care is essential to achieving the best outcomes for patients with febrile neutropenia. Incorporation of current evidence into an order set to guide clinical practice and comprehensive nurse, pharmacy, and physician education are needed for the successful implementation of evidence-based practice changes.

  9. Molecular Phylogeny of the Widely Distributed Marine Protists, Phaeodaria (Rhizaria, Cercozoa).

    PubMed

    Nakamura, Yasuhide; Imai, Ichiro; Yamaguchi, Atsushi; Tuji, Akihiro; Not, Fabrice; Suzuki, Noritoshi

    2015-07-01

    Phaeodarians are a group of widely distributed marine cercozoans. These plankton organisms can exhibit a large biomass in the environment and are supposed to play an important role in marine ecosystems and in material cycles in the ocean. Accurate knowledge of phaeodarian classification is thus necessary to better understand marine biology, however, phylogenetic information on Phaeodaria is limited. The present study analyzed 18S rDNA sequences encompassing all existing phaeodarian orders, to clarify their phylogenetic relationships and improve their taxonomic classification. The monophyly of Phaeodaria was confirmed and strongly supported by phylogenetic analysis with a larger data set than in previous studies. The phaeodarian clade contained 11 subclades which generally did not correspond to the families and orders of the current classification system. Two families (Challengeriidae and Aulosphaeridae) and two orders (Phaeogromida and Phaeocalpida) are possibly polyphyletic or paraphyletic, and consequently the classification needs to be revised at both the family and order levels by integrative taxonomy approaches. Two morphological criteria, 1) the scleracoma type and 2) its surface structure, could be useful markers at the family level. Copyright © 2015 Elsevier GmbH. All rights reserved.

  10. Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers.

    PubMed

    Mougiakakou, Stavroula G; Valavanis, Ioannis K; Nikita, Alexandra; Nikita, Konstantina S

    2007-09-01

    The aim of the present study is to define an optimally performing computer-aided diagnosis (CAD) architecture for the classification of liver tissue from non-enhanced computed tomography (CT) images into normal liver (C1), hepatic cyst (C2), hemangioma (C3), and hepatocellular carcinoma (C4). To this end, various CAD architectures, based on texture features and ensembles of classifiers (ECs), are comparatively assessed. A number of regions of interest (ROIs) corresponding to C1-C4 were defined by experienced radiologists in non-enhanced liver CT images. For each ROI, five distinct sets of texture features were extracted using first order statistics, spatial gray level dependence matrix, gray level difference method, Laws' texture energy measures, and fractal dimension measurements. Two different ECs were constructed and compared. The first one consists of five multilayer perceptron neural networks (NNs), each using as input one of the computed texture feature sets or its reduced version after genetic algorithm-based feature selection. The second EC comprised five different primary classifiers, namely one multilayer perceptron NN, one probabilistic NN, and three k-nearest neighbor classifiers, each fed with the combination of the five texture feature sets or their reduced versions. The final decision of each EC was extracted by using appropriate voting schemes, while bootstrap re-sampling was utilized in order to estimate the generalization ability of the CAD architectures based on the available relatively small-sized data set. The best mean classification accuracy (84.96%) is achieved by the second EC using a fused feature set, and the weighted voting scheme. The fused feature set was obtained after appropriate feature selection applied to specific subsets of the original feature set.
The comparative assessment of the various CAD architectures shows that combining three types of classifiers with a voting scheme, fed with identical feature sets obtained after appropriate feature selection and fusion, may result in an accurate system able to assist differential diagnosis of focal liver lesions from non-enhanced CT images.
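The weighted voting used to combine the ensemble's primary classifiers can be sketched as follows; the labels and weights here are illustrative, not the study's fitted values:

```python
def weighted_vote(predictions, weights):
    """Combine class labels from several classifiers by weighted voting.
    predictions: one predicted label per classifier; weights: per-classifier
    weights (e.g. validation accuracies). Returns the label with the
    highest total weight."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Five hypothetical classifiers voting on a liver-tissue class C1..C4
label = weighted_vote(["C3", "C3", "C2", "C3", "C2"], [0.8, 0.7, 0.9, 0.6, 0.75])
```

Here C3 wins with a total weight of 2.1 against 1.65 for C2, even though the single most-trusted classifier voted C2, which is exactly the trade-off a weighted scheme makes relative to plain majority voting.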

  11. Patient Compliance With Electronic Patient Reported Outcomes Following Shoulder Arthroscopy.

    PubMed

    Makhni, Eric C; Higgins, John D; Hamamoto, Jason T; Cole, Brian J; Romeo, Anthony A; Verma, Nikhil N

    2017-11-01

    To determine the patient compliance in completing electronically administered patient-reported outcome (PRO) scores following shoulder arthroscopy, and to determine if dedicated research assistants improve patient compliance. Patients undergoing arthroscopic shoulder surgery from January 1, 2014, to December 31, 2014, were prospectively enrolled into an electronic data collection system with retrospective review of compliance data. A total of 143 patients were included in this study; 406 patients were excluded (for one or more of the following reasons: incomplete follow-up, inability to access the order sets, or inability to complete the order sets). All patients were assigned an order set of PROs through an electronic reporting system, with order sets to be completed prior to surgery, as well as 6 and 12 months postoperatively. Compliance rates for form completion were documented. Patients who underwent arthroscopic anterior and/or posterior stabilization were excluded. The average age of the patients was 53.1 years (range, 20 to 83). Compliance with form completion was highest preoperatively (76%), then dropped at 6 months postoperatively (57%) and 12 months postoperatively (45%). Use of research assistants improved compliance by approximately 20% at each time point. No differences were found according to patient gender and age group. Of those completing forms, a majority completed forms at home or elsewhere prior to returning to the office for the clinic visit. Electronic administration of PROs may decrease the amount of time required in the office setting for PRO completion by patients. This may be mutually beneficial to providers and patients. It is unclear if an electronic system improves patient compliance with voluntary PRO completion. Compliance rates at final follow-up remain a concern if data are to be used for establishing quality or outcome metrics. Level IV, case series.
Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  12. Estimating the intrinsic limit of the Feller-Peterson-Dixon composite approach when applied to adiabatic ionization potentials in atoms and small molecules

    NASA Astrophysics Data System (ADS)

    Feller, David

    2017-07-01

    Benchmark adiabatic ionization potentials were obtained with the Feller-Peterson-Dixon (FPD) theoretical method for a collection of 48 atoms and small molecules. In previous studies, the FPD method demonstrated an ability to predict atomization energies (heats of formation) and electron affinities well within a 95% confidence level of ±1 kcal/mol. Large 1-particle expansions involving correlation consistent basis sets (up to aug-cc-pV8Z in many cases and aug-cc-pV9Z for some atoms) were chosen for the valence CCSD(T) starting point calculations. Despite their cost, these large basis sets were chosen in order to help minimize the residual basis set truncation error and reduce dependence on approximate basis set limit extrapolation formulas. The complementary n-particle expansion included higher order CCSDT, CCSDTQ, or CCSDTQ5 (coupled cluster theory with iterative triple, quadruple, and quintuple excitations) corrections. For all of the chemical systems examined here, it was also possible to either perform explicit full configuration interaction (CI) calculations or to otherwise estimate the full CI limit. Additionally, corrections associated with core/valence correlation, scalar relativity, anharmonic zero point vibrational energies, non-adiabatic effects, and other minor factors were considered. The root mean square deviation with respect to experiment for the ionization potentials was 0.21 kcal/mol (0.009 eV). The corresponding level of agreement for molecular enthalpies of formation was 0.37 kcal/mol and for electron affinities 0.20 kcal/mol. Similar good agreement with experiment was found in the case of molecular structures and harmonic frequencies. Overall, the combination of energetic, structural, and vibrational data (655 comparisons) reflects the consistent ability of the FPD method to achieve close agreement with experiment for small molecules using the level of theory applied in this study.
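
The abstract refers to approximate basis set limit extrapolation formulas; a widely used generic example is the two-point n^-3 extrapolation sketched below. Both the formula choice and the energies are illustrative, not necessarily those employed in the FPD study.

```python
# Two-point extrapolation to the complete-basis-set (CBS) limit,
# assuming the correlation energy behaves as E(n) = E_CBS + A / n**3
# for consecutive cardinal numbers n of a correlation consistent basis.
# The energies below are hypothetical, for illustration only.

def cbs_two_point(e_small, e_large, n_small, n_large):
    """Eliminate A between E(n_small) and E(n_large) to solve for E_CBS."""
    w_small, w_large = n_small ** 3, n_large ** 3
    return (w_large * e_large - w_small * e_small) / (w_large - w_small)

# Hypothetical total energies (hartree) from quadruple- and quintuple-zeta sets.
e_cbs = cbs_two_point(-100.250, -100.260, 4, 5)
print(e_cbs)
```

The extrapolated value lies below both finite-basis energies, as expected for a variational-like monotone convergence toward the CBS limit.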

  13. A quantum causal discovery algorithm

    NASA Astrophysics Data System (ADS)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  14. Individualization through standardization: electronic orders for subcutaneous insulin in the hospital.

    PubMed

    Kennihan, Mary; Zohra, Tatheer; Devi, Radha; Srinivasan, Chitra; Diaz, Josefina; Howard, Bradley S; Braithwaite, Susan S

    2012-01-01

    The objective was to design electronic order sets that would promote safe, effective, and individualized order entry for subcutaneous insulin in the hospital, based on a review of best practices. Saint Francis Hospital in Evanston, Illinois, a community teaching hospital, was selected as the pilot site for 6 hospitals in the Health Care System to introduce an electronic medical record. Articles dealing with management of hospital hyperglycemia, medical order entry systems, and patient safety were reviewed selectively. In the published literature on institutional glycemic management programs and insulin order sets, features were identified that improve safety and effectiveness of subcutaneous insulin therapy. Subcutaneous electronic insulin order sets were created, designated in short: "patients eating", "patients not eating", and "patients receiving overnight enteral feedings." Together with an option for free text entry, menus of administration instructions were designed within each order set that were applicable to specific insulin orders and expressed in standardized language, such as "hold if tube feeds stop" or "do not withhold." Two design features are advocated for electronic order sets for subcutaneous insulin that will both standardize care and protect individualization. First, within the order sets, the glycemic management plan should be matched to the carbohydrate exposure of the patients, with juxtaposition of appropriate orders for both glucose monitoring and insulin. Second, in order to convey precautions of insulin use to pharmacy and nursing staff, the prescriber must be able to attach administration instructions to specific insulin orders.

  15. The transcriptomic fingerprint of glucoamylase over-expression in Aspergillus niger

    PubMed Central

    2012-01-01

    Background Filamentous fungi such as Aspergillus niger are well known for their exceptionally high capacity for secretion of proteins, organic acids, and secondary metabolites and they are therefore used in biotechnology as versatile microbial production platforms. However, system-wide insights into their metabolic and secretory capacities are sparse and rational strain improvement approaches are therefore limited. In order to gain a genome-wide view on the transcriptional regulation of the protein secretory pathway of A. niger, we investigated the transcriptome of A. niger when it was forced to overexpress the glaA gene (encoding glucoamylase, GlaA) and secrete GlaA at high levels. Results An A. niger wild-type strain and a GlaA over-expressing strain, containing multiple copies of the glaA gene, were cultivated under maltose-limited chemostat conditions (specific growth rate 0.1 h-1). Elevated glaA mRNA and extracellular GlaA levels in the over-expressing strain were accompanied by elevated transcript levels from 772 genes and lowered transcript levels from 815 genes when compared to the wild-type strain. Using GO term enrichment analysis, four higher-order categories were identified in the up-regulated gene set: i) endoplasmic reticulum (ER) membrane translocation, ii) protein glycosylation, iii) vesicle transport, and iv) ion homeostasis. Among these, about 130 genes had predicted functions for the passage of proteins through the ER and those genes included target genes of the HacA transcription factor that mediates the unfolded protein response (UPR), e.g. bipA, clxA, prpA, tigA and pdiA. In order to identify those genes that are important for high-level secretion of proteins by A. niger, we compared the transcriptome of the GlaA overexpression strain of A. niger with six other relevant transcriptomes of A. niger.
Overall, 40 genes were found to have either elevated (from 36 genes) or lowered (from 4 genes) transcript levels under all conditions that were examined, thus defining the core set of genes important for ensuring high protein traffic through the secretory pathway. Conclusion We have defined the A. niger genes that respond to elevated secretion of GlaA and, furthermore, we have defined a core set of genes that appear to be involved more generally in the intensified traffic of proteins through the secretory pathway of A. niger. The consistent up-regulation of a gene encoding the acetyl-coenzyme A transporter suggests a possible role for transient acetylation to ensure correct folding of secreted proteins. PMID:23237452

  16. Development and Implementation of a Learning Object Repository for French Teaching and Learning: Issues and Promises

    ERIC Educational Resources Information Center

    Caws, Catherine

    2008-01-01

    This paper discusses issues surrounding the development of a learning object repository (FLORE) for teaching and learning French at the postsecondary level. An evaluation based on qualitative and quantitative data was set up in order to better assess how second-language (L2) students in French perceived the integration of this new repository into…

  17. Environmental Education Evaluation at the School: An Example in Sao Nicolau Island, Cape Verde

    ERIC Educational Resources Information Center

    Graziani, Pietro; Cabral, Daniel; Santana, Nelson

    2013-01-01

    Monte Gordo Natural Park (MGNP) is part of the Cape Verde (CV) Protected Areas National Network. In order to create an effective Environmental Education (EE) curriculum, it is crucial to first identify the level of environmental knowledge of both teachers and students. In 2007 we implemented a set of four surveys to students and educators and…

  18. An Introduction to Biological Modeling Using Coin Flips to Predict the Outcome of a Diffusion Activity

    ERIC Educational Resources Information Center

    Butcher, Greg Q.; Rodriguez, Juan; Chirhart, Scott; Messina, Troy C.

    2016-01-01

    In order to increase students' awareness for and comfort with mathematical modeling of biological processes, and increase their understanding of diffusion, the following lab was developed for use in 100-level, majors/non-majors biology and neuroscience courses. The activity begins with generation of a data set that uses coin-flips to replicate…

  19. Graphite Girls in a Gigabyte World: Managing the World Wide Web in 700 Square Feet

    ERIC Educational Resources Information Center

    Ogletree, Tamra; Saurino, Penelope; Johnson, Christie

    2009-01-01

    Our action research project examined the on-task and off-task behaviors of university-level students using wireless laptops in face-to-face classes in order to establish rules of wireless laptop etiquette in classroom settings. Participants in the case study of three university classrooms included undergraduate, graduate, and doctoral students.…

  20. 78 FR 27271 - Self-Regulatory Organizations; BOX Options Exchange LLC; Order Granting Approval of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... symbols (SPYJ) than the corresponding standard options on SPY.\\8\\ In addition, the Exchange proposes to list Jumbo SPY Options for all expirations applicable to standard options on SPY,\\9\\ and proposes that strike prices for Jumbo SPY Options be set at the same level as standard options on SPY.\\10\\ Bids and...

  1. The island dynamics model on parallel quadtree grids

    NASA Astrophysics Data System (ADS)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.

  2. Scaling Relations for Intercalation Induced Damage in Electrodes

    DOE PAGES

    Chen, Chien-Fan; Barai, Pallab; Smith, Kandler; ...

    2016-04-02

    Mechanical degradation, owing to intercalation induced stress and microcrack formation, is a key contributor to the electrode performance decay in lithium-ion batteries (LIBs). The stress generation and formation of microcracks are caused by the solid state diffusion of lithium in the active particles. Here in this work, scaling relations are constructed for diffusion induced damage in intercalation electrodes based on an extensive set of numerical experiments with a particle-level description of microcrack formation under disparate operating and cycling conditions, such as temperature, particle size, C-rate, and drive cycle. The microcrack formation and evolution in active particles is simulated based on a stochastic methodology. A reduced order scaling law is constructed based on an extensive set of data from the numerical experiments. The scaling relations include combinatorial constructs of concentration gradient, cumulative strain energy, and microcrack formation. Lastly, the reduced order relations are further employed to study the influence of mechanical degradation on cell performance and validated against the high order model for the case of damage evolution during variable current vehicle drive cycle profiles.

  3. Comparing supervised learning techniques on the task of physical activity recognition.

    PubMed

    Dalton, A; OLaighin, G

    2013-01-01

    The objective of this study was to compare the performance of base-level and meta-level classifiers on the task of physical activity recognition. Five wireless kinematic sensors were attached to each subject (n = 25) while they completed a range of basic physical activities in a controlled laboratory setting. Subjects were then asked to carry out similar self-annotated physical activities in a random order and in an unsupervised environment. A combination of time-domain and frequency-domain features were extracted from the sensor data including the first four central moments, zero-crossing rate, average magnitude, sensor cross-correlation, sensor auto-correlation, spectral entropy and dominant frequency components. A reduced feature set was generated using a wrapper subset evaluation technique with a linear forward search and this feature set was employed for classifier comparison. The meta-level classifier AdaBoostM1 with C4.5 Graft as its base-level classifier achieved an overall accuracy of 95%. Equal sized datasets of subject independent data and subject dependent data were used to train this classifier and high recognition rates could be achieved without the need for user specific training. Furthermore, it was found that an accuracy of 88% could be achieved using data from the ankle and wrist sensors only.
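
Two of the time-domain features named above can be sketched as follows; the signal values are illustrative, and windowing and the remaining features (spectral entropy, dominant frequencies, cross-correlations) are omitted.

```python
# Sketch of two time-domain features used for activity recognition,
# computed on a 1-D kinematic sensor channel. Input values are toy data.

def central_moments(x):
    """First four central moments: mean, then population moments m2, m3, m4."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return mean, m2, m3, m4

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return crossings / (len(x) - 1)

signal = [0.5, -0.2, 0.1, 0.4, -0.3, -0.1, 0.2]
print(central_moments(signal))
print(zero_crossing_rate(signal))
```

In practice such features are computed per sliding window of sensor samples and concatenated across sensors before feature selection and classification.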

  4. Making Sense of Clinical Practice: Order Set Design Strategies in CPOE

    PubMed Central

    Novak, Laurie L.

    2007-01-01

    A case study was conducted during the customization phase of a commercial CPOE system at a multi-hospital, academic health system. The study focused on the development of order sets. Three distinct approaches to order set development were observed: Empirical, Local Consensus and Departmental. The three approaches are first described and then examined using the framework of sensemaking. Different approaches to sensemaking in the context of order set development reflect variations in sources of knowledge related to the standardization of care. PMID:18693900

  5. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

    Partial differential equations (PDEs) have become one of the important topics in mathematics and are widely used in various fields. They can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model at different levels of Gaussian noise in order to get the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is based on the highest PSNR value. Results obtained show that the fourth-order PDE model produced promising results as an image denoising tool compared with the median filter.
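
The PSNR metric used to evaluate the discretized model can be sketched as follows for a flattened 8-bit image; the pixel values are illustrative.

```python
# Peak Signal-to-Noise Ratio (PSNR) in dB between a reference image and
# a denoised image, assuming 8-bit pixels (MAX = 255). Images are given
# here as flat lists of toy pixel values.

import math

def psnr(original, denoised, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE); infinite for identical images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, denoised)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Higher PSNR means the denoised image is closer to the reference.
print(psnr([52, 60, 61, 58], [50, 61, 63, 59]))
```

Choosing the iteration count that maximizes PSNR, as the abstract describes, amounts to evaluating this quantity after each denoising iteration and stopping at its peak.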

  6. System considerations for detection and tracking of small targets using passive sensors

    NASA Astrophysics Data System (ADS)

    DeBell, David A.

    1991-08-01

    Passive sensors provide only a few discriminants to assist in threat assessment of small targets. Tracking of the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps establish good versus bad detections. Discussed are the requirements to be placed on the sensor system's accuracy, with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections. For low thresholds, this can mean a heavy track burden. Therefore, thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate will be shown quantitatively, relating to track accuracy and track association.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papajak, Ewa; Truhlar, Donald G.

    We present sets of convergent, partially augmented basis set levels corresponding to subsets of the augmented “aug-cc-pV(n+d)Z” basis sets of Dunning and co-workers. We show that for many molecular properties a basis set fully augmented with diffuse functions is computationally expensive and almost always unnecessary. On the other hand, unaugmented cc-pV(n+d)Z basis sets are insufficient for many properties that require diffuse functions. Therefore, we propose using intermediate basis sets. We developed an efficient strategy for partial augmentation, and in this article, we test it and validate it. Sequentially deleting diffuse basis functions from the “aug” basis sets yields the “jul”, “jun”, “may”, “apr”, etc. basis sets. Tests of these basis sets for Møller-Plesset second-order perturbation theory (MP2) show the advantages of using these partially augmented basis sets and allow us to recommend which basis sets offer the best accuracy for a given number of basis functions for calculations on large systems. Similar truncations in the diffuse space can be performed for the aug-cc-pVxZ, aug-cc-pCVxZ, etc. basis sets.

  8. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  9. Minutia Tensor Matrix: A New Strategy for Fingerprint Matching

    PubMed Central

    Fu, Xiang; Feng, Jufu

    2015-01-01

    Establishing correspondences between two minutia sets is a fundamental issue in fingerprint recognition. This paper proposes a new tensor matching strategy. First, the concept of the minutia tensor matrix (abbreviated as MTM) is proposed. It describes the first-order features and second-order features of a matching pair. In the MTM, the diagonal elements indicate similarities of minutia pairs and non-diagonal elements indicate pairwise compatibilities between minutia pairs. Correct minutia pairs are likely to establish both large similarities and large compatibilities, so they form a dense sub-block. Minutia matching is then formulated as recovering the dense sub-block in the MTM. Second, as fingerprint images show both local rigidity and global nonlinearity, we design two different kinds of MTMs: local MTM and global MTM. Meanwhile, a two-level matching algorithm is proposed. At the local matching level, the local MTM is constructed and a novel local similarity calculation strategy is proposed. It makes full use of local rigidity in fingerprints. At the global matching level, the global MTM is constructed to calculate similarities of entire minutia sets. It makes full use of global compatibility in fingerprints. The proposed method has stronger descriptive ability and better robustness to noise and nonlinearity. Experiments conducted on Fingerprint Verification Competition databases (FVC2002 and FVC2004) demonstrate its effectiveness and efficiency. PMID:25822489
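
Recovering a dense sub-block from a pairwise affinity matrix can be illustrated with a standard spectral-matching heuristic: power iteration toward the principal eigenvector. The matrix values and the acceptance threshold below are illustrative toy choices; the paper's actual MTM construction and recovery procedure are more elaborate.

```python
# Toy sketch: candidate minutia pairs that are mutually compatible form
# a dense sub-block of the affinity matrix; the principal eigenvector
# assigns them large entries, so thresholding it recovers the block.

def principal_eigenvector(m, iters=200):
    """Power iteration on a symmetric non-negative matrix (plain lists)."""
    n = len(m)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(w) or 1.0  # normalize by the largest component
        v = [x / norm for x in w]
    return v

# Candidate pairs 0-2 are mutually compatible (dense block); pair 3 is an
# outlier with only weak links to the rest.
mtm = [
    [1.0, 0.9, 0.8, 0.1],
    [0.9, 1.0, 0.9, 0.1],
    [0.8, 0.9, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
]
scores = principal_eigenvector(mtm)
accepted = [i for i, s in enumerate(scores) if s > 0.5]
print(accepted)  # indices of the pairs forming the dense sub-block
```

The diagonal entries play the role of pair similarities and the off-diagonal entries the pairwise compatibilities, matching the MTM structure described above.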

  10. Improving Comfort in Hot-Humid Climates with a Whole-House Dehumidifier, Windermere, Florida (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-11-01

    Maintaining comfort in a home can be challenging in hot-humid climates. At the common summer temperature set point of 75 degrees F, the perceived air temperature can vary by 11 degrees F because higher indoor humidity reduces comfort. Often the air conditioner (AC) thermostat set point is lower than the desirable cooling level to try to increase moisture removal so that the interior air is not humid or "muggy." However, this method is not always effective in maintaining indoor relative humidity (RH) or comfort. In order to quantify the performance of a combined whole-house dehumidifier (WHD) AC system, researchers from the U.S. Department of Energy's Building America team Consortium of Advanced Residential Buildings (CARB) monitored the operation of two Lennox AC systems coupled with a Honeywell DH150 TrueDRY whole-house dehumidifier for a six-month period. By using a WHD to control moisture levels (latent cooling) and optimizing a central AC to control temperature (sensible cooling), improvements in comfort can be achieved while reducing utility costs. Indoor comfort for this study was defined as maintaining indoor conditions at below 60% RH and a humidity ratio of 0.012 lbm/lbm while at common dry bulb set point temperatures of 74 to 80 degrees F. In addition to enhanced comfort, controlling moisture to these levels can reduce the risk of other potential issues such as mold growth, pests, and building component degradation. Because a standard AC must also reduce dry bulb air temperature in order to remove moisture, a WHD is typically needed to support these latent loads when sensible heat removal is not desired.

  11. Use of thyroid-stimulating hormone tests for identifying primary hypothyroidism in family medicine patients.

    PubMed

    Birk-Urovitz, Elizabeth; Elisabeth Del Giudice, M; Meaney, Christopher; Grewal, Karan

    2017-09-01

    To assess the use of thyroid-stimulating hormone (TSH) tests for identifying primary hypothyroidism in 2 academic family medicine settings. Descriptive study involving a retrospective electronic chart review of family medicine patients who underwent TSH testing. Two academic family practice sites: one site is within a tertiary hospital in Toronto, Ont, and the other is within a community hospital in Newmarket, Ont. A random sample of 205 adult family medicine patients who had 1 or more TSH tests for identifying potential primary hypothyroidism between July 1, 2009, and September 15, 2013. Exclusion criteria included a previous diagnosis of any thyroid condition or abnormality, as well as pregnancy or recent pregnancy within the year preceding the study period. The proportion of normal TSH test results and the proportion of TSH tests that did not conform to test-ordering guidelines. Of the 205 TSH test results, 200 (97.6%, 95% CI 94.4% to 99.2%) showed TSH levels within the normal range. All 5 patients with abnormal TSH test results had TSH levels above the upper reference limits. Nearly one-quarter (22.4%, 95% CI 16.9% to 28.8%) of tests did not conform to test-ordering guidelines. All TSH tests classified as not conforming to test-ordering guidelines showed TSH levels within normal limits. There was a significant difference (P < .001) between the proportions of nonconforming TSH tests at the tertiary site (14.3%, 95% CI 8.2% to 22.5%) and the community site (31.0%, 95% CI 22.1% to 41.0%). Preliminary analyses examining which variables might be associated with abnormal TSH levels showed that only muscle cramps or myalgia (P = .0286) and a history of an autoimmune disorder (P = .0623) met or approached statistical significance.
In this study, the proportion of normal TSH test results in the context of primary hypothyroidism case finding and screening was high, and the overall proportion of TSH tests that did not conform to test-ordering guidelines was relatively high as well. These results highlight a need for more consistent TSH test-ordering guidelines for primary hypothyroidism and perhaps some educational interventions to help curtail the overuse of TSH tests in the family medicine setting. Copyright© the College of Family Physicians of Canada.
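
The confidence intervals reported above are binomial proportion intervals. As a hedged sketch, the Wilson score interval (one standard choice; the study's exact interval method is not stated here) can be computed as follows.

```python
# Wilson score confidence interval for a binomial proportion, applied to
# the headline figure above: 200 of 205 TSH results in the normal range.
# This interval method is illustrative; the study may have used another.

import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval (z = 1.96) for a proportion successes/n."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_ci(200, 205)
print(round(lo, 3), round(hi, 3))
```

The resulting lower bound is close to the 94.4% reported above; small differences are expected between interval methods (Wilson, exact, normal approximation) at proportions near 1.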

  12. Getting hip to vitamin D: a hospitalist project for improving the assessment and treatment of vitamin D deficiency in elderly patients with hip fracture.

    PubMed

    Stephens, John R; Williams, Christine; Edwards, Eric; Ossman, Paul; DeWalt, Darren A

    2014-11-01

    Vitamin D deficiency is common in elderly patients with hip fracture, and clinical practice guidelines recommend screening this population. Our hospitalist group cares for all patients admitted with hip fracture, yet lacked a standardized approach to screening for and treating vitamin D deficiency in this population. To standardize and improve the assessment and treatment of vitamin D deficiency in elderly patients with hip fracture. Quality improvement implementation. Tertiary academic hospital. Adults age >50 years with hip fracture. We implemented a computerized hip fracture order set with preselected orders for 25-OH vitamin D level and initial supplementation with 1000 IU/day of vitamin D. We presented a review of the literature and performance data to our hospitalist group. Percentage of patients with acute hip fracture screened for vitamin D deficiency and percentage of deficient or insufficient patients discharged on recommended dose of vitamin D (50,000 IU/wk if level <20 ng/mL). The percentage of patients screened for vitamin D deficiency improved from 37.2% (n = 196) before implementation to 93.5% (n = 107) after (P < 0.001). The percentage of deficient or insufficient patients discharged on the recommended vitamin D dose improved from 40.9% to 68.0% (P = 0.008). The prevalence of vitamin D deficiency or insufficiency (25-OH vitamin D level <30 ng/mL) was 50.0%. Simple interventions, consisting of a change in computerized order set and presentation of evidence and data from group practice, led to significant improvement in the assessment and treatment of vitamin D deficiency in elderly patients with hip fracture. © 2014 Society of Hospital Medicine.

  13. Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets

    NASA Astrophysics Data System (ADS)

    Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.

    2011-10-01

    The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
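
The Finite Field calculations mentioned above extract static polarizabilities from field-dependent energies by numerical differentiation. A minimal central-difference sketch is given below; the energies and field strength are illustrative numbers, not results from the paper.

```python
# One diagonal component of the static dipole polarizability from three
# Finite Field energies, via the second-order central difference
#   alpha = -(E(+F) - 2*E(0) + E(-F)) / F**2   (atomic units).
# The energies below are hypothetical placeholders.

def finite_field_alpha(e_plus, e_zero, e_minus, field):
    """Central-difference polarizability from energies at fields +F, 0, -F."""
    return -(e_plus - 2.0 * e_zero + e_minus) / field ** 2

# Hypothetical energies (hartree) at field strength F = 0.005 a.u.
alpha = finite_field_alpha(-76.0000331, -76.0000000, -76.0000329, 0.005)
print(alpha)
```

In practice the field strength must balance truncation error (too large F) against numerical noise in the energies (too small F), which is why converged energies such as those from large basis set CCSDTQ(P) calculations are needed.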

  14. A level set method for determining critical curvatures for drainage and imbibition.

    PubMed

    Prodanović, Masa; Bryant, Steven L

    2006-12-15

    An accurate description of the mechanics of pore level displacement of immiscible fluids could significantly improve the predictions from pore network models of capillary pressure-saturation curves, interfacial areas and relative permeability in real porous media. If we assume quasi-static displacement, at constant pressure and surface tension, pore scale interfaces are modeled as constant mean curvature surfaces, which are not easy to calculate. Moreover, the extremely irregular geometry of natural porous media makes it difficult to evaluate surface curvature values and corresponding geometric configurations of two fluids. Finally, accounting for the topological changes of the interface, such as splitting or merging, is nontrivial. We apply the level set method for tracking and propagating interfaces in order to robustly handle topological changes and to obtain geometrically correct interfaces. We describe a simple but robust model for determining critical curvatures for throat drainage and pore imbibition. The model is set up for quasi-static displacements but it nevertheless captures both reversible and irreversible behavior (Haines jump, pore body imbibition). The pore scale grain boundary conditions are extracted from model porous media and from imaged geometries in real rocks. The method gives quantitative agreement with measurements and with other theories and computational approaches.
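    The central geometric quantity here is the mean curvature of a level set, κ = ∇·(∇φ/|∇φ|). The following is an illustrative finite-difference sketch of that formula (not the authors' implementation), verified against a signed-distance function of a circle, whose level-set curvature is 1/r:

```python
import numpy as np

def curvature(phi, dx):
    """Mean curvature kappa = div(grad(phi)/|grad(phi)|) of the level
    sets of phi, using second-order central differences."""
    g0, g1 = np.gradient(phi, dx)            # d(phi)/d(axis0), d(phi)/d(axis1)
    norm = np.sqrt(g0**2 + g1**2) + 1e-12    # avoid division by zero
    n0, n1 = g0 / norm, g1 / norm
    return np.gradient(n0, dx, axis=0) + np.gradient(n1, dx, axis=1)

# Signed-distance function of a unit circle: curvature at radius r is 1/r.
n, L = 201, 2.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 1.0
kappa = curvature(phi, dx)
```

    On the zero contour (r = 1) the computed curvature is close to 1, as expected for a circle of unit radius.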

  15. Simulated families: A test for different methods of family identification

    NASA Technical Reports Server (NTRS)

    Bendjoya, Philippe; Cellino, Alberto; Froeschle, Claude; Zappala, Vincenzo

    1992-01-01

    A set of families generated in fictitious impact events (leading to a wide range of 'structures' in the orbital element space) was superimposed on various backgrounds of different densities in order to investigate the efficiency and the limitations of the methods used by Zappala et al. (1990) and by Bendjoya et al. (1990) for identifying asteroid families. In addition, an evaluation of the expected number of interlopers at different significance levels and the possibility of improving the definition of the maximum significance level of a given family were analyzed.

  16. A Multi-Center Space Data System Prototype Based on CCSDS Standards

    NASA Technical Reports Server (NTRS)

    Rich, Thomas M.

    2016-01-01

    Deep space missions beyond Earth orbit will require new methods of data communications in order to compensate for increasing RF propagation delay. The Consultative Committee for Space Data Systems (CCSDS) standard protocols Spacecraft Monitor & Control (SM&C), Asynchronous Message Service (AMS), and Delay/Disruption Tolerant Networking (DTN) provide such a method. The maturity level of this protocol set is, however, insufficient for mission inclusion at this time. This prototype is intended to provide experience that will raise the Technology Readiness Level (TRL) of these protocols.

  17. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict the right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than on the second-order (covariance) structure.
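    The general emulation idea can be sketched in a toy setting: decompose a matrix of training-run outputs with an SVD, fit a first-order (here simply linear) model of each right singular vector as a function of the input parameter, and reassemble predictions at new parameter values. The toy model and all names below are ours, not the paper's:

```python
import numpy as np

# Toy "computer model": the output vector depends linearly on a scalar parameter.
rng = np.random.default_rng(0)
a, b = rng.normal(size=50), rng.normal(size=50)
thetas = np.linspace(0.0, 2.0, 8)                 # training design
Y = np.column_stack([a * t + b for t in thetas])  # 50 x 8 output matrix

# Decompose the training output and keep the leading r components.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
r = 2
V = Vt[:r].T                                      # 8 x r right singular vectors

# First-order model: each right singular vector as a linear function of theta.
coefs = [np.polyfit(thetas, V[:, j], 1) for j in range(r)]

def emulate(theta):
    """Predict the computer-model output at a new parameter value."""
    v = np.array([np.polyval(c, theta) for c in coefs])
    return U[:, :r] @ (s[:r] * v)

y_new = emulate(1.234)
```

    Because the toy model is exactly linear in the parameter and rank two, the emulator reproduces the true output at unseen parameter values; real mechanistic models would of course only be approximated.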

  18. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
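    For context, the baseline problem the multi-level method accelerates is the fixed-point equation π = πP for the stationary distribution of a row-stochastic matrix P. A minimal sketch of the plain iteration (not the multi-level algorithm itself, which recursively coarsens the chain):

```python
import numpy as np

def stationary(P, iters=1000):
    """Power iteration for the stationary distribution of a row-stochastic
    transition matrix P, i.e. the fixed point pi = pi @ P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
        pi /= pi.sum()   # guard against numerical drift
    return pi

# Small birth-death chain; detailed balance gives pi = (0.25, 0.5, 0.25).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary(P)
```

    Methods like Gauss-Seidel, SOR, and the multi-level scheme of the paper all target this same fixed point but differ, often by orders of magnitude, in how fast they reach it.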

  19. Impact of Implementing a Wiki to Develop Structured Electronic Order Sets on Physicians' Intention to Use Wiki-Based Order Sets

    PubMed Central

    Beaupré, Pierre; Bégin, Laura; Dupuis, Audrey; Côté, Mario; Légaré, France

    2016-01-01

    Background Wikis have the potential to promote best practices in health systems by sharing order sets with a broad community of stakeholders. However, little is known about the impact of using a wiki on clinicians' intention to use wiki-based order sets. Objective The aims of this study were: (1) to describe the use of a wiki to create structured order sets for a single emergency department; (2) to evaluate whether the use of this wiki changed emergency physicians' future intention to use wiki-based order sets; and (3) to understand the impact of using the wiki on the behavioral determinants for using wiki-based order sets. Methods This was a pre/post-intervention mixed-methods study conducted in one hospital in Lévis, Quebec. The intervention consisted of receiving access to, and being motivated by the department head to use, a wiki for 6 months to create electronic order sets designed to be used in a computerized physician order entry system. Before and after our intervention, we asked participants to complete a previously validated questionnaire based on the Theory of Planned Behavior. Our primary outcome was the intention to use wiki-based order sets in clinical practice. We also assessed participants' attitude, perceived behavioral control, and subjective norm regarding the use of wiki-based order sets. Paired pre- and post-intervention Likert scores were compared using Wilcoxon signed-rank tests. The post-questionnaire also included open-ended questions concerning participants' comments about the wiki, which were then classified into themes using an existing taxonomy. Results Twenty-eight emergency physicians were enrolled in the study (response rate: 100%). Physicians' mean intention to use a wiki-based order set was 5.42 (SD 1.04) on a 7-point Likert scale before the intervention, and increased to 5.81 (SD 1.25) after (P=.03). Participants' attitude towards using a wiki-based order set also increased, from 5.07 (SD 0.90) to 5.57 (SD 0.88) (P=.003). Perceived behavioral control and subjective norm did not change. Easier information sharing was the most frequently raised positive impact. In order of frequency, the three most important facilitators reported were: ease of use, support from colleagues, and promotion by the department head. Although participants did not mention any perceived negative impacts, they raised the following barriers, in order of frequency: poor organization of information, slow computers, and difficult wiki access. Conclusions Emergency physicians' intention and attitude to use wiki-based order sets increased after having access to, and being motivated to use, a wiki for 6 months. Future studies need to explore whether this increased intention will translate into sustained actual use and improved patient care. Certain barriers need to be addressed before implementing a wiki for use on a larger scale. PMID:27189046
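    The paired comparison reported above uses the Wilcoxon signed-rank test. The study's individual responses are not given in the abstract, so the paired 7-point Likert scores below are hypothetical, purely to show the shape of the analysis:

```python
from scipy.stats import wilcoxon

# Hypothetical paired 7-point Likert scores for illustration only --
# the study's individual responses are not reported in the abstract.
pre  = [5, 4, 6, 5, 3, 5, 4, 6, 5, 4, 5, 6, 4, 5]
post = [6, 5, 7, 6, 5, 6, 5, 7, 6, 5, 6, 7, 5, 6]

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(pre, post)
```

    With every synthetic respondent scoring higher after the intervention, the test is strongly significant; in the real data the shift was smaller but still significant (P=.03).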

  20. Depression and experience of vision loss in group of adults in rehabilitation setting: mixed-methods pilot study.

    PubMed

    Senra, Hugo; Vieira, Cristina R; Nicholls, Elizabeth G; Leal, Isabel

    2013-01-01

    There is a paucity of literature regarding the relationship between the experience of vision loss and depression. Therefore, the current pilot study aimed to explore whether significant differences existed in levels of depression between adults with different vision loss experiences. A group of adults aged between 20 and 65 yr old with irreversible vision loss in a rehabilitation setting was interviewed. Semistructured interviews were conducted in order to explore patients' experience of vision loss. The Center for Epidemiologic Studies Depression Scale (CES-D) was used to assess depressive levels; 39.5% (n = 15) of patients met CES-D criteria for depression. In addition, higher levels of depression (p < 0.05) were identified in patients whose interviews revealed greater self-awareness of impairment, inadequate social support, and longer rehabilitation stay. Current findings draw attention to variables such as self-awareness of impairment and perceived social support and suggest that depression following vision loss may be related to patients' emotional experiences of impairment and adjustment processes.

  1. Architected squirt-flow materials for energy dissipation

    NASA Astrophysics Data System (ADS)

    Cohen, Tal; Kurzeja, Patrick; Bertoldi, Katia

    2017-12-01

    In the present study we explore material architectures that lead to enhanced dissipation properties by taking advantage of squirt-flow, a local flow mechanism triggered by heterogeneities at the pore level. While squirt-flow is a known dominant source of dissipation and seismic attenuation in fluid-saturated geological materials, we study its untapped potential to be incorporated in highly deformable elastic materials with embedded fluid-filled cavities for future engineering applications. An analytical investigation, which isolates the squirt-flow mechanism from other potential dissipation mechanisms and considers an idealized setting, predicts high theoretical levels of dissipation achievable by squirt-flow and establishes a set of guidelines for optimal dissipation design. Particular architectures are then investigated via numerical simulations, showing that a careful design of the internal voids can increase dissipation levels by an order of magnitude compared with equivalent homogeneous void distributions. Therefore, we suggest squirt-flow as a promising mechanism to be incorporated in future architected materials to effectively and reversibly dissipate energy.

  2. Part-set cueing impairment & facilitation in semantic memory.

    PubMed

    Kelley, Matthew R; Parihar, Sushmeena A

    2018-01-19

    The present study explored the influence of part-set cues in semantic memory using tests of "free" recall, reconstruction of order, and serial recall. Nine distinct categories of information were used (e.g., Zodiac signs, Harry Potter books, Star Wars films, planets). The results showed part-set cueing impairment for all three "free" recall sets, whereas part-set cueing facilitation was evident for five of the six ordered sets. Generally, the present results parallel those often observed across episodic tasks, which could indicate that similar mechanisms contribute to part-set cueing effects in both episodic and semantic memory. A novel anchoring explanation of part-set cueing facilitation in order and spatial tasks is provided.

  3. Computer-Aided Argument Mapping in an EFL Setting: Does Technology Precede Traditional Paper and Pencil Approach in Developing Critical Thinking?

    ERIC Educational Resources Information Center

    Eftekhari, Maryam; Sotoudehnama, Elaheh; Marandi, S. Susan

    2016-01-01

    Developing higher-order critical thinking skills, one of the central objectives of education, has recently been facilitated via software packages. Whereas one such technology, computer-aided argument mapping, is reported to enhance levels of critical thinking (van Gelder 2001), its application as a pedagogical tool in English as a Foreign…

  4. The 3d Rydberg (3A2) electronic state observed by Herzberg and Shoosmith for methylene

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yukio; Schaefer, Henry F., III

    1997-06-01

    In 1959 and 1961 Herzberg and Shoosmith reported the vacuum ultraviolet spectrum of the triplet state of CH2. The present study focuses on a characterization of the upper state, the 3d Rydberg (3A2) state, observed at 1415 Å. The theoretical interpretation of these experiments is greatly complicated by the presence of a lower-lying 3A2 valence state with a very small equilibrium bond angle. Ab initio electronic structure methods involving self-consistent-field (SCF), configuration interaction with single and double excitations (CISD), complete active space (CAS) SCF, state-averaged (SA) CASSCF, coupled cluster with single and double excitations (CCSD), CCSD with perturbative triple excitations [CCSD(T)], CASSCF second-order (SO) CI, and SACASSCF-SOCI have been employed with six distinct basis sets. With the largest basis set, triple zeta plus triple polarization with two sets of higher angular momentum functions and three sets of diffuse functions TZ3P(2f,2d)+3diff, the CISD level of theory predicts the equilibrium geometry of the 3d Rydberg (3A2) state to be re=1.093 Å and θe=141.3 deg. With the same basis set the energy (Te value) of the 3d Rydberg state relative to the ground (X̃ 3B1) state has been determined to be 201.6 kcal mol-1 (70 500 cm-1) at the CCSD(T) level, 200.92 kcal mol-1 (70 270 cm-1) at the CASSCF-SOCI level, and 200.89 kcal mol-1 (70 260 cm-1) at the SACASSCF-SOCI level of theory. These predictions are in excellent agreement with the experimental T0 value of 201.95 kcal mol-1 (70 634 cm-1) reported by Herzberg.

  5. Application of level set method to optimal vibration control of plate structures

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Khajepour, A.; Esmailzadeh, E.

    2013-02-01

    Vibration control plays a crucial role in many structures, especially in lightweight ones. One of the most commonly practiced methods to suppress the undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. In order to preserve the weight efficiency of a structure, the best shapes and locations of the CLD patches should be determined to achieve optimum vibration suppression with minimum usage of CLD material. This paper proposes a novel topology optimization technique that can determine the best shape and location of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, a numerical technique for tracking shapes and locations concurrently. The optimal damping set is found for a structure, in its fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures are considered, and the best shapes and locations of the damping patches on them are determined simultaneously. In one example, the numerical results are compared with those obtained from experimental tests to validate the accuracy of the proposed method. This comparison reveals the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.

  6. Textural content in 3T MR: an image-based marker for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday

    2005-04-01

    In this paper, we present a study that investigates the first-order and second-order distributions of T2 images from a magnetic resonance (MR) scan for an age-matched data set of 24 Alzheimer's disease patients and 17 normal controls. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions (involving the gray level co-occurrence matrix) of T2, we show that the second-order statistics provide features with sensitivity >90% (at 80% specificity), which in turn capture the textural content in T2 data. Hence, we argue that the different texture characteristics of T2 in the hippocampus for Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
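    The second-order statistics referred to above come from a gray level co-occurrence matrix (GLCM): a normalized histogram of gray-level pairs at a fixed pixel offset, from which texture features such as contrast and energy are derived. A minimal NumPy sketch (in practice one would typically reach for a library implementation such as scikit-image's `graycomatrix`):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for an integer-valued
    image (values in [0, levels)) and a single pixel offset (dy, dx)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def texture_features(P):
    """Classic second-order statistics derived from a GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast": float(np.sum(P * (i - j) ** 2)),
        "energy": float(np.sum(P ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }

# A flat region has zero contrast; a checkerboard maximizes it.
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2
f_flat = texture_features(glcm(flat, 2))
f_checker = texture_features(glcm(checker, 2))
```

    The two toy images bracket the feature range: a uniform patch gives contrast 0 and energy 1, while a binary checkerboard gives contrast 1, which is the intuition behind using such features to separate textured from homogeneous hippocampal T2 maps.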

  7. Voyageurs National Park: Water-level regulation and effects on water quality and aquatic biology

    USGS Publications Warehouse

    Christensen, Victoria G.; Maki, Ryan P.; LeDuc, Jaime F.

    2018-01-01

    Following dam installations in the remote Rainy Lake Basin during the early 1900s, water-level fluctuations were considered extreme (1914–1949) compared to more natural conditions. In 1949, the International Joint Commission (IJC), which sets rules governing dam operation on waters shared by the United States and Canada, established the first rule curves to regulate water levels on these waterbodies. However, rule curves established prior to 2000 were determined to be detrimental to the ecosystem. Therefore, the IJC implemented an order in 2000 to change the rule curves and to restore a more natural water regime. After 2000, measured chlorophyll-a concentrations in the two most eutrophic water bodies decreased, whereas concentrations in oligotrophic lakes did not show significant water-quality differences. Fish mercury data were inconclusive, due to the variation in water levels and fish mercury concentrations, but can be used by the IJC as part of a long-term data set.

  8. Research on multi-level decision game strategy of electricity sales market considering ETS and block chain

    NASA Astrophysics Data System (ADS)

    Liu, Jinjie

    2017-08-01

    In order to fully consider the impact of future policies and technologies on the electricity sales market, improve the efficiency of electricity market operation, and realize the dual goals of power-sector reform and energy saving and emission reduction, this paper uses multi-level decision theory to propose a bi-level game model that accounts for an emissions trading scheme (ETS) and blockchain. We set maximization of electricity sales profit as the upper-level objective and establish a game-strategy model of electricity purchasing, while we set maximization of user satisfaction as the lower-level objective and build a choice-behavior model based on customer satisfaction. The strategy is applied to the simulated transactions of an electricity sales company, with a horizontal comparison against competitors in the same industry as well as a longitudinal comparison of game strategies considering different factors. The results show that the bi-level game model is reasonable and effective: it significantly improves the efficiency of electricity sales companies and user satisfaction, while promoting new-energy consumption and achieving energy-saving emission reduction.

  9. Sequence stratigraphy of the Lower Cenomanian Bahariya Formation, Bahariya Oasis, Western Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Catuneanu, O.; Khalifa, M. A.; Wanas, H. A.

    2006-08-01

    The Lower Cenomanian Bahariya Formation corresponds to a second-order depositional sequence that formed within a continental shelf setting under relatively low-rate conditions of positive accommodation (< 200 m during 3-6 My). This overall trend of base-level rise was interrupted by three episodes of base-level fall that resulted in the formation of third-order sequence boundaries. These boundaries are represented by subaerial unconformities (replaced or not by younger transgressive wave ravinement surfaces), and subdivide the Bahariya Formation into four third-order depositional sequences. The construction of the sequence stratigraphic framework of the Bahariya Formation is based on the lateral and vertical changes between shelf, subtidal, coastal and fluvial facies, as well as on the nature of contacts that separate them. The internal (third-order) sequence boundaries are associated with incised valleys, which explain (1) significant lateral changes in the thickness of incised valley fill deposits, (2) the absence of third-order highstand and even transgressive systems tracts in particular areas, and (3) the abrupt facies shifts that may occur laterally over relatively short distances. Within each sequence, the concepts of lowstand, transgressive and highstand systems tracts are used to explain the observed lateral and vertical facies variability. This case study demonstrates the usefulness of sequence stratigraphic analysis in understanding the architecture and stacking patterns of the preserved rock record, and helps to identify 13 stages in the history of base-level changes that marked the evolution of the Bahariya Oasis region during the Early Cenomanian.

  10. Implementation of a Goal-Directed Mechanical Ventilation Order Set Driven by Respiratory Therapists Improves Compliance With Best Practices for Mechanical Ventilation.

    PubMed

    Radosevich, Misty A; Wanta, Brendan T; Meyer, Todd J; Weber, Verlin W; Brown, Daniel R; Smischney, Nathan J; Diedrich, Daniel A

    2017-01-01

    Data regarding best practices for ventilator management strategies that improve outcomes in acute respiratory distress syndrome (ARDS) are readily available. However, little is known regarding processes to ensure compliance with these strategies. We developed a goal-directed mechanical ventilation order set that included physician-specified lung-protective ventilation and oxygenation goals to be implemented by respiratory therapists (RTs). We sought as a primary outcome to determine whether an RT-driven order set with predefined oxygenation and ventilation goals could be implemented and associated with improved adherence to best practice. We evaluated 1302 patients undergoing invasive mechanical ventilation (1693 separate episodes of invasive mechanical ventilation) prior to and after institution of a standardized, goal-directed mechanical ventilation order set, using a controlled before-and-after study design. Patient-specific goals for oxygenation (partial pressure of oxygen in arterial blood [PaO2], use of the ARDS Network [ARDSNet] positive end-expiratory pressure [PEEP]/fraction of inspired oxygen [FiO2] table) and ventilation (pH, partial pressure of carbon dioxide) were selected by prescribers and implemented by RTs. Compliance with the new mechanical ventilation order set was high: 88.2% compliance versus 3.8% before implementation of the order set (P < .001). Adherence to the PEEP/FiO2 table after implementation of the order set was significantly greater (86.0% after vs 82.9% before, P = .02). There was no difference in duration of mechanical ventilation, intensive care unit (ICU) length of stay, or in-hospital or ICU mortality. A standardized best-practice mechanical ventilation order set can be implemented by a multidisciplinary team and is associated with improved compliance with written orders and adherence to the ARDSNet PEEP/FiO2 table.

  11. Precision studies of observables in $$p p \\rightarrow W \\rightarrow l\

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.

    This report was prepared in the context of the LPCC "Electroweak Precision Measurements at the LHC WG" and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes, which describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the $W$ boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. Also the combination of QCD and EW corrections is discussed, together with the ambiguities that affect the final result, due to the choice of a specific combination recipe.

  12. Precision studies of observables in $$p p \\rightarrow W \\rightarrow l\

    DOE PAGES

    Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.; ...

    2017-05-03

    This report was prepared in the context of the LPCC "Electroweak Precision Measurements at the LHC WG" and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes, which describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the $W$ boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. Also the combination of QCD and EW corrections is discussed, together with the ambiguities that affect the final result, due to the choice of a specific combination recipe.

  13. Strong Similarity Measures for Ordered Sets of Documents in Information Retrieval.

    ERIC Educational Resources Information Center

    Egghe, L.; Michel, Christine

    2002-01-01

    Presents a general method to construct ordered similarity measures in information retrieval based on classical similarity measures for ordinary sets. Describes a test of some of these measures in an information retrieval system that extracted ranked document sets and discusses the practical usability of the ordered similarity measures. (Author/LRW)

  14. A spatial panel ordered-response model with application to the analysis of urban land-use development intensity patterns

    NASA Astrophysics Data System (ADS)

    Ferdous, Nazneen; Bhat, Chandra R.

    2013-01-01

    This paper proposes and estimates a spatial panel ordered-response probit model with temporal autoregressive error terms to analyze changes in urban land development intensity levels over time. Such a model structure maintains a close linkage between the land owner's decision (unobserved to the analyst) and the land development intensity level (observed by the analyst) and accommodates spatial interactions between land owners that lead to spatial spillover effects. In addition, the model structure incorporates spatial heterogeneity as well as spatial heteroscedasticity. The resulting model is estimated using a composite marginal likelihood (CML) approach that does not require any simulation machinery and that can be applied to data sets of any size. A simulation exercise indicates that the CML approach recovers the model parameters very well, even in the presence of high spatial and temporal dependence. In addition, the simulation results demonstrate that ignoring spatial dependency and spatial heterogeneity when both are actually present will lead to bias in parameter estimation. A demonstration exercise applies the proposed model to examine urban land development intensity levels using parcel-level data from Austin, Texas.
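    The latent-propensity/threshold structure underlying the paper's model can be illustrated with a standard non-spatial ordered probit fitted by maximum likelihood. This sketch is not the paper's spatial CML estimator; the simulated data, parameter values, and reparameterization below are ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
beta_true, cuts_true = 1.0, (-0.5, 0.5)

# Latent development propensity; the observed intensity level is its bin.
ystar = beta_true * x + rng.normal(size=n)
y = np.digitize(ystar, cuts_true)          # ordered levels 0, 1, 2

def negloglik(params):
    beta, c1, d = params
    c2 = c1 + np.exp(d)                    # reparameterize so c1 < c2
    z1, z2 = c1 - beta * x, c2 - beta * x
    p = np.where(y == 0, norm.cdf(z1),
        np.where(y == 1, norm.cdf(z2) - norm.cdf(z1), 1.0 - norm.cdf(z2)))
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(negloglik, x0=np.array([0.0, -0.1, 0.0]), method="Nelder-Mead")
beta_hat = res.x[0]
```

    The spatial panel version replaces this independent likelihood with one carrying spatially and temporally correlated errors, which is what makes the composite marginal likelihood approach attractive for large parcel data sets.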

  15. Recent vertical movements from precise levelling in the vicinity of the city of Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Schlatter, Andreas; Schneider, Dieter; Geiger, Alain; Kahle, Hans-Gert

    2005-09-01

    The southern end of the Upper Rhine Graben is one of the zones in Switzerland where recent crustal movements can be expected because of ongoing seismotectonic processes, as witnessed by seismicity clusters occurring in this region. Therefore, in 1973 a control network with levelling profiles across the eastern Rhine Graben fault was installed and measured in the vicinity of the city of Basel in order to measure relative vertical movements and investigate their relationship with seismic events. As a contribution to EUCOR-URGENT, the profiles were observed a third time in the years 2002 and 2003 and connected to the Swiss national levelling network. The results of these local measurements are discussed in terms of accuracy and significance. Furthermore, they are combined and interpreted together with the extensive data set of recent vertical movements in Switzerland (Jura Mountains, Central Plateau and the Alps). In order to be able to prove height changes with precise levelling, their values should amount to at least 3–4 mm (1σ). The present investigations, however, have not shown any significant vertical movements over the past 30 years.

  16. Development and Validation of an Aquatic Fine Sediment Biotic Index

    NASA Astrophysics Data System (ADS)

    Relyea, Christina D.; Minshall, G. Wayne; Danehy, Robert J.

    2012-01-01

    The Fine Sediment Biotic Index (FSBI) is a regional, stressor-specific biomonitoring index to assess fine sediment (<2 mm) impacts on macroinvertebrate communities in northwestern US streams. We examined previously collected data on benthic macroinvertebrate assemblages and substrate particle sizes for 1,139 streams spanning 16 western US Level III Ecoregions to determine macroinvertebrate sensitivity (mostly at species level) to fine sediment. We developed the FSBI for four ecoregion groupings that include nine of the ecoregions. The groupings were: the Coast (Coast Range ecoregion) (136 streams), Northern Mountains (Cascades, N. Rockies, ID Batholith ecoregions) (428 streams), Rockies (Middle Rockies, Southern Rockies ecoregions) (199 streams), and Basin and Plains (Columbia Plateau, Snake River Basin, Northern Basin and Range ecoregions) (262 streams). We excluded rare taxa and taxa identified at coarse taxonomic levels, including Chironomidae. This reduced the 685 taxa from all data sets to 206. Of these, 93 exhibited some sensitivity to fine sediment, which we classified into four categories: extremely, very, moderately, and slightly sensitive, containing 11, 22, 30, and 30 taxa, respectively. Categories were weighted, and an FSBI score was calculated by summing the sensitive taxa found in a stream. There were no orders or families that were solely sensitive or resistant to fine sediment, although among the three orders commonly regarded as indicators of high water quality, the Plecoptera (5), Trichoptera (3), and Ephemeroptera (2) contained all but one of the species or species groups classified as extremely sensitive. Index validation with an independent data set of 255 streams found FSBI scores to accurately predict both high and low levels of measured fine sediment.
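    The scoring step (a weighted sum over the sensitive taxa found at a site) can be sketched as follows. The abstract states that the four sensitivity categories were weighted but does not report the weight values, so the weights and taxon assignments below are hypothetical:

```python
# Hypothetical category weights -- the abstract says the four sensitivity
# categories were weighted but does not report the actual weight values.
WEIGHTS = {"extremely": 4, "very": 3, "moderately": 2, "slightly": 1}

def fsbi_score(taxa_present, sensitivity):
    """Sum the category weights of the sediment-sensitive taxa found at a
    site; taxa absent from the sensitivity table contribute nothing."""
    return sum(WEIGHTS[sensitivity[t]] for t in taxa_present if t in sensitivity)

# Illustrative sensitivity table (assignments are ours, not the paper's).
sensitivity = {"Drunella": "extremely", "Epeorus": "very", "Baetis": "slightly"}
score = fsbi_score(["Drunella", "Baetis", "Chironomidae"], sensitivity)
```

    Here the insensitive Chironomidae contribute nothing, mirroring the paper's exclusion of that family, while the two sensitive taxa sum to the site score.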

  17. Handwritten word preprocessing for database adaptation

    NASA Astrophysics Data System (ADS)

    Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic

    2013-01-01

    Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level, etc.). Since this is often not the case in real-world scenarios, classification performance can suffer when novel data are presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization, are considered. The advantage of such an approach is that the existing recognition system trained on controlled data can be re-used. We conduct several experiments with the Rimes 2011 word database and with a real-world database, adapting either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Data set adaptation increases accuracy by 2% to 3% in absolute terms over no adaptation.
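The pixel intensity normalization step can be sketched as a simple min-max rescaling of a grayscale image; the abstract does not specify the exact transform the authors use, so this is an illustrative stand-in:

```python
def normalize_intensity(pixels, lo=0.0, hi=255.0):
    """Min-max rescale grayscale pixel values to [lo, hi].
    Illustrative stand-in for the paper's intensity normalization,
    whose exact form is not given in the abstract."""
    flat = [v for row in pixels for v in row]
    pmin, pmax = min(flat), max(flat)
    if pmax == pmin:                      # flat image: map everything to lo
        return [[lo for _ in row] for row in pixels]
    scale = (hi - lo) / (pmax - pmin)
    return [[lo + (v - pmin) * scale for v in row] for row in pixels]

img = [[0, 1], [2, 4]]                    # tiny toy "image"
print(normalize_intensity(img))           # [[0.0, 63.75], [127.5, 255.0]]
```

Applying the same transform to both sets brings images captured under different lighting and scanning conditions onto a common intensity range before recognition.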

  18. Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.

    PubMed

    Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth

    2015-10-13

    We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations affecting the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for perturbations other than an electric field. We apply the scheme to the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been addressed by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but are very similar with respect to their basis set convergence.

  19. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.
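A toy octant quadrature in the spirit of the Gauss-Chebyshev product construction can be sketched as follows; the hardcoded 2-point Gauss-Legendre polar rule and the equally weighted azimuthal angles are illustrative choices, not the production sets compared in the paper:

```python
import math

def gc_product_octant(n_azi=2):
    """Toy product quadrature on one octant of the unit sphere.
    Polar cosines: 2-point Gauss-Legendre rule mapped to (0, 1)
    (nodes at (1 +/- 1/sqrt(3))/2, each with weight 1/2);
    azimuth: equally spaced, equally weighted angles on (0, pi/2).
    Returns (mu, phi, weight) triples whose weights sum to pi/2,
    the octant solid angle. Illustrative sketch only."""
    a = 1.0 / math.sqrt(3.0)                     # GL nodes on [-1, 1]: +/- a
    mus = [(1.0 - a) / 2.0, (1.0 + a) / 2.0]     # mapped to (0, 1)
    w_mu = 0.5                                   # GL weight 1 times Jacobian 1/2
    dphi = (math.pi / 2.0) / n_azi
    phis = [(k + 0.5) * dphi for k in range(n_azi)]
    return [(mu, phi, w_mu * dphi) for mu in mus for phi in phis]

quad = gc_product_octant()
total = sum(w for _, _, w in quad)
print(round(total, 12))   # ~ 1.570796326795, i.e. pi/2
```

By construction the rule integrates constants exactly over the octant, and the Gauss-Legendre polar nodes also integrate the first moment in mu exactly, which is the kind of moment-integration property the quadrature comparison in the paper is concerned with.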

  20. Discrete ordinate quadrature selection for reactor-based Eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  1. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set-membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise with a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained from data recorded in several flight scenarios of a highly representative aircraft benchmark.
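The consistency test at the core of such a scheme can be sketched as follows; the interval bounds, noise bound, and threshold logic are illustrative assumptions, not the paper's actual detector:

```python
def detect_fault(measured, predicted_lo, predicted_hi, noise_bound):
    """Flag a fault when the measurement falls outside the model's
    predicted interval inflated by a data-driven noise bound.
    Minimal illustration of a set-membership consistency test; the
    paper's interval predictor and noise characterisation are more
    elaborate than this sketch."""
    return not (predicted_lo - noise_bound <= measured <= predicted_hi + noise_bound)

# Toy numbers: control-surface position of 2.0 deg against a predicted
# interval of [1.0, 1.5] deg with noise bound 0.3 deg.
print(detect_fault(2.0, 1.0, 1.5, 0.3))  # True  -> inconsistent, fault flagged
print(detect_fault(1.6, 1.0, 1.5, 0.3))  # False -> within the inflated interval
```

The data-driven noise bound widens the interval just enough that nominal measurement noise never triggers an alarm, which is how the method trades off fault sensitivity against robustness.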

  2. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LAN) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a network with minimum response time and minimum connection cost. The leader makes the decision of optimally assigning users to clusters, while the follower makes the decision of connecting all the clusters to form a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower’s problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502
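The follower's subproblem, connecting all clusters with a minimum-cost spanning tree, can be sketched with Kruskal's algorithm; the cluster graph and edge costs below are invented, and the leader's user-assignment step and the genetic-algorithm wrapper are omitted:

```python
def follower_spanning_tree(n_clusters, edges):
    """Follower's response in the bi-level model: connect the clusters
    with a minimum-cost spanning tree via Kruskal's algorithm.
    edges is a list of (cost, u, v) tuples over cluster indices;
    the costs here are toy values, not problem data from the paper."""
    parent = list(range(n_clusters))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    tree, total_cost = [], 0
    for cost, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                          # keep edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v))
            total_cost += cost
    return tree, total_cost

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(follower_spanning_tree(4, edges))  # ([(1, 2), (2, 3), (0, 2)], 6)
```

In the Stackelberg setting, the leader would evaluate each candidate clustering by calling a routine like this and folding the resulting connection cost back into its own objective.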

  3. Patterns of nocturnal rehydration in root tissues of Vaccinium corymbosum L. under severe drought conditions

    PubMed Central

    Valenzuela-Estrada, Luis R.; Richards, James H.; Diaz, Andres; Eissenstat, David M.

    2009-01-01

    Although roots in dry soil layers are commonly rehydrated by internal hydraulic redistribution during the nocturnal period, patterns of tissue rehydration are poorly understood. Rates of nocturnal rehydration were examined in roots of different orders in Vaccinium corymbosum L. ‘Bluecrop’ (Northern highbush blueberry) grown in a split-pot system with one set of roots in relatively moist soil and the other set in dry soil. Vaccinium is noted for a highly branched and extremely fine root system. It was hypothesized that nocturnal root tissue rehydration would be slow, especially in the distal root orders, because of their greater hydraulic constraints (smaller vessel diameters and fewer vessels). Vaccinium root hydraulic properties delayed internal water movement. Even when water was readily available to roots in the wet soil and transpiration was minimal, it took the whole 12 h night-time period for the distal finest roots (1st to 4th order) under dry soil conditions to reach the same water potentials as fine roots (1st to 4th order) in moist soil. Even though roots in dry soil equilibrated with roots in moist soil, the equilibrium point reached before sunrise was about –1.2 MPa, indicating that tissues were not fully rehydrated. Using a single-branch root model, it was estimated that the individual roots exhibiting the lowest water potentials in dry soil were 1st-order roots (the distal finest roots of the root system). However, considered at the branch level, the root orders with the highest hydraulic resistances corresponded to the lowest orders of the permanent root system (3rd-, 4th-, and 5th-order roots), indicating possible locations of hydraulic safety control in the root system of this species. PMID:19188275

  4. Patterns of nocturnal rehydration in root tissues of Vaccinium corymbosum L. under severe drought conditions.

    PubMed

    Valenzuela-Estrada, Luis R; Richards, James H; Diaz, Andres; Eissenstat, David M

    2009-01-01

    Although roots in dry soil layers are commonly rehydrated by internal hydraulic redistribution during the nocturnal period, patterns of tissue rehydration are poorly understood. Rates of nocturnal rehydration were examined in roots of different orders in Vaccinium corymbosum L. 'Bluecrop' (Northern highbush blueberry) grown in a split-pot system with one set of roots in relatively moist soil and the other set in dry soil. Vaccinium is noted for a highly branched and extremely fine root system. It was hypothesized that nocturnal root tissue rehydration would be slow, especially in the distal root orders, because of their greater hydraulic constraints (smaller vessel diameters and fewer vessels). Vaccinium root hydraulic properties delayed internal water movement. Even when water was readily available to roots in the wet soil and transpiration was minimal, it took the whole 12 h night-time period for the distal finest roots (1st to 4th order) under dry soil conditions to reach the same water potentials as fine roots (1st to 4th order) in moist soil. Even though roots in dry soil equilibrated with roots in moist soil, the equilibrium point reached before sunrise was about -1.2 MPa, indicating that tissues were not fully rehydrated. Using a single-branch root model, it was estimated that the individual roots exhibiting the lowest water potentials in dry soil were 1st-order roots (the distal finest roots of the root system). However, considered at the branch level, the root orders with the highest hydraulic resistances corresponded to the lowest orders of the permanent root system (3rd-, 4th-, and 5th-order roots), indicating possible locations of hydraulic safety control in the root system of this species.

  5. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    PubMed

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns against preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
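The precision and recall metrics reported above can be illustrated with a toy evaluation of a predicted order set against the orders actually placed; the order names are invented, and the topic-model prediction step itself is omitted:

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted set of clinical orders against
    the orders actually used. Mirrors the set-overlap metrics in the
    abstract; the item names in the example are invented."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)                       # correctly predicted orders
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

pred = {"cbc", "chest_xray", "blood_culture", "mri_brain"}
used = {"cbc", "chest_xray", "ekg", "troponin"}
print(precision_recall(pred, used))  # (0.5, 0.5)
```

In the paper's terms, a precision of 16% means most items in a human-authored order set go unused, while a recall of 35% means most orders actually placed are not in the set; the topic models improve both.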

  6. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets

    PubMed Central

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-01-01

    Objective: Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns against preconstructed order sets. Materials and Methods: The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Results: Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). Discussion: Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Conclusion: Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. PMID:27655861

  7. Health and safety—the downward trend in lead levels

    NASA Astrophysics Data System (ADS)

    Mayer, M. G.; Wilson, D. N.

    Lead has been known and used by man for thousands of years and its toxic properties have been known for almost as long. In consequence, a wide body of legislation has built up, designed to protect individuals in both the occupational and the general environments. At the occupational level, two types of controls are widely employed, namely lead-in-air and lead-in-blood limits. Limits placed on the amount of lead in air are designed to ensure that individuals are not exposed to unsafe levels of lead via inhalation. Currently, the most common standard is 0.15 mg m⁻³, but there is a clear downward trend and levels as low as 0.05 mg m⁻³ are mandatory in some countries. Controls on the amount of lead in blood give a more direct indication of the exposure experienced by individuals. The most common level presently employed is 70 μg m⁻³ but, as knowledge of the health effects of lead improves, lower levels are being introduced and 50 μg m⁻³ is now fairly common. While women are no more sensitive to lead than men, some countries do employ lower blood-lead limits for women in the workplace in order to protect any developing foetus. This paper examines the levels currently in force in various countries and describes developments now taking place in the legislation being enacted in several parts of the world. As far as the general public is concerned, only a relatively small number of countries employ controls. Where controls do exist, however, they are set at much lower levels than for the occupational environment in order to protect the most sensitive members of the population. Several countries employ limits on lead in ambient air. Traditionally, these have been set at either 1.5 or 2.0 μg m⁻³, but several countries are currently considering sharp downward revisions to levels of the order of 0.5 μg m⁻³. A few countries offer guidance on acceptable blood levels for the general population, most commonly for children. Again, downward revisions are taking place, and where data are available there is also a very encouraging downward trend in the average blood-lead levels found amongst members of the population. These trends must be due to a combination of factors which have reduced exposures to lead. The net result is that, at least in the more industrialized countries, average blood-lead levels have fallen to extremely low levels and very few individuals can be found with blood-lead levels above currently accepted levels of concern.

  8. Quantitative controls on location and architecture of carbonate depositional sequences: Upper miocene, cabo de gata region, SE Spain

    USGS Publications Warehouse

    Franseen, E.K.; Goldstein, R.H.; Farr, M.R.

    1997-01-01

    Sequence stratigraphy, pinning-point relative sea-level curves, and magnetostratigraphy provide the quantitative data necessary to understand how rates of sea-level change and different substrate paleoslopes are dominant controls on accumulation rate, carbonate depositional sequence location, and internal architecture. Five third-order (1-10 my) and fourth-order (0.1-1.0 my) upper Miocene carbonate depositional sequences (DS1A, DS1B, DS2, DS3, TCC) formed with superimposed higher-frequency sea-level cycles in an archipelago setting in SE Spain. Overall, our study indicates that when areas of high substrate slope (> 15°) are in shallow water, independent of climate, the location and internal architecture of carbonate deposits are not directly linked to sea-level position but, instead, are controlled by location of gently sloping substrates and processes of bypass. In contrast, if carbonate sediments are generated where substrates of low slope ( 15.6 cm/ky to ≤ 2 cm/ky and overall relative sea level rose at rates of 17-21.4 cm/ky. Higher frequency sea-level rates were about 111 to more than 260 cm/ky, producing onlapping, fining- (deepening-) upward cycles. Decreasing accumulation rates resulted from decreasing surface area for shallow-water sediment production, drowning of shallow-water substrates, and complex sediment dispersal related to the archipelago setting. Typical systems tract and parasequence development should not be expected in "bypass ramp" settings; facies of onlapping strata do not track base level and are likely to be significantly different compared to onlapping strata associated with coastal onlap. Basal and upper DS2 reef megabreccias (indicating the transition from cool to warmer climatic conditions) were eroded from steep upslope positions and redeposited downslope onto areas of gentle substrate during rapid sea-level falls (> 22.7 cm/ky) of short duration. Such rapid sea-level falls and presence of steep slopes are not conducive to formation of forced regressive systems tracts composed of downstepping reef clinoforms. The DS3 reefal platform formed where shallow water coincided with gently sloping substrates created by earlier deposition. Slow progradation (0.39-1.45 km/my) is best explained by the lack of an extensive bank top, progressively falling sea level, and low productivity resulting from siliciclastic debris and excess nutrients shed from nearby volcanic islands. Although DS3 strata were deposited during a third-order relative sea-level cycle, a typical transgressive systems tract is not recognizable, indicating that the initial relative rise in sea level was too rapid (≥ 19 cm/ky). Downstepping reefs, forming a forced regressive systems tract, were deposited during the relative sea-level fall at the end of DS3, indicating that relatively slow rates of fall (10 cm/ky or less) over favorable paleoslope conditions are conducive to generation of forced regressive systems tracts consisting of downstepping reef clinoforms. The TCC sequence consists of four shallow-water sedimentary cycles that were deposited during a 400 ky to 100 ky time span. Such shallow-water cycles, typical of many platforms, form only where shallow water intersects gently sloping substrates. The relative thicknesses of cycles (< 2 m to 15 m thick), magnitudes of relative sea-level fluctuations associated with each cycle (25-30 m), high rates of relative sea-level fluctuations (minimum of 25-120 cm/ky), and the widespread distribution of similar TCC cycles in the Mediterranean and elsewhere are supportive of a glacio-eustatic

  9. Quantitative controls on location and architecture of carbonate depositional sequences: upper miocene, cabo de gata region, se Spain

    USGS Publications Warehouse

    Franseen, E.K.; Goldstein, R.H.; Farr, M.R.

    1998-01-01

    Sequence stratigraphy, pinning-point relative sea-level curves, and magnetostratigraphy provide the quantitative data necessary to understand how rates of sea-level change and different substrate paleoslopes are dominant controls on accumulation rate, carbonate depositional sequence location, and internal architecture. Five third-order (1-10 my) and fourth-order (0.1-1.0 my) upper Miocene carbonate depositional sequences (DS1A, DS1B, DS2, DS3, TCC) formed with superimposed higher-frequency sea-level cycles in an archipelago setting in SE Spain. Overall, our study indicates that when areas of high substrate slope (> 15°) are in shallow water, independent of climate, the location and internal architecture of carbonate deposits are not directly linked to sea-level position but, instead, are controlled by location of gently sloping substrates and processes of bypass. In contrast, if carbonate sediments are generated where substrates of low slope ( 15.6 cm/ky to ≤ 2 cm/ky and overall relative sea level rose at rates of 17-21.4 cm/ky. Higher frequency sea-level rates were about 111 to more than 260 cm/ky, producing onlapping, fining- (deepening-) upward cycles. Decreasing accumulation rates resulted from decreasing surface area for shallow-water sediment production, drowning of shallow-water substrates, and complex sediment dispersal related to the archipelago setting. Typical systems tract and parasequence development should not be expected in "bypass ramp" settings; facies of onlapping strata do not track base level and are likely to be significantly different compared to onlapping strata associated with coastal onlap. Basal and upper DS2 reef megabreccias (indicating the transition from cool to warmer climatic conditions) were eroded from steep upslope positions and redeposited downslope onto areas of gentle substrate during rapid sea-level falls (> 22.7 cm/ky) of short duration. Such rapid sea-level falls and presence of steep slopes are not conducive to formation of forced regressive systems tracts composed of downstepping reef clinoforms. The DS3 reefal platform formed where shallow water coincided with gently sloping substrates created by earlier deposition. Slow progradation (0.39-1.45 km/my) is best explained by the lack of an extensive bank top, progressively falling sea level, and low productivity resulting from siliciclastic debris and excess nutrients shed from nearby volcanic islands. Although DS3 strata were deposited during a third-order relative sea-level cycle, a typical transgressive systems tract is not recognizable, indicating that the initial relative rise in sea level was too rapid (≥ 19 cm/ky). Downstepping reefs, forming a forced regressive systems tract, were deposited during the relative sea-level fall at the end of DS3, indicating that relatively slow rates of fall (10 cm/ky or less) over favorable paleoslope conditions are conducive to generation of forced regressive systems tracts consisting of downstepping reef clinoforms. The TCC sequence consists of four shallow-water sedimentary cycles that were deposited during a 400 ky to 100 ky time span. Such shallow-water cycles, typical of many platforms, form only where shallow water intersects gently sloping substrates. The relative thicknesses of cycles (< 2 m to 15 m thick), magnitudes of relative sea-level fluctuations associated with each cycle (25-30 m), high rates of relative sea-level fluctuations (minimum of 25-120 cm/ky), and the widespread distribution of similar TCC cycles in the Mediterranean and elsewhere are supportive of a glacio-eustatic

  10. Experience-based co-design in an adult psychological therapies service.

    PubMed

    Cooper, Kate; Gillmore, Chris; Hogg, Lorna

    2016-01-01

    Experience-based co-design (EBCD) is a methodology for service improvement and development which puts service-user voices at the heart of improving health services. The aim of this paper was to implement the EBCD methodology in a mental health setting and to investigate the challenges which arise during this process. To achieve this, a modified version of the EBCD methodology was undertaken, which involved listening to the experiences of the people who work in and use the mental health setting and sharing these experiences with the people who could effect change within the service, through collaborative work between service-users, staff and managers. EBCD was implemented within the mental health setting and was well received by service-users, staff and stakeholders. A number of modifications were necessary in this setting, for example making high levels of support available to participants. It was concluded that EBCD is a suitable methodology for service improvement in mental health settings.

  11. Assessing the impact of different satellite retrieval methods on forecast available potential energy

    NASA Technical Reports Server (NTRS)

    Whittaker, Linda M.; Horn, Lyle H.

    1990-01-01

    The effects of the inclusion of satellite temperature retrieval data, and of different satellite retrieval methods, on forecasts made with the NASA Goddard Laboratory for Atmospheres (GLA) fourth-order model were investigated using, as the parameter, the available potential energy (APE) in its isentropic form. Calculations of the APE were used to study the differences between the forecast sets, both globally and in the Northern Hemisphere, during the 72-h forecast period. The analysis data sets used for the forecasts included one containing the NESDIS TIROS-N retrievals, one containing the GLA retrievals using the physical inversion method, and a third, which did not contain satellite data, used as a control; two data sets, with and without satellite data, were used for verification. For all three data sets, the Northern Hemisphere values for the total APE showed an increase throughout the forecast period, mostly due to an increase in the zonal component, in contrast to the verification sets, which showed a steady level of total APE.

  12. Application of fuzzy set theory for integral assessment of agricultural products quality

    NASA Astrophysics Data System (ADS)

    Derkanosova, N. M.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.

    2018-05-01

    A methodology for the integrated assessment of the quality and safety of agricultural products was developed and tested using indicators of wheat grain in relation to the consumer properties of bakery products. Determining the quality level of raw ingredients allows agricultural raw materials to be directed to food production with regard to the technology used and the type of product, and hence supports rational use of the resource potential of the agricultural sector. The mathematical tool of the proposed method is fuzzy set theory. A fuzzy classifier to evaluate the properties of the grain is formed: a set of six indicators normalized by the national standard is determined; their values are ordered and represented by linguistic variables with trapezoidal membership functions; and the rules for calculating the membership functions are presented. Specific criteria values for individual indicators in shaping the quality of the finished products are considered. For one sample of wheat grain, the values of the membership functions of the linguistic variable "level" for all indicators and of the linguistic variable "level of quality" were calculated. The studied grain sample was found to attain the second (average) quality level. Accordingly, it can be recommended for the production of bakery products with higher requirements for structural-mechanical properties: bakery and puff pastry products, hearth bread, and flour confectionery of the hard-dough cookie and cracker group.
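A trapezoidal membership function of the kind used by such a classifier can be sketched as follows; the breakpoints in the protein-content example are assumed for illustration, not the authors' values:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], rising linearly
    on [a, b], equal to 1 on [b, c], falling linearly on [c, d].
    Breakpoints for real grain-quality indicators are the authors';
    the numbers in the example below are toy values."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Membership of a protein content of 11.5% in an assumed "average" class
# whose support is 10-14% with a plateau on 11-13%:
print(trapezoid(11.5, 10.0, 11.0, 13.0, 14.0))  # 1.0 (on the plateau)
print(trapezoid(10.5, 10.0, 11.0, 13.0, 14.0))  # 0.5 (on the rising edge)
```

A fuzzy classifier evaluates one such function per linguistic term ("low", "average", "high", ...) for each indicator and then aggregates the memberships by its rule base into an overall quality level.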

  13. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulations of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method, together with the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. The FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.
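The analytical profile underlying such a model can be illustrated with a common hyperbolic-tangent form of the order parameter as a function of signed distance to the interface; the exact profile and thickness parameter used in the FESF model may differ, so this is a sketch of the general idea:

```python
import math

def order_parameter(d, eps=0.05):
    """Analytical phase-field-style profile as a function of the signed
    distance d to the interface, here the conservative-level-set form
    psi = 0.5 * (1 + tanh(d / (2 * eps))), with eps the interface
    thickness. The FESF paper's free-energy profile may use different
    constants; this is illustrative only."""
    return 0.5 * (1.0 + math.tanh(d / (2.0 * eps)))

print(order_parameter(0.0))            # 0.5 exactly on the interface
print(order_parameter(0.5) > 0.999)    # True: saturated inside one phase
```

Because the profile depends only on the signed distance, which the level set method already provides, quantities derived from it (such as the surface tension force) can be evaluated analytically instead of by numerically differentiating a smeared delta function.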

  14. Effect of Sex Differences on Brain Mitochondrial Function and Its Suppression by Ovariectomy and in Aged Mice.

    PubMed

    Gaignard, Pauline; Savouroux, Stéphane; Liere, Philippe; Pianos, Antoine; Thérond, Patrice; Schumacher, Michael; Slama, Abdelhamid; Guennoun, Rachida

    2015-08-01

    Sex steroids regulate brain function in both normal and pathological states. Mitochondria are an essential target of steroids, as demonstrated by the experimental administration of 17β-estradiol or progesterone (PROG) to ovariectomized female rodents, but the influence of endogenous sex steroids remains understudied. To address this issue, mitochondrial oxidative stress, the oxidative phosphorylation system, and brain steroid levels were analyzed under 3 different experimental sets of endocrine conditions. The first set was designed to study steroid-mediated sex differences in young male and female mice, intact and after gonadectomy. The second set concerned young female mice at 3 time points of the estrous cycle in order to analyze the influence of transient variations in steroid levels. The third set involved the evaluation of the effects of a permanent decrease in gonadal steroids in aged male and female mice. Our results show that young adult females have lower oxidative stress and a higher reduced nicotinamide adenine dinucleotide (NADH)-linked respiration rate, which is related to a higher pyruvate dehydrogenase complex activity as compared with young adult males. This sex difference did not depend on phases of the estrous cycle, was suppressed by ovariectomy but not by orchidectomy, and no longer existed in aged mice. Concomitant analysis of brain steroids showed that pregnenolone and PROG brain levels were higher in females during the reproductive period than in males and decreased with aging in females. These findings suggest that the major male/female differences in brain pregnenolone and PROG levels may contribute to the sex differences observed in brain mitochondrial function.

  15. Sequence architecture of the Palaeocene Transitional Facies and Response to Tectonic Evolution and Sea Level Change in the Lishui Depression, East China Sea

    NASA Astrophysics Data System (ADS)

    Li, D.

    2016-12-01

    The Lishui Depression (LD) is a polycyclic rift basin located in the southwestern part of the East China Sea Shelf Basin. From bottom to top, the Palaeocene strata comprise the Yueguifeng (YGF), Lingfeng (LF) and Mingyuefeng (MYF) Formations. The YGF clastic deposits were produced in a continental lacustrine environment. The LF and MYF form a set of coal-bearing strata deposited during marine transgressive-regressive cycles. The Palaeocene depositional cycle is divided into two second-order sequences, SQII1 (YGF, 66.5-60 Ma) and SQII2 (LF and MYF, 60-53 Ma), interpreted as the initial rifting sequence and the strong rifting sequence, respectively, controlled by episodic tectonic subsidence (the Yandang and Oujiang movements). SQII1 includes only one third-order sequence, SQIII1, which consists of a lake transgressive systems tract (LTST) and a lake regressive systems tract (LRST). SQII2 can be subdivided into four third-order sequences: SQIII2 (Lower LF, 60-57 Ma), SQIII3 (Upper LF, 57-55 Ma), SQIII4 (Lower MYF, 55-54.5 Ma) and SQIII5 (Upper MYF, 54.5-53 Ma). During SQIII2, the LD underwent extensive transgression, and the sustained high relative sea level allowed only a transgressive systems tract (TST) and a highstand systems tract (HST) to develop. During SQIII3, the relative sea level declined; two sets of incised valleys are recognized on seismic reflection profiles, with no lowstand fan developed, so SQIII3 is considered to consist of a basin margin systems tract (BMST, similar to a shelf margin systems tract), a TST and an HST. In early SQIII4 (55 Ma), a rapid global decline in relative sea level began, and the lowstand systems tract of the LD developed a complete system of prograding wedge, incised valley and basin-floor fan, while the TST developed retrograding marine sediments and the HST was characterized by typical foreset parasequences. During SQIII5, the global sea level rose continuously and the sedimentary cycle of the LD comprised only a TST and an HST.

  16. Integration of physical abuse clinical decision support into the electronic health record at a Tertiary Care Children's Hospital.

    PubMed

    Suresh, Srinivasan; Saladino, Richard A; Fromkin, Janet; Heineman, Emily; McGinn, Tom; Richichi, Rudolph; Berger, Rachel P

    2018-04-12

    To evaluate the effect of a previously validated electronic health record-based child abuse trigger system on physician compliance with clinical guidelines for the evaluation of physical abuse, a randomized controlled trial (RCT) with comparison to a preintervention group was performed. Providers of RCT-experimental subjects received alerts with a direct link to a physical abuse-specific order set. Providers of RCT-control subjects had no alerts but could manually search for the order set. Providers of preintervention subjects had neither alerts nor access to the order set. Compliance with clinical guidelines was calculated. Ninety-nine preintervention subjects and 130 RCT subjects (73 RCT-experimental and 57 RCT-control) met criteria to undergo a physical abuse evaluation. Full compliance with clinical guidelines was 84% preintervention, 86% in the RCT-control group, and 89% in the RCT-experimental group. The physical abuse order set was used 43 times during the 7-month RCT. When the abuse order set was used, full compliance was 100%. The proportion of cases with only partial compliance decreased from 10% to 3% once the order set became available (P = .04). Male gender, having >10 years of experience, and completion of a pediatric emergency medicine fellowship were associated with increased compliance. A child abuse clinical decision support system comprising a trigger system, alerts, and a physical abuse order set was quickly accepted into clinical practice. Use of the physical abuse order set always resulted in full compliance with clinical guidelines. Given the high baseline compliance at our site, evaluation of this alert system in hospitals with lower baseline compliance rates will be more valuable in assessing its efficacy in improving adherence to clinical guidelines for the evaluation of suspected child abuse.

  17. Using intranet-based order sets to standardize clinical care and prepare for computerized physician order entry.

    PubMed

    Heffner, John E; Brower, Kathleen; Ellis, Rosemary; Brown, Shirley

    2004-07-01

    The high cost of computerized physician order entry (CPOE) and physician resistance to standardized care have delayed implementation. An intranet-based order set system can provide some of CPOE's benefits and offer opportunities to acculturate physicians toward standardized care. INTRANET CLINICIAN ORDER FORMS (COF): The COF system at the Medical University of South Carolina (MUSC) allows caregivers to enter and print orders through the intranet at points of care and to access decision support resources. Work on COF began in March 2000 with transfer of 25 MUSC paper-based order set forms to an intranet site. Physician groups developed additional order sets, which number more than 200. Web traffic increased progressively during a 24-month period, peaking at more than 6,400 hits per month to COF. Decision support tools improved compliance with Centers for Medicare & Medicaid Services core indicators. Clinicians demonstrated a willingness to develop and use order sets and decision support tools posted on the COF site. COF provides a low-cost method for preparing caregivers and institutions to adopt CPOE and standardization of care. The educational resources, relevant links to external resources, and communication alerts will all link to CPOE, thereby providing a head start in CPOE implementation.

  18. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes that can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources, such as remote sensing, earth observation, and in-situ measurements, resulting in high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product carries a unique error estimate resulting from the method of measurement and the pre-processing techniques applied. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would use the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine the classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets having a known confidence interval and quality rating.
We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operational forecast systems and modeling scenarios. The derived error bands and confidence intervals can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
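    For redundant measurements of the same quantity, the simplest fusion rule with explicit uncertainty propagation is inverse-variance weighting. The sketch below uses hypothetical numbers (a remote-sensing and an in-situ reading of one variable); it is one standard technique of the kind surveyed here, not the specific method evaluated in this record:

```python
import numpy as np

def fuse(values, variances):
    """Inverse-variance weighted fusion of redundant measurements.

    Returns the fused estimate and its variance; the fused variance is
    never larger than the smallest input variance, which is one way a
    confidence interval for the fused product can be reported.
    """
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * values) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# A remote-sensing and an in-situ chlorophyll reading (hypothetical numbers),
# with standard deviations 0.5 and 0.2 respectively.
est, var = fuse([3.2, 2.8], [0.5 ** 2, 0.2 ** 2])
```

    The fused estimate is pulled toward the more precise (in-situ) reading, and the reduced variance quantifies the gain from combining the two sources.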

  19. Increasing energy efficiency level of building production based on applying modern mechanization facilities

    NASA Astrophysics Data System (ADS)

    Prokhorov, Sergey

    2017-10-01

    The building industry is currently going through hard times. The cost of operating machines and mechanisms in construction and installation work accounts for a substantial share of total building construction expenses. There is a need for a highly efficient method that not only increases production but also reduces the direct costs of operating the machine fleet and improves its energy efficiency. To achieve this goal, we plan to use modern methods of work organization, high-tech and energy-saving machine tools and technologies, and optimal mechanization sets. The optimization criteria are operating prime cost and set efficiency. In solving the actual task, we concluded that analysis of mechanized works and an energy audit, set against production output, prime costs, and energy-resource costs, make it possible to organize comprehensive machine-fleet supply, improve the ecological level, and increase the quality of construction and installation work.

  20. Hybrid method for moving interface problems with application to the Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, T.Y.; Li, Zhilin; Osher, S.

    In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients or/and singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. This new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosities. 40 refs., 10 figs., 3 tabs.
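    The tube-restricted update at the heart of a fast (narrow-band) level set formulation can be sketched in a few lines. This is a generic first-order illustration, not the scheme of the paper, which is second order; the grid, velocity, and tube width are assumed values:

```python
import numpy as np

def narrow_band_advect(phi, u, v, dt, dx, width):
    """One first-order upwind advection step for a level-set function phi,
    updated only inside the tube |phi| < width around the zero contour."""
    band = np.abs(phi) < width
    # One-sided differences; np.roll wraps at the boundary, which is
    # harmless here because the band stays away from the domain edges.
    dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in x
    dym = (phi - np.roll(phi, 1, axis=0)) / dx
    dyp = (np.roll(phi, -1, axis=0) - phi) / dx
    phix = np.where(u > 0, dxm, dxp)             # upwind selection
    phiy = np.where(v > 0, dym, dyp)
    out = phi.copy()
    out[band] = phi[band] - dt * (u[band] * phix[band] + v[band] * phiy[band])
    return out

# Translate a circle of radius 0.2 to the right at unit speed.
n = 128
dx = 1.0 / n
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.2
u = np.ones_like(phi)
v = np.zeros_like(phi)
for _ in range(20):  # interface moves 20 * 0.5 * dx = 10 grid cells
    phi = narrow_band_advect(phi, u, v, dt=0.5 * dx, dx=dx, width=10 * dx)
```

    Points outside the tube are never touched, which is where the speedup comes from; a production implementation would also periodically reinitialize phi and rebuild the tube as the interface moves.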

  1. Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    1997-01-01

    Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
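    For flavor, here is what a monotone first-order Godunov upwind discretization of the Eikonal equation |grad u| = 1 looks like, solved by fast sweeping. This sketch is on a Cartesian grid for simplicity, whereas the paper treats triangulated domains; the grid size and seed location are assumptions:

```python
import numpy as np

def fast_sweep_eikonal(seed_mask, h, n_sweeps=8):
    """First-order monotone (Godunov upwind) solver for |grad u| = 1 with
    u = 0 on the seed set, using Gauss-Seidel sweeps in alternating
    directions."""
    ny, nx = seed_mask.shape
    u = np.where(seed_mask, 0.0, 1e10)

    def update(a, b):
        # Godunov update from the smaller upwind neighbors a (x) and b (y).
        if abs(a - b) >= h:
            return min(a, b) + h
        return 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))

    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if seed_mask[i, j]:
                        continue
                    a = min(u[i, j - 1] if j > 0 else 1e10,
                            u[i, j + 1] if j < nx - 1 else 1e10)
                    b = min(u[i - 1, j] if i > 0 else 1e10,
                            u[i + 1, j] if i < ny - 1 else 1e10)
                    u[i, j] = min(u[i, j], update(a, b))
    return u

# Distance from a single seed point on a 21 x 21 grid with unit spacing.
seeds = np.zeros((21, 21), dtype=bool)
seeds[10, 10] = True
u = fast_sweep_eikonal(seeds, h=1.0)
```

    Monotonicity of the update is what guarantees convergence to the viscosity solution as the grid is refined; the triangulated-domain schemes of the paper are built to preserve exactly this property.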

  2. Multireference configuration interaction calculations of the first six ionization potentials of the uranium atom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bross, David H.; Parmar, Payal; Peterson, Kirk A., E-mail: kipeters@wsu.edu

    The first 6 ionization potentials (IPs) of the uranium atom have been calculated using multireference configuration interaction (MRCI+Q) with extrapolations to the complete basis set limit using new all-electron correlation consistent basis sets. The latter was carried out with the third-order Douglas-Kroll-Hess Hamiltonian. Correlation down through the 5s5p5d electrons has been taken into account, as well as contributions to the IPs due to the Lamb shift. Spin-orbit coupling contributions calculated at the 4-component Kramers restricted configuration interaction level, as well as the Gaunt term computed at the Dirac-Hartree-Fock level, were added to the best scalar relativistic results. The final ionization potentials are expected to be accurate to at least 5 kcal/mol (0.2 eV) and thus more reliable than the current experimental values of IP3 through IP6.

  3. Ab initio structural and spectroscopic study of HPS^x and HSP^x (x = 0, +1, −1) in the gas phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaghlane, Saida Ben; Cotton, C. Eric; Francisco, Joseph S., E-mail: francisc@purdue.edu, E-mail: hochlaf@univ-mlv.fr

    2013-11-07

    Accurate ab initio computations of structural and spectroscopic parameters for the HPS/HSP molecules and the corresponding cations and anions have been performed. For the electronic structure computations, standard and explicitly correlated coupled cluster techniques in conjunction with large basis sets have been adopted. In particular, we present equilibrium geometries, rotational constants, harmonic vibrational frequencies, adiabatic ionization energies, electron affinities, and, for the neutral species, singlet-triplet relative energies. Moreover, the full-dimensional potential energy surfaces (PESs) for the HPS^x and HSP^x (x = −1, 0, +1) systems have been generated at the standard coupled cluster level with a basis set of augmented quintuple-zeta quality. By applying perturbation theory to the calculated PESs, an extended set of spectroscopic constants, including τ, first-order centrifugal distortion, and anharmonic vibrational constants, has been obtained. In addition, the potentials have been used in a variational approach to deduce the whole pattern of vibrational levels up to 4000 cm^−1 above the minima of the corresponding PESs.

  4. Everything under Control? The Effects of Age, Gender, and Education on Trajectories of Perceived Control in a Nationally Representative German Sample

    ERIC Educational Resources Information Center

    Specht, Jule; Egloff, Boris; Schmukle, Stefan C.

    2013-01-01

    Perceived control is an important variable for various demands involved in successful aging. However, perceived control is not set in stone but rather changes throughout the life course. The aim of this study was to identify cross-sectional age differences and longitudinal mean-level changes as well as rank-order changes in perceived control with…

  5. Getting It Right: Reference Guides for Registering Students with Non-English Names, 2nd Edition. REL 2016-158 v2

    ERIC Educational Resources Information Center

    Motamedi, Jason Greenberg; Jaffery, Zafreen; Hagen, Allyson; Yoon, Sun Young

    2017-01-01

    Getting a student's name right is the first step in welcoming him or her to school. Staff members who work with student-level data also know the importance of accurately and consistently recording a student's name in order to track student data over time, match files across data sets, and make meaning from the data. For students whose home…

  6. Getting It Right: Reference Guides for Registering Students with Non-English Names. REL 2016-158

    ERIC Educational Resources Information Center

    Motamedi, Jason Greenberg; Jaffery, Zafreen; Hagen, Allyson

    2016-01-01

    Getting a student's name right is the first step in welcoming him or her to school. Staff members who work with student-level data also know the importance of accurately and consistently recording a student's name in order to track student data over time, to match files across data sets, and to make meaning from the data. For students whose home…

  7. Defining the Pathophysiological Role of Tau in Experimental TBI

    DTIC Science & Technology

    2015-10-01

    ...structure vital for memory formation. Damage to the EC or perforant pathway projection in animals causes a rapid forgetting syndrome reminiscent of AD... animal experiments, and improves the discriminative power of the experimental design going forward... Study questions, and numbers of mice... project, sera will be analyzed for a set of biomarkers for neuronal degeneration, in order to identify a marker whose blood levels are a surrogate measure

  8. Automatic firearm class identification from cartridge cases

    NASA Astrophysics Data System (ADS)

    Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.

    2011-03-01

    We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant features from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of the FPI and FPAO are extracted, a shape-based level set method is used to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme, and the promising results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
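    The first pipeline step, circular Hough detection of the primer surface area, can be illustrated with a minimal voting implementation. This is a generic fixed-radius Hough transform on a synthetic edge image, not the authors' code; the image size, radius, and center are assumed for the example:

```python
import numpy as np

def hough_circle_center(edge_mask, radius):
    """Accumulate votes for circle centres at a fixed radius: each edge
    pixel votes for all points at distance `radius` from it, and the
    accumulator maximum gives the most likely centre."""
    acc = np.zeros_like(edge_mask, dtype=int)
    ny, nx = edge_mask.shape
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    ys, xs = np.nonzero(edge_mask)
    for yy, xx in zip(ys, xs):
        cy = np.round(yy - radius * np.sin(thetas)).astype(int)
        cx = np.round(xx - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < ny) & (cx >= 0) & (cx < nx)
        acc[cy[ok], cx[ok]] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic "primer" edge: a circle of radius 20 centred at (50, 60).
img = np.zeros((100, 120), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360)
img[np.round(50 + 20 * np.sin(t)).astype(int),
    np.round(60 + 20 * np.cos(t)).astype(int)] = True
cy, cx = hough_circle_center(img, radius=20)
```

    In practice the radius would also be swept over a range, and an edge detector would produce the mask from the interferometer image before voting.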

  9. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods

    NASA Astrophysics Data System (ADS)

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-01

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  10. Correlates of Social Functioning in Autism Spectrum Disorder: The Role of Social Cognition.

    PubMed

    Bishop-Fitzpatrick, Lauren; Mazefsky, Carla A; Eack, Shaun M; Minshew, Nancy J

    2017-03-01

    Individuals with autism spectrum disorder (ASD) experience marked challenges with social function by definition, but few modifiable predictors of social functioning in ASD have been identified in extant research. This study hypothesized that deficits in social cognition and motor function may help to explain poor social functioning in individuals with ASD. Cross-sectional data from 108 individuals with ASD and without intellectual disability ages 9 through 27.5 were used to assess the relationship between social cognition, motor function, and social functioning. Results of hierarchical multiple regression analyses revealed that greater social cognition, but not motor function, was significantly associated with better social functioning when controlling for sex, age, and intelligence quotient. Post-hoc analyses revealed that better performance on second-order false belief tasks was associated with higher levels of socially adaptive behavior and lower levels of social problems. Our findings support the development and testing of interventions that target social cognition in order to improve social functioning in individuals with ASD. Interventions that teach generalizable skills to help people with ASD better understand social situations and develop competency in advanced perspective taking have the potential to create more durable change because their effects can be applied to a wide and varied set of situations and not simply a prescribed set of rehearsed situations.

  11. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods.

    PubMed

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-14

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  12. Pharmacy costs associated with nonformulary drug requests.

    PubMed

    Sweet, B V; Stevenson, J G

    2001-09-15

    Pharmacy costs associated with handling nonformulary drug requests were studied. Data for all nonformulary drug orders received at a university hospital between August 1 and October 31, 1999, were evaluated to determine their outcome and the cost differential between the nonformulary drug and formulary alternative. Two sets of data were used to analyze medication costs: data from nonformulary medication request forms, which allowed the cost of nonformulary drugs and their formulary alternatives to be calculated, and data from the pharmacy computer system, which enabled actual nonformulary drug use to be captured. Labor costs associated with processing these requests were determined through time analysis, which included the potential for orders to be received at different times of the day and with different levels of technician and pharmacist support. Economic analysis revealed that the greatest cost saving occurred when converting nonformulary injectable products to formulary alternatives. Interventions were least costly during normal business hours, when all the satellite pharmacies were open and fully staffed. Pharmacists' interventions in oral product orders resulted in a net increase in expenditures. Incremental pharmacy costs associated with processing nonformulary medication requests in an inpatient setting are greater than the drug acquisition cost saving for most agents, particularly oral medications.

  13. Molecular structure, vibrational spectra, NLO and MEP analysis of bis[2-hydroxy-кO-N-(2-pyridyl)-1-naphthaldiminato-кN]zinc(II)

    NASA Astrophysics Data System (ADS)

    Tanak, Hasan; Toy, Mehmet

    2013-11-01

    The molecular geometry and vibrational frequencies of bis[2-hydroxy-кO-N-(2-pyridyl)-1-naphthaldiminato-кN]zinc(II) in the ground state have been calculated using the Hartree-Fock (HF) and density functional (B3LYP) methods with the 6-311G(d,p) basis set. The results of the optimized molecular structure are presented and compared with the experimental X-ray diffraction data. The energetic and atomic-charge behavior of the title compound in solvent media has been examined by applying the Onsager and the polarizable continuum models. To investigate the second-order nonlinear optical properties of the title compound, the electric dipole moment (μ), linear polarizability (α) and first-order hyperpolarizability (β) were computed using the density functional B3LYP and CAM-B3LYP methods with the 6-31+G(d) basis set. According to our calculations, the title compound exhibits a nonzero β value, revealing second-order NLO behavior. In addition, DFT calculations of the molecular electrostatic potential (MEP), frontier molecular orbitals, and thermodynamic properties of the title compound were performed at the B3LYP/6-311G(d,p) level of theory.

  14. A Comprehensive Set of Impact Data for Common Aerospace Metals

    DOE PAGES

    Brake, Matthew; Reu, Phil L.; Aragon, Dannelle S.

    2017-05-16

    Our results for the two sets of impact experiments are reported here. In order to assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2 meter long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement of the two different methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" as the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements are detailed for the same set of materials. Furthermore, the compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and uN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides a unique insight into the transitionary region.

  15. A Comprehensive Set of Impact Data for Common Aerospace Metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brake, Matthew; Reu, Phil L.; Aragon, Dannelle S.

    Our results for the two sets of impact experiments are reported here. In order to assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2 meter long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement of the two different methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" as the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements are detailed for the same set of materials. Furthermore, the compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and uN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides a unique insight into the transitionary region.

  16. Thermal Noise Limit in Frequency Stabilization of Lasers with Rigid Cavities

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Kemery, Amy; Camp, Jordan

    2004-01-01

    We evaluated thermal noise (Brownian motion) in a rigid reference cavity used for frequency stabilization of lasers, based on the mechanical loss of cavity materials and numerical analysis of the mirror-spacer mechanics with the direct application of the fluctuation-dissipation theorem. This noise sets a fundamental limit for the frequency stability achieved with a rigid frequency-reference cavity, of order 1 Hz/√Hz (0.01 Hz/√Hz) at 10 mHz (100 Hz) at room temperature. This level coincides with the best stabilization results reported to date.

  17. Rule-based navigation control design for autonomous flight

    NASA Astrophysics Data System (ADS)

    Contreras, Hugo; Bassi, Danilo

    2008-04-01

    This article describes a navigation control system design based on a set of rules for following a desired trajectory. The full control of the aircraft considered here comprises a low-level stability control loop, based on classic PID controllers, and higher-level navigation whose main job is to exercise lateral (course) and altitude control in order to follow a desired trajectory. The rules and PID gains were adjusted systematically according to the results of flight simulation. In spite of its simplicity, the rule-based navigation control proved to be robust, even under large perturbations such as crosswinds.
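    The two-layer scheme described, an inner PID stability loop commanded by higher-level navigation rules, can be sketched as follows (gains and the rule threshold are illustrative, not the article's tuned values):

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def navigation_rule(cross_track_error_m, course_deg):
    """Toy higher-level rule: bias the commanded course back toward the
    desired track when the lateral error grows too large."""
    if abs(cross_track_error_m) > 50.0:
        return course_deg - 10.0 * (1 if cross_track_error_m > 0 else -1)
    return course_deg
```

    The inner loop would then track the course commanded by `navigation_rule`, mirroring the article's split between stability control and navigation.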

  18. Electromagnetic interference in electrical systems of motor vehicles

    NASA Astrophysics Data System (ADS)

    Dziubiński, M.; Drozd, A.; Adamiec, M.; Siemionek, E.

    2016-09-01

    The electronic ignition system affects the electronic equipment of the vehicle through electric and magnetic fields. The measurement of radio-frequency electromagnetic interference originating from the ignition system and affecting the audiovisual test bench was carried out at varying ignition-system speeds. The paper presents measurements of radio-frequency electromagnetic interference in automobiles. In order to determine the level of electromagnetic interference, the audiovisual test bench was equipped with a set of meters for power consumption and for assessment of the level of electromagnetic interference. Measurements of the electromagnetic interference level within the audiovisual system were performed on an experimental test bench consisting of the ignition system, the starting system, and the charging system with an alternator and regulator.

  19. A Unified Model for Predicting the Open Hole Tensile and Compressive Strengths of Composite Laminates for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Paul; Pineda, Evan J.; Heinrich, Christian; Waas, Anthony M.

    2013-01-01

    The open hole tensile and compressive strengths are important design parameters in qualifying fiber-reinforced laminates for a wide variety of structural applications in the aerospace industry. In this paper, we present a unified model that can be used to predict both strengths (tensile and compressive) using the same set of coupon-level material property data. As a prelude to the unified computational model that follows, simplified approaches of increasing fidelity, referred to as "zeroth order", "first order", etc., are first presented. The results and methods presented are practical and validated against experimental data. They serve as an introductory step in establishing a virtual building-block, bottom-up approach to designing future airframe structures with composite materials. The results are useful for aerospace design engineers, particularly those who deal with airframe design.

  20. Non-unique key B-Tree implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ries, D.R.

    1980-12-23

    B-Trees are an indexed access method that allows fast retrieval and order-preserving updates to a FRAMIS relation based on a designated set of keys in the relation. A B-Tree access method is being implemented to provide indexed and sequential (in index order) access to FRAMIS relations. The implementation modifies the basic B-Tree structure to correctly allow multiple key values while still maintaining the balanced page-fill property of B-Trees. The data structures of the B-Tree are presented first, including the FRAMIS solution to the duplicate key value problem. Then the access-level routines and utilities are presented. These routines include the original B-Tree creation; searching the B-Tree; and inserting, deleting, and replacing tuples on the B-Tree. In conclusion, the uses of the B-Tree access structures at the semantic level to enhance FRAMIS performance are discussed. 10 figures.
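    A common solution to the duplicate-key problem the abstract mentions (whether or not it is FRAMIS's exact scheme) is to make every index entry unique by pairing the key with the tuple identifier, so ordered scans and range searches still work. A sketch using a sorted list as a stand-in for the B-Tree pages:

```python
import bisect

class NonUniqueIndex:
    """Ordered index allowing duplicate keys: each entry is the unique
    pair (key, tuple_id), so insertion order within a key is preserved
    and equal-key lookups become a contiguous range scan."""
    def __init__(self):
        self._entries = []  # sorted list of (key, tuple_id)

    def insert(self, key, tuple_id):
        bisect.insort(self._entries, (key, tuple_id))

    def search(self, key):
        # all entries whose key matches, in tuple_id order
        lo = bisect.bisect_left(self._entries, (key, float("-inf")))
        hi = bisect.bisect_right(self._entries, (key, float("inf")))
        return [tid for _, tid in self._entries[lo:hi]]
```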

  1. [Animal Health Law-- the National Animal Health Act and the European Animal Health Law].

    PubMed

    Bätza, Hans-Joachim; Mettenleiter, Thomas

    2013-01-01

    The Animal Health Act, which replaces the currently applicable Animal Disease Act, creates a regulatory framework not only to control animal diseases that have already broken out, as has been the case so far, but also to prevent possible outbreaks in advance by means of preventive measures. The instruments to this effect are described here. At the European level, too, the idea of prevention is set to play a greater role in the future: the draft EU legal instrument on animal health, which has to date been discussed only at Commission level, will also contribute to simplification and easier implementation by the persons subject to the law by harmonising the currently fragmented Community law. It remains to be seen when the deliberations in the Council and the European Parliament will begin.

  2. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, ensuring that the rod could be repositioned in exactly the same place in the future. A benchmark in computing terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper discusses the history of simulation benchmarks used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical and biological scenarios. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We also describe problems that can be solved by a benchmark.
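    The Monte Carlo usage described, repeated randomized trials to characterize an algorithm's behaviour across a range of inputs, can be sketched as follows (all names and noise levels are illustrative):

```python
import random
import statistics

def benchmark(estimator, truth, n_runs=1000, noise=0.1, seed=42):
    """Monte Carlo benchmark sketch: feed an algorithm noisy inputs and
    aggregate its error statistics across many runs."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_runs):
        measurement = truth + rng.gauss(0.0, noise)
        errors.append(abs(estimator(measurement) - truth))
    return statistics.mean(errors), statistics.stdev(errors)

# Benchmark a trivial pass-through estimator against a known truth:
mean_err, std_err = benchmark(lambda m: m, truth=5.0)
```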

  3. Convergent evolution of the genomes of marine mammals

    USGS Publications Warehouse

    Foote, Andrew D.; Liu, Yue; Thomas, Gregg W.C.; Vinař, Tomáš; Alföldi, Jessica; Deng, Jixin; Dugan, Shannon; van Elk, Cornelis E.; Hunter, Margaret; Joshi, Vandita; Khan, Ziad; Kovar, Christie; Lee, Sandra L.; Lindblad-Toh, Kerstin; Mancia, Annalaura; Nielsen, Rasmus; Qin, Xiang; Qu, Jiaxin; Raney, Brian J.; Vijay, Nagarjun; Wolf, Jochen B. W.; Hahn, Matthew W.; Muzny, Donna M.; Worley, Kim C.; Gilbert, M. Thomas P.; Gibbs, Richard A.

    2015-01-01

    Marine mammals from different mammalian orders share several phenotypic traits adapted to the aquatic environment and therefore represent a classic example of convergent evolution. To investigate convergent evolution at the genomic level, we sequenced and performed de novo assembly of the genomes of three species of marine mammals (the killer whale, walrus and manatee) from three mammalian orders that share independently evolved phenotypic adaptations to a marine existence. Our comparative genomic analyses found that convergent amino acid substitutions were widespread throughout the genome and that a subset of these substitutions were in genes evolving under positive selection and putatively associated with a marine phenotype. However, we found higher levels of convergent amino acid substitutions in a control set of terrestrial sister taxa to the marine mammals. Our results suggest that, whereas convergent molecular evolution is relatively common, adaptive molecular convergence linked to phenotypic convergence is comparatively rare.

  4. Convergent evolution of the genomes of marine mammals

    PubMed Central

    Foote, Andrew D.; Liu, Yue; Thomas, Gregg W.C.; Vinař, Tomáš; Alföldi, Jessica; Deng, Jixin; Dugan, Shannon; van Elk, Cornelis E.; Hunter, Margaret E.; Joshi, Vandita; Khan, Ziad; Kovar, Christie; Lee, Sandra L.; Lindblad-Toh, Kerstin; Mancia, Annalaura; Nielsen, Rasmus; Qin, Xiang; Qu, Jiaxin; Raney, Brian J.; Vijay, Nagarjun; Wolf, Jochen B. W.; Hahn, Matthew W.; Muzny, Donna M.; Worley, Kim C.; Gilbert, M. Thomas P.; Gibbs, Richard A.

    2015-01-01

    Marine mammals from different mammalian orders share several phenotypic traits adapted to the aquatic environment and are therefore a classic example of convergent evolution. To investigate convergent evolution at the genomic level, we sequenced and de novo assembled the genomes of three species of marine mammals (the killer whale, walrus and manatee) from three mammalian orders that share independently evolved phenotypic adaptations to a marine existence. Our comparative genomic analyses found that convergent amino acid substitutions were widespread throughout the genome, and that a subset were in genes evolving under positive selection and putatively associated with a marine phenotype. However, we found higher levels of convergent amino acid substitutions in a control set of terrestrial sister taxa to the marine mammals. Our results suggest that while convergent molecular evolution is relatively common, adaptive molecular convergence linked to phenotypic convergence is comparatively rare. PMID:25621460

  5. Binary tree eigen solver in finite element analysis

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.

    1993-01-01

    This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of a number of associative operations on an arbitrary set of N elements takes on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed with the transputers to address parallel programming constructs and to provide the communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the lowest-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
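    The recursive-doubling idea can be illustrated with a serial simulation: each "step" combines elements a power-of-two stride apart, so an associative reduction over N elements finishes in ceil(log2 N) steps rather than N-1 (a sketch of the principle, not the OCCAM implementation):

```python
def recursive_doubling_sum(values):
    """Simulate recursive doubling for an associative reduction (here, sum).
    At each step every element combines with its neighbour `stride` away;
    the stride doubles each step, so element 0 holds the total after
    ceil(log2 N) steps."""
    data = list(values)
    n = len(data)
    steps = 0
    stride = 1
    while stride < n:
        # all combinations in one step happen "in parallel" on the old data
        data = [data[i] + data[i + stride] if i + stride < n else data[i]
                for i in range(n)]
        stride *= 2
        steps += 1
    return data[0], steps

total, steps = recursive_doubling_sum(range(8))  # 0+1+...+7 in 3 steps
```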

  6. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    NASA Astrophysics Data System (ADS)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this model the inventory level of both items is depleted by demand and deterioration; when an item is out of stock, its demand is partially fulfilled by the other item, and all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and demand and deterioration are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.
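    As a toy illustration of jointly choosing two order quantities to minimize a total cost, the sketch below trades a shared fixed ordering cost against holding costs. It omits the paper's deterioration and substitution terms, and every parameter value is invented:

```python
def total_cost(q1, q2, demand=(100.0, 80.0), holding=(0.5, 0.6), order_cost=50.0):
    """Illustrative joint-replenishment cost per period: one shared fixed
    cost per ordering cycle plus average holding costs. NOT the paper's
    full model (no deterioration, substitution, or lost-sales terms)."""
    cycles = max(demand[0] / q1, demand[1] / q2)   # joint orders per period
    holding_cost = holding[0] * q1 / 2.0 + holding[1] * q2 / 2.0
    return order_cost * cycles + holding_cost

# Crude grid search for the cost-minimizing joint order quantities:
best = min((total_cost(q1, q2), q1, q2)
           for q1 in range(10, 201, 5)
           for q2 in range(10, 201, 5))
```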

  7. An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms

    PubMed Central

    Castillo-Cara, Manuel; Lovón-Melgarejo, Jesús; Bravo-Rocca, Gusseppe; Orozco-Barbosa, Luis; García-Varea, Ismael

    2017-01-01

    Nowadays, there is great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) Bluetooth localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon. PMID:28590413
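    A minimal baseline for RSSI-fingerprint localization is nearest-neighbour matching of a live RSSI vector against recorded fingerprints; the paper's feature selection and classifiers are more sophisticated, and the values below are invented:

```python
import math

def locate(rssi, fingerprints):
    """Return the fingerprint location whose recorded RSSI vector (dBm per
    beacon) is closest in Euclidean distance to the live reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprints, key=lambda loc: dist(rssi, fingerprints[loc]))

# Hypothetical three-beacon fingerprints for two locations:
fp = {"room_a": [-60, -75, -80], "room_b": [-80, -62, -70]}
where = locate([-63, -74, -79], fp)
```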

  8. An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms.

    PubMed

    Castillo-Cara, Manuel; Lovón-Melgarejo, Jesús; Bravo-Rocca, Gusseppe; Orozco-Barbosa, Luis; García-Varea, Ismael

    2017-06-07

    Nowadays, there is great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) Bluetooth localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon.

  9. A novel method for calculating the dynamic capillary force and correcting the pressure error in micro-tube experiment.

    PubMed

    Wang, Shuoliang; Liu, Pengcheng; Zhao, Hui; Zhang, Yuan

    2017-11-29

    Micro-tube experiments have been implemented to understand the mechanisms governing microscopic fluid percolation and are extensively used in both micro-electromechanical engineering and petroleum engineering. The pressure difference measured in such experiments is not equal to the actual pressure difference across the microtube. Taking into account the additional pressure losses between the outlet of the microtube and the outlet of the entire setup, we propose a new method for predicting the dynamic capillary pressure using the Level-set method. We first demonstrate that it is a reliable method for describing microscopic flow by comparing the micro-model flow-test results against the results predicted using the Level-set method. In the proposed approach, the Level-set method is applied to predict the pressure distribution along the microtube when the fluids flow along it at a given flow rate; the microtube used in the calculation has the same size as the one used in the experiment. From the simulation results, the pressure difference across a curved interface (i.e., the dynamic capillary pressure) can be obtained directly. We also show that the dynamic capillary force should be properly evaluated in the micro-tube experiment in order to obtain the actual pressure difference across the microtube.
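    For reference, the static capillary pressure across a meniscus in a tube follows the Young-Laplace relation, against which a simulated dynamic value can be compared (the fluid properties and tube radius below are illustrative, not from the paper):

```python
import math

def capillary_pressure(sigma, theta_deg, radius):
    """Young-Laplace static capillary pressure (Pa) across a meniscus in a
    circular tube: 2*sigma*cos(theta)/r. The paper's *dynamic* capillary
    pressure comes from level-set simulation; this is only the static
    reference value."""
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / radius

# Water/air: sigma = 0.072 N/m, contact angle 0 deg, 10-micron tube radius:
pc = capillary_pressure(0.072, 0.0, 10e-6)
```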

  10. Empowerment theory: clarifying the nature of higher-order multidimensional constructs.

    PubMed

    Peterson, N Andrew

    2014-03-01

    Development of empowerment theory has focused on defining the construct at different levels of analysis, presenting new frameworks or dimensions, and explaining relationships between empowerment-related processes and outcomes. Less studied, and less conceptually developed, is the nature of empowerment as a higher-order multidimensional construct. One critical issue is whether empowerment is conceptualized as a superordinate construct (i.e., empowerment is manifested by its dimensions), an aggregate construct (i.e., empowerment is formed by its dimensions), or rather as a set of distinct constructs. To date, researchers have presented superordinate models without careful consideration of the relationships between dimensions and the higher-order construct of empowerment. Empirical studies can yield very different results, however, depending on the conceptualization of a construct. This paper represents the first attempt to address this issue systematically in empowerment theory. It is argued that superordinate models of empowerment are misspecified and research that tests alternative models at different levels of analysis is needed to advance theory, research, and practice in this area. Recommendations for future work are discussed.

  11. Flipping the classroom to teach Millennial residents medical leadership: a proof of concept.

    PubMed

    Lucardie, Alicia T; Berkenbosch, Lizanne; van den Berg, Jochem; Busari, Jamiu O

    2017-01-01

    The ongoing changes in health care delivery have resulted in the reform of educational content and methods of training in postgraduate medical leadership education. Health care law and medical errors are domains in medical leadership where medical residents desire training. However, the potential value of the flipped classroom as a pedagogical tool for leadership training within postgraduate medical education has not been fully explored. Therefore, we designed a learning module for this purpose and made use of the flipped classroom model to deliver the training. The flipped classroom model reverses the order of learning: basic concepts are learned individually outside of class so that more time is spent applying knowledge to discussions and practical scenarios during class. Advantages include high levels of interaction, optimal utilization of student and expert time and direct application to the practice setting. Disadvantages include the need for high levels of self-motivation and time constraints within the clinical setting. Educational needs and expectations vary across generations and call for novel teaching modalities. Hence, the choice of instructional methods should be driven not only by their intrinsic values but also by their alignment with the learners' preferences. The flipped classroom model is an educational modality that resonates with Millennial students. It helps them to progress quickly beyond the mere understanding of theory to higher order cognitive skills such as evaluation and application of knowledge in practice. Hence, the successful application of this model would allow the translation of highly theoretical topics to the practice setting within postgraduate medical education.

  12. Flipping the classroom to teach Millennial residents medical leadership: a proof of concept

    PubMed Central

    Lucardie, Alicia T; Berkenbosch, Lizanne; van den Berg, Jochem; Busari, Jamiu O

    2017-01-01

    Introduction The ongoing changes in health care delivery have resulted in the reform of educational content and methods of training in postgraduate medical leadership education. Health care law and medical errors are domains in medical leadership where medical residents desire training. However, the potential value of the flipped classroom as a pedagogical tool for leadership training within postgraduate medical education has not been fully explored. Therefore, we designed a learning module for this purpose and made use of the flipped classroom model to deliver the training. Evidence The flipped classroom model reverses the order of learning: basic concepts are learned individually outside of class so that more time is spent applying knowledge to discussions and practical scenarios during class. Advantages include high levels of interaction, optimal utilization of student and expert time and direct application to the practice setting. Disadvantages include the need for high levels of self-motivation and time constraints within the clinical setting. Discussion Educational needs and expectations vary within various generations and call for novel teaching modalities. Hence, the choice of instructional methods should be driven not only by their intrinsic values but also by their alignment with the learners’ preference. The flipped classroom model is an educational modality that resonates with Millennial students. It helps them to progress quickly beyond the mere understanding of theory to higher order cognitive skills such as evaluation and application of knowledge in practice. Hence, the successful application of this model would allow the translation of highly theoretical topics to the practice setting within postgraduate medical education. PMID:28144170

  13. Can eustatic charts go beyond first-order? Insights from the Permo-Triassic

    NASA Astrophysics Data System (ADS)

    Guillaume, Benjamin; Monteux, Julien; Pochat, Stéphane; Husson, Laurent; Choblet, Gaël

    2016-04-01

    To first order, eustatic charts are in accord with our understanding of the geodynamic processes that control sea level. By extrapolation, second-order features are also thought to obey the same rules, and are thus often taken for granted. But this assumption may be jeopardized by a close examination of a characteristic example. The Permo-Triassic period is well suited for this, because both its purported eustatic signal and its geodynamic and climatic setting are well defined and contrasted. Both the fragmentation of the Pangean supercontinent and the late Paleozoic melting of ice sheets argue for a rise of the eustatic sea level (ESL), whereas eustatic charts show the opposite. Here we review the possible mechanisms that could explain the apparent sea-level low, and find that some of them do lower the ESL while others instead only modify the reference frame, either uplifting continents or tilting the margins where the control points are located. In the first category, we find that (i) dynamic deflections of the Earth's surface above subduction zones, and their location with respect to continents, primarily control absolute sea level while the Pangean supercontinent forms and breaks up, and (ii) the endorheism that ubiquitously developed at the time of Pangean aggregation also contributed to lowering the ESL by storing water outside the oceanic reservoir. In the second category, we show that (i) the thermal uplift associated with supercontinental insulation and (ii) the dynamic uplift associated with the emplacement of a superplume both give rates of change in the range of long-term changes of ESL. We also show that (iii) the dynamic tilting of continental margins not only produces apparent sea-level changes but also modifies the absolute sea level, which may end up in the paradoxical situation wherein fingerprints of an ESL drop are found in the geological record while the ESL is actually rising. We conclude that the establishment of second- to third-order absolute sea-level changes may remain a chimera for a while.

  14. Trainees May Add Value to Patient Care by Decreasing Addendum Utilization in Radiology Reports.

    PubMed

    Balthazar, Patricia; Konstantopoulos, Christina; Wick, Carson A; DeSimone, Ariadne K; Tridandapani, Srini; Simoneaux, Stephen; Applegate, Kimberly E

    2017-11-01

    The purpose of this study was to evaluate the impact of trainee involvement and other factors on addendum rates in radiology reports. This retrospective study was performed in a tertiary care pediatric hospital. From the institutional radiology data repository, we extracted all radiology reports from January 1 to June 30, 2016, as well as trainee (resident or fellow) involvement, imaging modality, patient setting (emergency, inpatient, or outpatient), order status (routine vs immediate), time of interpretation (regular work hours vs off-hours), radiologist's years of experience, and sex. We grouped imaging modalities as advanced (CT, MRI, and PET) or nonadvanced (any modality that was not CT, MRI, or PET) and radiologist experience level as ≤ 20 years or > 20 years. Our outcome measure was the rate of addenda in radiology reports. Statistical analysis was performed using multivariate logistic regression. From 129,033 reports finalized during the study period, 418 (0.3%) had addenda. Reports generated without trainees were 12 times more likely than reports with trainee involvement to have addenda (odds ratio [OR] = 12.2, p < 0.001). Advanced imaging studies were more likely than nonadvanced studies to be associated with addendum use (OR = 4.7, p < 0.001). Reports generated for patients in emergency or outpatient settings had a slightly higher likelihood of addendum use than those in an inpatient setting (OR = 1.5, p = 0.04; and OR = 1.3, p = 0.04, respectively). Routine orders had a slightly higher likelihood of addendum use compared with immediate orders (OR = 1.3, p = 0.01). We found no difference in addendum use by radiologist's sex, radiologist's years of experience, emergency versus outpatient setting, or time of interpretation. Trainees may add value to patient care by decreasing addendum rates in radiology reports.
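    The odds ratios reported come from multivariate logistic regression; the unadjusted version of such a ratio for a single factor is just a cross-product of 2x2 contingency-table counts (the counts below are invented, not the study's data):

```python
def odds_ratio(exposed_events, exposed_nonevents, control_events, control_nonevents):
    """Unadjusted odds ratio from a 2x2 contingency table:
    (a/b) / (c/d), i.e. the odds of the event in the exposed group
    divided by the odds in the control group."""
    return (exposed_events / exposed_nonevents) / (control_events / control_nonevents)

# Hypothetical counts: reports with addenda vs without, by trainee involvement
or_example = odds_ratio(90, 910, 10, 990)
```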

  15. The generation of arbitrary order, non-classical, Gauss-type quadrature for transport applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spence, Peter J., E-mail: peter.spence@awe.co.uk

    A method is presented, based upon the Stieltjes method (1884), for the determination of non-classical Gauss-type quadrature rules and the associated sets of abscissae and weights. The method is then used to generate a number of quadrature sets, to arbitrary order, which are primarily aimed at deterministic transport calculations. The quadrature rules and sets detailed include arbitrary-order reproductions of those presented by Abu-Shumays in [4,8] (known as the QR sets, but labelled QRA here), in addition to a number of new rules and associated sets; these are generated in a similar way, and we label them the QRS quadrature sets. The method presented here shifts the inherent difficulty (encountered by Abu-Shumays) associated with solving the non-linear moment equations, particular to the required quadrature rule, to one of the determination of non-classical weight functions and the subsequent calculation of various associated inner products. Once a quadrature rule has been written in a standard form, with an associated weight function having been identified, the calculation of the required inner products is achieved using specific variable transformations, in addition to the use of rapid, highly accurate quadrature suited to this purpose. The associated non-classical Gauss quadrature sets can then be determined, and this can be done to any order very rapidly. In this paper, instead of listing weights and abscissae for the different quadrature sets detailed (of which there are a number), the MATLAB code written to generate them is included as Appendix D. The accuracy and efficacy (in a transport setting) of the quadrature sets presented is not tested in this paper (although the accuracy of the QRA quadrature sets has been studied in [12,13]), but comparisons to tabulated results listed in [8] are made. When comparisons are made with one of the azimuthal QRA sets detailed in [8], the inherent difficulty in the method of generation used there becomes apparent, with the highest-order tabulated sets showing unexpected anomalies. Although not in an actual transport setting, the accuracy of the sets presented here is assessed to some extent by using them to approximate integrals (over an octant of the unit sphere) of various high-order spherical harmonics. When this is done, errors in the tabulated QRA sets present themselves at the highest tabulated orders, whilst combinations of the new QRS quadrature sets offer some improvements in accuracy over the original QRA sets. Finally, in order to offer a quick, visual understanding of the various quadrature sets presented, when combined to give product sets for the purposes of integrating functions confined to the surface of a sphere, three-dimensional representations of points located on an octant of the unit sphere (as in [8,12]) are shown.
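    Once the three-term recurrence coefficients of the orthogonal polynomials for a weight function are known (by Stieltjes' procedure or otherwise), the standard route to the quadrature rule is the Golub-Welsch step: eigen-decompose the symmetric Jacobi matrix, with eigenvalues giving abscissae and scaled first eigenvector components giving weights. A sketch verified on the classical Legendre weight rather than the paper's non-classical ones:

```python
import numpy as np

def gauss_from_recurrence(alpha, beta, mu0):
    """Golub-Welsch: build the symmetric Jacobi matrix from the recurrence
    coefficients alpha_n, beta_n; its eigenvalues are the abscissae and
    mu0 * (first eigenvector component)^2 are the weights."""
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta[1:]), 1)
         + np.diag(np.sqrt(beta[1:]), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2
    return nodes, weights

# Legendre weight w(x)=1 on [-1,1]: alpha_n = 0, beta_n = n^2/(4n^2-1), mu0 = 2
n = 5
k = np.arange(1, n)
alpha = np.zeros(n)
beta = np.concatenate(([0.0], k**2 / (4.0 * k**2 - 1.0)))
nodes, weights = gauss_from_recurrence(alpha, beta, 2.0)
```

    With n = 5 this rule integrates polynomials up to degree 9 exactly, e.g. the integral of x² over [-1, 1].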

  16. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

Exploratory preclinical and clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets with a small number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in the PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with the clinical theophylline data available in the NONMEM distribution media. NONMEM software, assisted by Pirana, PsN, and Xpose, was used to estimate the population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effect and random effect) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. 
Similar performance of the estimation methods was observed with the theophylline data set. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
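
The two error metrics named above have standard definitions that can be sketched in a few lines. The formulas below are the commonly used ones (percent error relative to the true value); the paper may normalize slightly differently, and the function names are illustrative:

```python
import numpy as np

def relative_estimation_error(estimates, true_value):
    """REE (%) for each replicate estimate: 100 * (estimate - true) / true."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 * (estimates - true_value) / true_value

def rrmse(estimates, true_value):
    """Relative root mean squared error (%) across replicate estimates."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 * np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value
```

rRMSE aggregates bias and imprecision into one number per parameter, while REE keeps the per-replicate signed errors, which is why the abstract uses both.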

  17. Computerized N-acetylcysteine physician order entry by template protocol for acetaminophen toxicity.

    PubMed

    Thompson, Trevonne M; Lu, Jenny J; Blackwood, Louisa; Leikin, Jerrold B

    2011-01-01

    Some medication dosing protocols are logistically complex for traditional physician ordering. The use of computerized physician order entry (CPOE) with templates, or order sets, may be useful to reduce medication administration errors. This study evaluated the rate of medication administration errors using CPOE order sets for N-acetylcysteine (NAC) use in treating acetaminophen poisoning. An 18-month retrospective review of computerized inpatient pharmacy records for NAC use was performed. All patients who received NAC for the treatment of acetaminophen poisoning were included. Each record was analyzed to determine the form of NAC given and whether an administration error occurred. In the 82 cases of acetaminophen poisoning in which NAC was given, no medication administration errors were identified. Oral NAC was given in 31 (38%) cases; intravenous NAC was given in 51 (62%) cases. In this retrospective analysis of N-acetylcysteine administration using computerized physician order entry and order sets, no medication administration errors occurred. CPOE is an effective tool in safely executing complicated protocols in an inpatient setting.

  18. Good practice statements on safe laboratory testing: A mixed methods study by the LINNEAUS collaboration on patient safety in primary care.

    PubMed

    Bowie, Paul; Forrest, Eleanor; Price, Julie; Verstappen, Wim; Cunningham, David; Halley, Lyn; Grant, Suzanne; Kelly, Moya; Mckay, John

    2015-09-01

The systems-based management of laboratory test ordering and results handling is a known source of error in primary care settings worldwide. The consequences are wide-ranging for patients (e.g. avoidable harm or a poor care experience), general practitioners (e.g. delayed clinical decision making and potential medico-legal implications) and the primary care organization (e.g. increased allocation of resources to problem-solving and dealing with complaints). Guidance is required to assist care teams to minimize the associated risks and improve patient safety. The aim was to identify, develop and build expert consensus on 'good practice' guidance statements to inform the implementation of safe systems for ordering laboratory tests and managing results in European primary care settings. Mixed-methods studies were undertaken in the UK and Ireland, and the findings were triangulated to develop 'good practice' statements. Expert consensus was then sought on the findings at the wider European level via a Delphi group meeting during 2013. Consensus was reached on 10 safety domains, and 77 related 'good practice' statements (≥80% agreement) were developed, judged to be essential to creating safety and minimizing risks in laboratory test ordering and subsequent results-handling systems in international primary care. Guidance was developed for improving patient safety in this important area of primary care practice. We need to consider how this guidance can be made accessible to frontline care teams, utilized by clinical educators and improvement advisers, implemented by decision makers and evaluated to determine acceptability, feasibility and impacts on patient safety.

  19. Assessing Strategies to Manage Work and Life Balance of Athletic Trainers Working in the National Collegiate Athletic Association Division I Setting

    PubMed Central

    Mazerolle, Stephanie M.; Pitney, William A.; Casa, Douglas J.; Pagnotta, Kelly D.

    2011-01-01

    Abstract Context: Certified athletic trainers (ATs) working at the National Collegiate Athletic Association Division I level experience challenges balancing their professional and personal lives. However, an understanding of the strategies ATs use to promote a balance between their professional and personal lives is lacking. Objective: To identify the strategies ATs employed in the Division I setting use to establish a balance between their professional and personal lives. Design: Qualitative investigation using inductive content analysis. Setting: Athletic trainers employed at Division I schools from 5 National Athletic Trainers' Association districts. Patients or Other Participants: A total of 28 (15 women, 13 men) ATs aged 35 ± 9 years volunteered for the study. Data Collection and Analysis: Asynchronous electronic interviews with follow-up phone interviews. Data were analyzed using inductive content analysis. Peer review, member checking, and data-source triangulation were conducted to establish trustworthiness. Results: Three higher-order themes emerged from the analysis. The initial theme, antecedents of work–family conflict, focused on the demands of the profession, flexibility of work schedules, and staffing patterns as contributing to work–life conflict for this group of ATs. The other 2 emergent higher-order themes, professional factors and personal factors, describe the components of a balanced lifestyle. The second-order theme of constructing the professional factors included both organizational policies and individual strategies, whereas the second-order theme of personal factors was separation of work and life and a supportive personal network. Conclusions: Long work hours, lack of control over work schedules, and unbalanced athlete-to-AT ratios can facilitate conflicts. However, as demonstrated by our results, several organizational and personal strategies can be helpful in creating a balanced lifestyle. PMID:21391805

  20. Quantification of Coffea arabica and Coffea canephora var. robusta concentration in blends by means of synchronous fluorescence and UV-Vis spectroscopies.

    PubMed

    Dankowska, A; Domagała, A; Kowalewski, W

    2017-09-01

The potential of fluorescence and UV-Vis spectroscopies, as well as low- and mid-level data fusion of the two, for quantifying the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis was used to reduce data multidimensionality. To calculate the level of undeclared addition, principal component-multiple linear regression (PCA-MLR) models were used, with a lowest root mean square error of calibration (RMSEC) of 3.6% and root mean square error of cross-validation (RMSECV) of 7.9%. LDA was applied to the fluorescence intensities and UV spectra of Coffea arabica and Coffea canephora samples, and their mixtures, in order to examine classification ability. The best performance of PCA-LDA analysis was observed for data fusion of UV and fluorescence intensity measurements at a wavelength interval of 60 nm. LDA showed that data fusion can achieve over 96% correct classification (sensitivity) in the test set and 100% correct classification in the training set, with low-level data fusion. The corresponding results for the individual spectroscopies ranged from 90% (UV-Vis spectroscopy) to 77% (synchronous fluorescence) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other in the quantification of roasted Coffea arabica and Coffea canephora var. robusta concentrations in blends. Copyright © 2017 Elsevier B.V. All rights reserved.
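
The PCA-MLR idea named above (principal component scores fed into a multiple linear regression) can be sketched as follows; this is a generic sketch assuming an SVD-based PCA and ordinary least squares, with illustrative function names not taken from the paper:

```python
import numpy as np

def pca_mlr_fit(X, y, n_components=3):
    """Fit PCA followed by multiple linear regression on the scores:
    a generic sketch of the PCA-MLR approach."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal component loadings from the SVD of the centred data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # loadings (features x components)
    scores = Xc @ V                  # projected data
    A = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return mu, V, coef

def pca_mlr_predict(model, X):
    """Predict concentrations for new spectra using a fitted PCA-MLR model."""
    mu, V, coef = model
    scores = (X - mu) @ V
    return coef[0] + scores @ coef[1:]
```

Choosing `n_components` below the number of wavelengths is what regularizes the regression against collinear spectral channels.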

  1. Reconstructing the temporal ordering of biological samples using microarray data.

    PubMed

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
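
The core idea of an MST-based ordering can be sketched as below: build the Euclidean minimum spanning tree of the samples and read an ordering off its diameter path. This is a simplification of what the abstract describes (hop-count diameter, no PQ-tree representation of uncertainty), and all names are illustrative:

```python
import math
from collections import defaultdict, deque

def mst_ordering(samples):
    """Order multi-dimensional samples along the diameter path of a
    Euclidean minimum spanning tree (simplified sketch)."""
    n = len(samples)
    dist = lambda a, b: math.dist(samples[a], samples[b])
    # Prim's algorithm on the complete graph of pairwise distances.
    in_tree = {0}
    adj = defaultdict(list)
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    # Two BFS passes find the (hop-count) diameter path of the tree.
    def farthest(start):
        parent = {start: None}
        queue, last = deque([start]), start
        while queue:
            node = queue.popleft()
            last = node
            for nb in adj[node]:
                if nb not in parent:
                    parent[nb] = node
                    queue.append(nb)
        return last, parent
    end_a, _ = farthest(0)
    end_b, parent = farthest(end_a)
    path, node = [], end_b
    while node is not None:
        path.append(node)
        node = parent[node]
    return path  # sample indices in estimated temporal order
```

For noise-free samples along a one-dimensional trajectory, the diameter path recovers the true ordering up to direction, which an MST cannot determine on its own.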

  2. Time-lag of the earthquake energy release between three seismic regions

    NASA Astrophysics Data System (ADS)

    Tsapanos, Theodoros M.; Liritzis, Ioannis

    1992-06-01

Three complete data sets of strong earthquakes (M ≥ 5.5), which occurred in the seismic regions of Chile, Mexico and Kamchatka during the period 1899–1985, have been used to test for the existence of a time-lag in the seismic energy release between these regions. These data sets were cross-correlated in order to determine whether any pair of the sets is correlated. For this purpose, statistical tests such as the t-test, Fisher's transformation and the probability distribution have been applied to determine the significance of the obtained correlation coefficients. The results show that the time-lag between Chile and Kamchatka is −2 years, which means that Kamchatka precedes Chile by 2 years, with a correlation coefficient significant at the 99.80% level; there is a weak correlation between Kamchatka and Mexico, and no correlation for Mexico–Chile.
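
A lagged cross-correlation of the kind described can be sketched as follows; the sign convention and the omission of the significance tests are simplifications of mine, not details from the paper:

```python
import numpy as np

def best_lag(x, y, max_lag=5):
    """Return (lag, r): the lag maximizing the Pearson correlation between
    two equally spaced series. A positive lag means y precedes x by
    `lag` samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    best = (0, -2.0)  # correlation is always > -2, so any r replaces this
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[lag:], y[:len(y) - lag]
        elif lag < 0:
            a, b = x[:len(x) + lag], y[-lag:]
        else:
            a, b = x, y
        if a.std() == 0 or b.std() == 0:
            continue  # correlation undefined for a constant segment
        r = np.corrcoef(a, b)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best
```

In the study's terms, a best lag of +2 with the Kamchatka series as `y` and the Chile series as `x` would correspond to Kamchatka preceding Chile by 2 years.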

  3. Tools for the functional interpretation of metabolomic experiments.

    PubMed

    Chagoyen, Monica; Pazos, Florencio

    2013-11-01

The so-called 'omics' approaches used in modern biology aim at massively characterizing the molecular repertoires of living systems at different levels. Metabolomics is one of the latest additions to the 'omics' family, and it deals with the characterization of the set of metabolites in a given biological system. As metabolomic techniques become more massive and allow characterizing larger sets of metabolites, automatic methods for analyzing these sets in order to obtain meaningful biological information are required. Only recently did the first tools specifically designed for this task in metabolomics appear. They are based on approaches previously used in transcriptomics and other 'omics', such as annotation enrichment analysis. These, together with generic tools for metabolic analysis and visualization not specifically designed for metabolomics, will surely be in the toolbox of researchers performing metabolomic experiments in the near future.

  4. CCSDT calculations of molecular equilibrium geometries

    NASA Astrophysics Data System (ADS)

    Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve

    1997-08-01

CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core-correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average only by 0.11 pm from experiment.

  5. Cheap-GSHPs, an European project aiming cost-reducing innovations for shallow geothermal installations. - Geological data reinterpretation

    NASA Astrophysics Data System (ADS)

    Bertermann, David; Müller, Johannes; Galgaro, Antonio; Cultrera, Matteo; Bernardi, Adriana; Di Sipio, Eloisa

    2016-04-01

The success and widespread diffusion of new sustainable technologies are always strictly related to their affordability. Nowadays, energy price fluctuations and the economic crisis are jeopardizing the development and diffusion of renewable technologies and sources. With the aim of both reducing the overall costs of shallow geothermal systems and improving their installation safety, a European project has recently been launched under the Horizon 2020 EU Framework Programme for Research and Innovation. The acronym of this project is Cheap-GSHPs, meaning "cheap and efficient application of reliable ground source heat exchangers and pumps"; the Cheap-GSHPs project involves 17 partners from 9 European countries: Belgium, France, Germany, Greece, Ireland, Italy, Romania, Spain and Switzerland. In order to achieve the planned targets, a holistic approach is adopted in which all elements involved in shallow geothermal activities are integrated. To reduce specific drilling costs and to establish a solid planning basis, the INSPIRE-conformal ESDAC data set PAR-MAT-DOM ("parent material dominant") was analysed and reinterpreted with regard to opportunities for cost reduction. Different ESDAC classification codes were analysed lithologically and sedimentologically in order to identify the most suitable drilling technique for different formations. Together with drilling companies, this geological data set was translated into a geotechnical map that allows drilling companies to use the most efficient drilling technique for a given type of subsurface. The map covers all of Europe at a scale of 1:100,000. This leads to cost reductions for final consumers. In addition, different heat conductivity classes will be defined on the basis of the reinterpreted PAR-MAT-DOM data set, providing underground information; these values will be obtained from samples collected across Europe and from literature data. 
The samples will be measured with several different laboratory instruments in variable states of saturation, and the literature data will then be compared to the resulting laboratory measurements. Altogether, this new data set will support the development of more efficient cost planning tools: it provides detailed underground information at a Europe-wide level, so the dimensioning of a given geothermal installation can be optimised. To improve drilling cost estimation, a new parameter called "drillability" is suggested here; drillability is based on the drilling time for different types of rocks/sediments. The result is cost reductions that make geothermal energy solutions more attractive for end consumers, especially at the residential level.

  6. RAMONA: a Web application for gene set analysis on multilevel omics data.

    PubMed

    Sass, Steffen; Buettner, Florian; Mueller, Nikola S; Theis, Fabian J

    2015-01-01

Decreasing costs of modern high-throughput experiments allow for the simultaneous analysis of altered gene activity on various molecular levels. However, these multi-omics approaches lead to a large amount of data, which is hard to interpret for a non-bioinformatician. Here, we present the remotely accessible multilevel ontology analysis (RAMONA). It offers an easy-to-use interface for the simultaneous gene set analysis of combined omics datasets and is an extension of the previously introduced MONA approach. RAMONA is based on a Bayesian enrichment method for the inference of overrepresented biological processes among given gene sets. Overrepresentation is quantified by interpretable term probabilities. It is able to handle data from various molecular levels, while in parallel coping with redundancies arising from gene set overlaps and related multiple testing problems. The comprehensive output of RAMONA is easy to interpret and thus allows for functional insight into the affected biological processes. With RAMONA, we provide an efficient implementation of the Bayesian inference problem such that ontologies consisting of thousands of terms can be processed on the order of seconds. RAMONA is implemented as an ASP.NET Web application and publicly available at http://icb.helmholtz-muenchen.de/ramona. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and then unknown parameters of the system were determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible and then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.

  8. Analysis of Six Reviews on the Quality of Instruments for the Evaluation of Interprofessional Education in German-Speaking Countries.

    PubMed

    Ehlers, Jan P; Kaap-Fröhlich, Sylvia; Mahler, Cornelia; Scherer, Theresa; Huber, Marion

    2017-01-01

Background: More and more institutions worldwide and in German-speaking countries are developing and establishing interprofessional seminars in the undergraduate education of health professions. In order to evaluate the different didactic approaches and the different outcomes regarding the anticipated interprofessional competencies, it is necessary to apply appropriate instruments. Cross-cultural instruments are particularly helpful for international comparability. The Interprofessional Education working group of the German Medical Association (GMA) aims at identifying existing instruments for the evaluation of interprofessional education in order to make recommendations for German-speaking countries. Methods: A systematic literature search was performed on the websites of international interprofessional organisations (CAIPE, EIPEN, AIPEN), as well as in the PubMed and Cinahl databases. Reviews focusing on quantitative instruments that evaluate competencies according to the modified Kirkpatrick competency levels were sought. Psychometrics, language/country, and the setting in which the instrument was applied were recorded. Results: Six reviews out of 73 literature search hits were included. A large number of instruments were identified; however, their psychometrics and the settings in which they were applied were very heterogeneous. The instruments can mainly be assigned to Kirkpatrick levels 1, 2a and 2b. Most instruments have been developed in English, but their psychometrics were not always reported rigorously. Only very few instruments are available in German. Conclusion: It is difficult to find appropriate instruments in German. Internationally, there are different approaches and objectives in the measurement and evaluation of interprofessional competencies. The question arises whether it makes sense to translate existing instruments or to go through the lengthy process of developing new ones. 
The evaluation of interprofessional seminars with quantitative instruments remains mainly at Kirkpatrick levels 1 and 2. Levels 3 and 4 can probably only be assessed with qualitative or mixed methods. German-language instruments are necessary.

  9. Analysis of Six Reviews on the Quality of Instruments for the Evaluation of Interprofessional Education in German-Speaking Countries

    PubMed Central

    Ehlers, Jan P.; Kaap-Fröhlich, Sylvia; Mahler, Cornelia; Scherer, Theresa; Huber, Marion

    2017-01-01

Background: More and more institutions worldwide and in German-speaking countries are developing and establishing interprofessional seminars in the undergraduate education of health professions. In order to evaluate the different didactic approaches and the different outcomes regarding the anticipated interprofessional competencies, it is necessary to apply appropriate instruments. Cross-cultural instruments are particularly helpful for international comparability. The Interprofessional Education working group of the German Medical Association (GMA) aims at identifying existing instruments for the evaluation of interprofessional education in order to make recommendations for German-speaking countries. Methods: A systematic literature search was performed on the websites of international interprofessional organisations (CAIPE, EIPEN, AIPEN), as well as in the PubMed and Cinahl databases. Reviews focusing on quantitative instruments that evaluate competencies according to the modified Kirkpatrick competency levels were sought. Psychometrics, language/country, and the setting in which the instrument was applied were recorded. Results: Six reviews out of 73 literature search hits were included. A large number of instruments were identified; however, their psychometrics and the settings in which they were applied were very heterogeneous. The instruments can mainly be assigned to Kirkpatrick levels 1, 2a and 2b. Most instruments have been developed in English, but their psychometrics were not always reported rigorously. Only very few instruments are available in German. Conclusion: It is difficult to find appropriate instruments in German. Internationally, there are different approaches and objectives in the measurement and evaluation of interprofessional competencies. The question arises whether it makes sense to translate existing instruments or to go through the lengthy process of developing new ones. 
The evaluation of interprofessional seminars with quantitative instruments remains mainly at Kirkpatrick levels 1 and 2. Levels 3 and 4 can probably only be assessed with qualitative or mixed methods. German-language instruments are necessary. PMID:28890927

  10. Deep Water Ambient Noise and Mode Processing

    DTIC Science & Technology

    2014-09-30

    of the Church Opal data set showed that noise levels decreased substantially (on the order of 20 dB) below the critical depth [6]. This project is...experiment have comparable slopes, whereas the Church Opal experiment shows a much sharper decrease. This supports Shooter et al.’s hypothesis that the...Moonless Mountains shielded the Church Opal site from noise generated in the shipping lanes located primarily to the north of that underwater range [8

  11. Army National Guard Medical Readiness Training Exercises in Southern Command

    DTIC Science & Technology

    1994-06-03

pulled, the number of people in a health education class, or the number of procedures performed in order to assess the success of the program...the dental set was configured for general dentistry, but the missions often entailed pulling teeth rather than restoration of teeth. The staff...participating units' level would benefit the academic world and may reveal valuable clues as to how to modify the program. The goals and objectives

  12. Diagnostic and laboratory test ordering in Northern Portuguese Primary Health Care: a cross-sectional study

    PubMed Central

    Sá, Luísa; Teixeira, Andreia Sofia Costa; Tavares, Fernando; Costa-Santos, Cristina; Couto, Luciana; Costa-Pereira, Altamiro; Hespanhol, Alberto Pinto; Santos, Paulo

    2017-01-01

    Objectives To characterise the test ordering pattern in Northern Portugal and to investigate the influence of context-related factors, analysing the test ordered at the level of geographical groups of family physicians and at the level of different healthcare organisations. Design Cross-sectional study. Setting Northern Primary Health Care, Portugal. Participants Records about diagnostic and laboratory tests ordered from 2035 family physicians working at the Northern Regional Health Administration, who served approximately 3.5 million Portuguese patients, in 2014. Outcomes To determine the 20 most ordered diagnostic and laboratory tests in the Northern Regional Health Administration; to identify the presence and extent of variations in the 20 most ordered diagnostic and laboratory tests between the Groups of Primary Care Centres and between health units; and to study factors that may explain these variations. Results The 20 most ordered diagnostic and laboratory tests almost entirely comprise laboratory tests and account for 70.9% of the total tests requested. We can trace a major pattern of test ordering for haemogram, glucose, lipid profile, creatinine and urinalysis. There was a significant difference (P<0.001) in test orders for all tests between Groups of Primary Care Centres and for all tests, except glycated haemoglobin (P=0.06), between health units. Generally, the Personalised Healthcare Units ordered more than Family Health Units. Conclusions The results from this study show that the most commonly ordered tests in Portugal are laboratory tests, that there is a tendency for overtesting and that there is a large variability in diagnostic and laboratory test ordering in different geographical and organisational Portuguese primary care practices, suggesting that there may be considerable potential for the rationalisation of test ordering. 
The existence of Family Health Units seems to be a strong determinant in decreasing test ordering by Portuguese family physicians. Approaches to ensuring more rational testing are needed. PMID:29146654

  13. Detecting a proper patient with a help of medical data retrieval

    NASA Astrophysics Data System (ADS)

    Malecka-Massalska, Teresa; Maciejewski, Ryszard; Wasiewicz, Piotr; Zaluska, Wojciech; Ksiazek, Andrzej

    2009-06-01

Electric bioimpedance is one of the methods used to assess hydration status in hemodialyzed patients. It is also used for assessing the hydration level of peritoneally dialysed patients, patients diagnosed with neoplastic diseases, patients after organ transplantation, and those infected with HIV. During the measurements, data sets were obtained from two groups: a control group (healthy volunteers) and a test group (hemodialyzed patients). Z-scored, discretized data and data retrieval results were computed in the R language environment in order to find a simple rule for recognizing health problems. The executed experiments confirm the possibility of creating good classifiers for detecting a proper patient with the help of medical data sets, but only with prior training.
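
The z-scoring and discretization preprocessing mentioned above can be sketched as follows. This is a generic sketch (the study itself used the R environment), and the function names and bin count are illustrative:

```python
import numpy as np

def zscore_columns(X):
    """Column-wise z-scoring of a measurement matrix: each feature is
    centred to mean 0 and scaled to standard deviation 1."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

def discretize(Z, n_bins=3):
    """Equal-width discretization of z-scored values into integer bins,
    a simple preprocessing step before rule extraction."""
    edges = np.linspace(Z.min(), Z.max(), n_bins + 1)[1:-1]
    return np.digitize(Z, edges)
```

Discretizing the standardized measurements is what makes simple threshold-style rules ("low / normal / high") possible downstream.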

  14. Splitting of the Low Landau Levels into a Set of Positive Lebesgue Measure under Small Periodic Perturbations

    NASA Astrophysics Data System (ADS)

    Dinaburg, E. I.; Sinai, Ya. G.; Soshnikov, A. B.

We study the spectral properties of a two-dimensional Schrödinger operator with a uniform magnetic field and a small external periodic field whose coupling parameters are small. Representing the operator as the direct integral of one-dimensional quasi-periodic difference operators with long-range potential, and employing recent results of E. I. Dinaburg on Anderson localization for such operators (the relevant frequency is assumed to be a typical irrational), we construct the full set of generalised eigenfunctions for the low Landau bands. We also show that the Lebesgue measure of the low bands is positive and, to leading order, proportional to the small perturbation parameter.

  15. Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy

    PubMed Central

    Tessler, Morgan P.

    2016-01-01

    Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370

  16. Calculations with spectroscopic accuracy for energies, transition rates, hyperfine interaction constants, and Landé gJ-factors in nitrogen-like Kr XXX

    NASA Astrophysics Data System (ADS)

    Wang, K.; Li, S.; Jönsson, P.; Fu, N.; Dang, W.; Guo, X. L.; Chen, C. Y.; Yan, J.; Chen, Z. B.; Si, R.

    2017-01-01

Extensive self-consistent multi-configuration Dirac-Fock (MCDF) calculations and second-order many-body perturbation theory (MBPT) calculations are performed for the lowest 272 states belonging to the 2s22p3, 2s2p4, 2p5, 2s22p23l, and 2s2p33l (l=s, p, d) configurations of N-like Kr XXX. Complete and consistent data sets of level energies, wavelengths, line strengths, oscillator strengths, lifetimes, AJ, BJ hyperfine interaction constants, Landé gJ-factors, and electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2), magnetic quadrupole (M2) transition rates among all these levels are given. The present MCDF and MBPT results are compared with each other and with other available experimental and theoretical results. The mean relative difference between our two sets of level energies is only about 0.003% for these 272 levels. The accuracy of the present calculations is high enough to facilitate the identification of many observed spectral lines. These accurate data can serve as benchmarks for other calculations and can be useful for fusion plasma research and astrophysical applications.

  17. On reinitializing level set functions

    NASA Astrophysics Data System (ADS)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕt + sgn(ϕ0)(‖∇ϕ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is adopted for the spatial discretization of the equation; simply speaking, it is the second order ENO finite difference with subcell resolution near the interface. Our main interest is the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. Because the time in the equation is fictitious, one might hypothesize that all three temporal discretizations yield the same result in their stationary states. Because the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization, one might further hypothesize that the forward Euler temporal discretization should exhibit numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best of the three: compared to the Runge-Kutta method, it is twice as fast and requires half the memory for the same accuracy.
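    The reinitialization PDE above can be illustrated in a few lines. The following is a minimal 1D sketch (assumed discretization choices: Godunov upwinding with first-order differences and forward Euler pseudo-time stepping; the paper's method uses second order ENO with subcell resolution, which this sketch omits):

```python
import numpy as np

def reinitialize_1d(phi0, dx, steps=200, dt=None):
    """Drive phi toward a signed distance function via the PDE
    phi_t + sgn(phi0) * (|phi_x| - 1) = 0, using forward Euler in
    fictitious time and Godunov upwinding in space."""
    if dt is None:
        dt = 0.5 * dx  # CFL-limited pseudo-time step
    phi = phi0.copy()
    s = np.sign(phi0)
    for _ in range(steps):
        dm = np.diff(phi, prepend=phi[0]) / dx   # backward difference
        dp = np.diff(phi, append=phi[-1]) / dx   # forward difference
        # Godunov selection of the upwind gradient, by the sign of phi0
        grad_p = np.sqrt(np.maximum(np.maximum(dm, 0)**2, np.minimum(dp, 0)**2))
        grad_m = np.sqrt(np.maximum(np.minimum(dm, 0)**2, np.maximum(dp, 0)**2))
        grad = np.where(s > 0, grad_p, np.where(s < 0, grad_m, 0.0))
        phi = phi - dt * s * (grad - 1.0)
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = 3.0 * (x - 0.5)            # interface at x = 0.5, wrong slope 3
phi = reinitialize_1d(phi0, x[1] - x[0])
```

After enough pseudo-time steps, phi approaches the signed distance function x − 0.5 while the zero level set stays at x = 0.5.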

  18. Chemical trend of acceptor levels of Be, Mg, Zn, and Cd in GaAs, GaP, InP and GaN

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Chen, An-Ban

    2000-03-01

    We are investigating the “shallow” acceptor levels in the III-nitride semiconductors theoretically. The k·p Hamiltonians and a model central-cell impurity potential have been used to evaluate the ordering of the ionization energies of the impurities Be, Mg, Zn, and Cd in GaN. The impurity potential parameters were obtained from studying the same set of impurities in GaAs. These parameters were then transferred to the calculation for other hosts, leaving only one adjustable screening parameter for each host. This procedure was tested in GaP and InP and remarkably good results were obtained. When applied to GaN, this procedure produced a consistent set of acceptor levels with different k·p Hamiltonians. The calculated ionization energies for Be, Mg, Zn and Cd acceptors in GaN are, respectively, 145, 156, 192, and 312 meV for the zincblende structure, and 229, 250, 320, and 510 meV for the wurtzite structure. These and other results will be discussed.

  19. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  20. The calculation of accurate harmonic frequencies of large molecules: the polycyclic aromatic hydrocarbons, a case study

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W.; Langhoff, Stephen R.

    1997-07-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Møller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral polycyclic aromatic hydrocarbons (PAHs) where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  1. Improving the power of an efficacy study of a social and emotional learning program: application of generalizability theory to the measurement of classroom-level outcomes.

    PubMed

    Mashburn, Andrew J; Downer, Jason T; Rivers, Susan E; Brackett, Marc A; Martinez, Andres

    2014-04-01

    Social and emotional learning programs are designed to improve the quality of social interactions in schools and classrooms in order to positively affect students' social, emotional, and academic development. The statistical power of group randomized trials to detect effects of social and emotional learning programs and other preventive interventions on setting-level outcomes is influenced by the reliability of the outcome measure. In this paper, we apply generalizability theory to an observational measure of the quality of classroom interactions that is an outcome in a study of the efficacy of a social and emotional learning program called The Recognizing, Understanding, Labeling, Expressing, and Regulating emotions Approach. We estimate multiple sources of error variance in the setting-level outcome and identify observation procedures to use in the efficacy study that most efficiently reduce these sources of error. We then discuss the implications of using different observation procedures on both the statistical power and the monetary costs of conducting the efficacy study.

  2. Engaging adolescents with LD in higher order thinking about history concepts using integrated content enhancement routines.

    PubMed

    Bulgren, Janis; Deshler, Donald D; Lenz, B Keith

    2007-01-01

    The understanding and use of historical concepts specified in national history standards pose many challenges to students. These challenges include both the acquisition of content knowledge and the use of that knowledge in ways that require higher order thinking. All students, including adolescents with learning disabilities (LD), are expected to understand and use concepts of history to pass high-stakes assessments and to participate meaningfully in a democratic society. This article describes Content Enhancement Routines (CERs) to illustrate instructional planning, teaching, and assessing for higher order thinking with examples from an American history unit. Research on the individual components of Content Enhancement Routines will be illustrated with data from 1 of the routines. The potential use of integrated sets of materials and procedures across grade levels and content areas will be discussed.

  3. Bounded Error Schemes for the Wave Equation on Complex Domains

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir

    1998-01-01

    This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error-bound. A numerical example, involving the solution of Maxwell's equations inside a 2-D circular wave-guide demonstrates the efficacy of this method in comparison to others (e.g. the staggered Yee scheme) - we achieve a decrease of two orders of magnitude in the level of the L2-error.

  4. A compression algorithm for the combination of PDF sets.

    PubMed

    Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme

    The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.

  5. Carcinogenic and neurotoxic risks of acrylamide consumed through caffeinated beverages among the lebanese population.

    PubMed

    El-Zakhem Naous, Ghada; Merhi, Areej; Abboud, Martine I; Mroueh, Mohamad; Taleb, Robin I

    2018-06-06

    The present study aims to quantify acrylamide in caffeinated beverages, including American coffee, Lebanese coffee, espresso, instant coffee and hot chocolate, and to determine their carcinogenic and neurotoxic risks. A survey carried out for this purpose found that 78% of the Lebanese population consumes at least one type of caffeinated beverage. Gas chromatography-mass spectrometry analysis revealed that the average acrylamide level in caffeinated beverages is 29,176 μg/kg sample. The daily consumption of acrylamide from Lebanese coffee (10.9 μg/kg-bw/day), hot chocolate (1.2 μg/kg-bw/day) and espresso (7.4 μg/kg-bw/day) was found to be higher than the risk intake for carcinogenicity and neurotoxicity set by the World Health Organization (WHO; 0.3-2 μg/kg-bw/day) at both the mean (average consumers) and high (high consumers) dietary exposures. On the other hand, American coffee (0.37 μg/kg-bw/day) was shown to pose no carcinogenic or neurotoxic risk to consumers with a mean dietary exposure. The study shows alarming results that call for regulating the caffeinated product industry by setting legislation and standard protocols for product preparation in order to limit acrylamide content and protect consumers. To avoid carcinogenic and neurotoxic risks, we propose that the WHO/FAO set the acrylamide limit in caffeinated beverages to 7000 μg acrylamide/kg sample, a value 4-fold lower than the average acrylamide level of 29,176 μg/kg sample found in caffeinated beverages sold in the Lebanese market. Alternatively, consumers of caffeinated products, especially Lebanese coffee and espresso, would have to lower their daily consumption to 0.3-0.4 cups/day. Copyright © 2018 Elsevier Ltd. All rights reserved.
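    The per-body-weight exposure figures above come from a standard dietary-exposure calculation. The sketch below is illustrative only; the serving size, servings per day, and body weight are assumed values, not numbers from the study:

```python
def daily_exposure_ug_per_kg_bw(conc_ug_per_kg, grams_per_serving,
                                servings_per_day, body_weight_kg):
    """Daily acrylamide exposure in ug per kg body weight per day:
    (concentration in the beverage) x (mass consumed) / (body weight)."""
    intake_ug = conc_ug_per_kg * (grams_per_serving / 1000.0) * servings_per_day
    return intake_ug / body_weight_kg

# Assumed inputs: the study's average concentration of 29,176 ug/kg sample,
# a hypothetical 25 g sample per serving, 2 servings/day, 70 kg adult.
exposure = daily_exposure_ug_per_kg_bw(29176, 25, 2, 70)
```

Any result above the WHO band of 0.3-2 μg/kg-bw/day quoted in the abstract would indicate a carcinogenic/neurotoxic risk under this model.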

  6. Ethical problems in an era where disasters have become a part of daily life: A qualitative study of healthcare workers in Turkey.

    PubMed

    Civaner, M Murat; Vatansever, Kevser; Pala, Kayihan

    2017-01-01

    Natural disasters, armed conflict, migration, and epidemics today occur more frequently, causing more deaths, displacement of people and economic loss. Their burden on health systems and healthcare workers (HCWs) is growing heavier accordingly. The ethical problems that arise in disaster settings may differ from those in daily practice, and can cause preventable harm or the violation of basic human rights. Understanding the types and determinants of ethical challenges is crucial in order to find the most benevolent action while respecting the dignity of affected people. Considering the limited scope of studies on ethical challenges within disaster settings, we set about conducting a qualitative study among local HCWs. Our study was conducted in six cities of Turkey, a country where disasters are frequent, including armed conflict, terrorist attacks and a massive influx of refugees. In-depth interviews were carried out with a total of 31 HCWs with various backgrounds and levels of experience. Data analysis was done concurrently with the ongoing interviews. Several fundamental elements currently hinder ethics in relief work. The attitudes of public authorities, politicians and relief organizations, the mismanagement of impromptu humanitarian action and relief, and the media's mindset create ethical problems on the macro level, such as discrimination, unjust resource allocation and violation of personal rights, and can also directly cause or facilitate the emergence of problems on the micro level. An important factor preventing humanitarian action towards victims is insufficient competence. The duty to care during epidemics and armed conflicts becomes controversial. Many participants defend a paternalistic approach with respect to autonomy. Confidentiality and privacy are either neglected or cannot be secured. Intervention in factors on the macro level could have a significant effect on problem prevention. Improving guidelines and professional codes, as well as educating HCWs, are also areas for improvement. Ethical questions exposed within this study should also be deliberated and actualized with universal consensus in order to guide HCWs and increase humane attitudes.

  7. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiographic image. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
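    The two validation metrics named above are easy to state precisely. A minimal sketch (the toy masks and points are invented for illustration, not data from the study):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Overlap agreement between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two boundary point sets,
    given as (N, 2) and (M, 2) arrays of coordinates: the largest of the
    two directed worst-case nearest-neighbor distances."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: two 4x4 masks that overlap on exactly one row of 4 pixels.
manual = np.zeros((4, 4)); manual[:2, :] = 1
auto = np.zeros((4, 4)); auto[1:3, :] = 1
dice = dice_coefficient(manual, auto)   # 2*4 / (8 + 8) = 0.5
```

The "absolute distance" reported in the abstract is typically the mean (rather than the maximum) of the same nearest-neighbor boundary distances.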

  8. Importance of proper renormalization scale-setting for QCD testing at colliders

    DOE PAGES

    Wu, Xing -Gang; Wang, Sheng -Quan; Brodsky, Stanley J.

    2015-12-22

    A primary problem affecting perturbative quantum chromodynamics (pQCD) analyses is the lack of a method for setting the QCD running-coupling renormalization scale such that maximally precise fixed-order predictions for physical observables are obtained. The Principle of Maximum Conformality (PMC) eliminates the ambiguities associated with the conventional renormalization scale-setting procedure, yielding predictions that are independent of the choice of renormalization scheme. The QCD coupling scales and the effective number of quark flavors are set order-by-order in the pQCD series. The PMC has a solid theoretical foundation, satisfying the standard renormalization group invariance condition and all of the self-consistency conditions derived from the renormalization group. The PMC scales at each order are obtained by shifting the arguments of the strong coupling constant αs to eliminate all non-conformal {βi} terms in the pQCD series. The {βi} terms are determined from renormalization group equations without ambiguity. The correct behavior of the running coupling at each order and at each phase-space point can then be obtained. The PMC reduces in the NC → 0 Abelian limit to the Gell-Mann-Low method. In this brief report, we summarize the results of our recent application of the PMC to a number of collider processes, emphasizing the generality and applicability of this approach. A discussion of hadronic Z decays shows that, by applying the PMC, one can achieve accurate predictions for the total and separate decay widths at each order without scale ambiguities. We also show that, if one employs the PMC to determine the top-quark pair forward-backward asymmetry at the next-to-next-to-leading order level, one obtains a comprehensive, self-consistent pQCD explanation for the Tevatron measurements of the asymmetry. This accounts for the “increasing-decreasing” behavior observed by the D0 collaboration for increasing tt̄ invariant mass. At lower energies, the angular distributions of heavy quarks can be used to obtain a direct determination of the heavy quark potential. A discussion of the angular distributions of massive quarks and leptons is also presented, including the fermionic component of the two-loop corrections to the electromagnetic form factors. Furthermore, these results demonstrate that the application of the PMC systematically eliminates a major theoretical uncertainty for pQCD predictions, thus increasing collider sensitivity to possible new physics beyond the Standard Model.

  9. Concentrations of volatile organic compounds, carbon monoxide, carbon dioxide and particulate matter in buses on highways in Taiwan

    NASA Astrophysics Data System (ADS)

    Hsu, Der-Jen; Huang, Hsiao-Lin

    2009-12-01

    Although airborne pollutants in urban buses have been studied in many cities globally, long-distance buses running mainly on highways have not been addressed in this regard. This study investigates the levels of volatile organic compounds (VOCs), carbon monoxide (CO), carbon dioxide (CO2) and particulate matter (PM) in long-distance buses in Taiwan. Analytical results indicate that pollutant levels in long-distance buses are generally lower than those in urban buses. This finding is attributable to the driving speed and patterns of long-distance buses, as well as the meteorological and geographical features of the highway surroundings. The levels of benzene, toluene, ethylbenzene and xylene (BTEX) found in bus cabins exceed the proposed indoor VOC guidelines for aromatic compounds, and are likely attributable to the interior trim in the cabins. The overall average CO level is 2.3 ppm, with a higher average level on local streets (2.9 ppm) than on highways (2.2 ppm). The average CO2 level is 1493 ppm, which is higher than the guideline for non-industrial occupied settings. The average PM level in this study is lower than those in urban buses and below the IAQ guidelines set by the Taiwan EPA. However, the average PM10 and PM2.5 levels are higher than those set by the WHO. Besides the probable causes mentioned above, fewer passenger movements and less particle re-suspension from the bus floor might also contribute to the lower PM levels. Measurements of particle size distribution reveal that more than 75% of particles are submicron or smaller. These particles may come from infiltration of outdoor air. This study concludes that air exchange rates in long-distance buses should be increased in order to reduce CO2 levels. Future research on long-distance buses should focus on the emission of VOCs from brand new buses, and the sources of submicron particles in bus cabins.
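    The closing recommendation, raising air exchange rates to reduce CO2, follows from a well-mixed steady-state mass balance. The sketch below is a back-of-the-envelope illustration; the outdoor concentration, per-person CO2 generation rate, occupancy, and ventilation rate are all assumed values, not measurements from the study:

```python
def steady_state_co2_ppm(outdoor_ppm, gen_lps_per_person,
                         occupants, ventilation_lps):
    """Well-mixed steady state: C = C_outdoor + G/Q, where G is the total
    CO2 generation rate and Q the outdoor-air supply, both in L/s; the
    G/Q fraction is converted to parts per million."""
    return outdoor_ppm + 1e6 * occupants * gen_lps_per_person / ventilation_lps

# Assumed inputs: 400 ppm outdoors, 0.005 L/s CO2 per seated adult,
# 40 passengers, 180 L/s of outdoor air supplied to the cabin.
cabin_ppm = steady_state_co2_ppm(400.0, 0.005, 40, 180.0)
```

With these assumed numbers the model lands near 1500 ppm, the same order as the 1493 ppm average reported; raising the ventilation term Q is the only lever that lowers it at fixed occupancy.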

  10. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  11. Sequence stratigraphy and a revised sea-level curve for the Middle Devonian of eastern North America

    USGS Publications Warehouse

    Brett, Carlton E.; Baird, G.C.; Bartholomew, A.J.; DeSantis, M.K.; Ver Straeten, C.A.

    2011-01-01

    The well-exposed Middle Devonian rocks of the Appalachian foreland basin (Onondaga Formation; Hamilton Group, Tully Formation, and the Genesee Group of New York State) preserve one of the most detailed records of high-order sea-level oscillation cycles for this time period in the world. Detailed examination of coeval units in distal areas of the Appalachian Basin, as well as portions of the Michigan and Illinois basins, has revealed that the pattern of high-order sea-level oscillations documented in the New York-Pennsylvania section can be positively identified in all areas of eastern North America where coeval units are preserved. The persistence of the pattern of high-order sea-level cycles across such a wide geographic area suggests that these cycles are allocyclic in nature, with the primary control on deposition being eustatic sea-level oscillation, as opposed to autocyclic controls, such as sediment supply, which would be more local in their manifestation. There is strong evidence from studies of cyclicity and spectral analysis that these cycles are also related to Milankovitch orbital variations, with the short- and long-term eccentricity cycles (100 kyr and 405 kyr) being the dominant oscillations in many settings. Relative sea-level oscillations of tens of meters are likely and raise considerable issues about the driving mechanism, given that the Middle Devonian appears to record a greenhouse phase of Phanerozoic history. These new correlations lend strong support to a revised high-resolution sea-level oscillation curve for the Middle Devonian of eastern North America. Recognized third-order sequences are: Eif-1: lower Onondaga Formation; Eif-2: upper Onondaga and Union Springs formations; Eif-Giv: Oatka Creek Formation; Giv-1: Skaneateles; Giv-2: Ludlowville; Giv-3: lower Moscow; Giv-4: upper Moscow-lower Tully; and Giv-5: middle Tully-Geneseo formations. Thus, in contrast with the widely cited eustatic curve of Johnson et al. (1985), which recognizes just one major transgressive-regressive (T-R) cycle in the early-mid Givetian (If) prior to the major late Givetian Taghanic unconformity (IIa, upper Tully-Geneseo Shale), we recognize four T-R cycles: If (restricted), Ig, Ih, and Ii. We surmise that third-order sequences record eustatic sea-level fluctuations of tens of meters with periodicities of 0.8-2 myr, while their medial-scale (fourth-order) subdivisions record lesser variations, primarily of 405 kyr duration (long-term eccentricity). This high-resolution record of sea-level change provides strong evidence for high-order eustatic cycles with probable Milankovitch periodicities, despite the fact that no direct evidence for Middle Devonian glacial sediments has been found to date. © 2010.

  12. On the statistics of biased tracers in the Effective Field Theory of Large Scale Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angulo, Raul; Fasiello, Matteo; Senatore, Leonardo

    2015-09-01

    With the completion of the Planck mission, in order to continue to gather cosmological information it has become crucial to understand the Large Scale Structures (LSS) of the universe to percent accuracy. The Effective Field Theory of LSS (EFTofLSS) is a novel theoretical framework that aims to develop an analytic understanding of LSS at long distances, where inhomogeneities are small. We further develop the description of biased tracers in the EFTofLSS to account for the effect of baryonic physics and primordial non-Gaussianities, finding that new bias coefficients are required. Then, restricting to dark matter with Gaussian initial conditions, we describe the prediction of the EFTofLSS for the one-loop halo-halo and halo-matter two-point functions, and for the tree-level halo-halo-halo, matter-halo-halo and matter-matter-halo three-point functions. Several new bias coefficients are needed in the EFTofLSS, even though their contribution at a given order can be degenerate and the same parameters contribute to multiple observables. We develop a method to reduce the number of biases to an irreducible basis, and find that, at the order at which we work, seven bias parameters are enough to describe this extremely rich set of statistics. We then compare with the output of an N-body simulation where the normalization parameter of the linear power spectrum is set to σ8 = 0.9. For the lowest mass bin, we find percent level agreement up to k ≅ 0.3 h Mpc⁻¹ for the one-loop two-point functions, and up to k ≅ 0.15 h Mpc⁻¹ for the tree-level three-point functions, with the k-reach decreasing with higher mass bins. This is consistent with the theoretical estimates, and suggests that the cosmological information in LSS amenable to analytical control is much more than previously believed.

  13. On the statistics of biased tracers in the Effective Field Theory of Large Scale Structures

    DOE PAGES

    Angulo, Raul; Fasiello, Matteo; Senatore, Leonardo; ...

    2015-09-09

    With the completion of the Planck mission, in order to continue to gather cosmological information it has become crucial to understand the Large Scale Structures (LSS) of the universe to percent accuracy. The Effective Field Theory of LSS (EFTofLSS) is a novel theoretical framework that aims to develop an analytic understanding of LSS at long distances, where inhomogeneities are small. We further develop the description of biased tracers in the EFTofLSS to account for the effect of baryonic physics and primordial non-Gaussianities, finding that new bias coefficients are required. Then, restricting to dark matter with Gaussian initial conditions, we describe the prediction of the EFTofLSS for the one-loop halo-halo and halo-matter two-point functions, and for the tree-level halo-halo-halo, matter-halo-halo and matter-matter-halo three-point functions. Several new bias coefficients are needed in the EFTofLSS, even though their contribution at a given order can be degenerate and the same parameters contribute to multiple observables. We develop a method to reduce the number of biases to an irreducible basis, and find that, at the order at which we work, seven bias parameters are enough to describe this extremely rich set of statistics. We then compare with the output of an N-body simulation where the normalization parameter of the linear power spectrum is set to σ8 = 0.9. For the lowest mass bin, we find percent level agreement up to k ≃ 0.3 h Mpc⁻¹ for the one-loop two-point functions, and up to k ≃ 0.15 h Mpc⁻¹ for the tree-level three-point functions, with the k-reach decreasing with higher mass bins. In conclusion, this is consistent with the theoretical estimates, and suggests that the cosmological information in LSS amenable to analytical control is much more than previously believed.

  14. Multiprocessor sparse L/U decomposition with controlled fill-in

    NASA Technical Reports Server (NTRS)

    Alaghband, G.; Jordan, H. F.

    1985-01-01

    Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selecting a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices are presented and analyzed.
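    The Markowitz criterion referenced above scores each candidate pivot (i, j) by (r_i − 1)(c_j − 1), where r_i and c_j count nonzeros in the pivot's row and column; this bounds the fill-in that eliminating with that pivot can create. A minimal dense sketch (real sparse LU codes work on sparse data structures and add numerical-stability safeguards, which this omits):

```python
import numpy as np

def markowitz_costs(A):
    """Markowitz cost (r_i - 1) * (c_j - 1) for each nonzero entry of A;
    zero entries (not usable as pivots) get an infinite cost."""
    nz = A != 0
    r = nz.sum(axis=1)  # nonzeros per row
    c = nz.sum(axis=0)  # nonzeros per column
    return np.where(nz, (r[:, None] - 1) * (c[None, :] - 1), np.inf)

# Toy tridiagonal-like matrix: corner pivots touch the fewest nonzeros.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
cost = markowitz_costs(A)
i, j = np.unravel_index(np.argmin(cost), cost.shape)  # cheapest pivot
```

In a full factorization this scoring is recomputed (or updated) after each elimination step, and compatible pivots, pivots sharing no row or column, can be eliminated in parallel, which is the trade-off the abstract explores.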

  15. Estimating order statistics of network degrees

    NASA Astrophysics Data System (ADS)

    Chu, J.; Nadarajah, S.

    2018-01-01

    We model the order statistics of network degrees of big data sets by a range of generalised beta distributions. A three parameter beta distribution due to Libby and Novick (1982) is shown to give the best overall fit for at least four big data sets. The fit of this distribution is significantly better than the fit suggested by Olhede and Wolfe (2012) across the whole range of order statistics for all four data sets.
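    A hedged sketch of this kind of fit: method-of-moments estimation of a standard two-parameter beta distribution, a simpler stand-in for the three-parameter Libby and Novick model and the likelihood-based fitting presumably used in the paper. The synthetic sample below is illustrative, not one of the paper's data sets:

```python
import numpy as np

def beta_fit_moments(x):
    """Method-of-moments estimates (alpha, beta) for a Beta distribution
    on (0, 1), chosen to match the sample mean and variance."""
    m, v = x.mean(), x.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Sorted degrees rescaled into (0, 1) would play the role of the order
# statistics; here a synthetic Beta(2, 5) sample stands in for network data.
rng = np.random.default_rng(0)
x = np.sort(rng.beta(2.0, 5.0, size=5000))
alpha_hat, beta_hat = beta_fit_moments(x)
```

With 5000 draws the estimates recover the generating parameters (2, 5) to within sampling error; goodness of fit between candidate generalised beta models can then be compared via log-likelihood or an information criterion.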

  16. Design and development of a personal alarm monitor for use by first responders

    NASA Astrophysics Data System (ADS)

    Ehntholt, Daniel J.; Louie, Alan S.; Marenchic, Ingrid G.; Forni, Ronald J.

    2004-03-01

    This paper describes the design and development of a small, portable alarm device that can be used by first responders to an emergency event to warn of the presence of low levels of a toxic nerve gas. The device consists of a rigid reusable portion and a consumable packet that is sensitive to the presence of acetylcholinesterase inhibitors such as the nerve gases Sarin or Soman. The sensitivity level of the alarm is set to be at initial physiological response at the meiosis level, orders of magnitude below lethal concentrations. The AChE enzyme used is specific for nerve-type toxins. A color development reaction is used to demonstrate continued activity of the enzyme over its twelve-hour operational cycle.

  17. Variable mixer propulsion cycle

    NASA Technical Reports Server (NTRS)

    Rundell, D. J.; Mchugh, D. P.; Foster, T.; Brown, R. H. (Inventor)

    1978-01-01

    A design technique, method and apparatus are delineated for controlling the bypass gas stream pressure and varying the bypass ratio of a mixed flow gas turbine engine in order to achieve improved performance. The disclosed embodiments each include a mixing device for combining the core and bypass gas streams. The variable area mixing device permits the static pressures of the core and bypass streams to be balanced prior to mixing at widely varying bypass stream pressure levels. The mixed flow gas turbine engine therefore operates efficiently over a wide range of bypass ratios and the dynamic pressure of the bypass stream is maintained at a level which will keep the engine inlet airflow matched to an optimum design level throughout a wide range of engine thrust settings.

  18. Exercise order affects the total training volume and the ratings of perceived exertion in response to a super-set resistance training session

    PubMed Central

    Balsamo, Sandor; Tibana, Ramires Alsamir; Nascimento, Dahan da Cunha; de Farias, Gleyverton Landim; Petruccelli, Zeno; de Santana, Frederico dos Santos; Martins, Otávio Vanni; de Aguiar, Fernando; Pereira, Guilherme Borges; de Souza, Jéssica Cardoso; Prestes, Jonato

    2012-01-01

    The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise order. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method by using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl order. PMID:22371654

  19. Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.

    2017-11-01

    Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second order (MP2) perturbation theory, an accurate yet relatively less expensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2 level geometry optimisation with correlation consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), molecular tailoring approach (MTA) is used for single point energy evaluation at MP2/aVTZ level for the estimation of MP2 level binding energies (BEs) at complete basis set (CBS) limit. The minimal nature of the clusters up to n ≤ 8 is confirmed by performing vibrational frequency calculations at MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via grafted MTA (GMTA) method. The zero point energy (ZPE) corrections are done for all the isomers lying within 1 kcal/mol of the lowest energy one. The resulting frequencies in N-H region (2900-3500 cm-1) and in O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied for calculating the BEs of these clusters at coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with aVDZ basis set. This work thus represents an art of the possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.
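    Complete-basis-set estimates of the kind mentioned above are commonly obtained by two-point extrapolation of correlation energies across basis-set cardinal numbers (aVDZ = 2, aVTZ = 3). The inverse-cube formula below is the generic Helgaker-style extrapolation, shown as a sketch; the paper's own CBS values are built from MTA-based energies.

```python
def cbs_extrapolate(e_x, e_y, x=2, y=3):
    """Two-point inverse-cubic extrapolation of correlation energies to the
    complete-basis-set limit:
        E_CBS = (y**3 * E_y - x**3 * E_x) / (y**3 - x**3)
    where x and y are the cardinal numbers of the two basis sets."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)
```

    Because correlation energies converge from above, the extrapolated value lies below the larger-basis result.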

  20. Three- and four-body nonadditivities in nucleic acid tetramers: a CCSD(T) study.

    PubMed

    Pitonák, M; Neogrády, P; Hobza, P

    2010-02-14

    Three- and four-body nonadditivities in the uracil tetramer (in DNA-like geometry) and the GC step (in crystal geometry) were investigated at various levels of the wave-function theory: HF, MP2, MP3, L-CCD, CCSD and CCSD(T). All of the calculations were performed using the 6-31G**(0.25,0.15) basis set, whereas the HF, MP2 and the MP3 nonadditivities were, for the sake of comparison, also determined with the much larger aug-cc-pVDZ basis set. The HF and MP2 levels do not provide reliable values for many-body terms, making it necessary to go beyond the MP2 level. The benchmark CCSD(T) three- and four-body nonadditivities are reasonably well reproduced at the MP3 level, and almost quantitative agreement is obtained (fortuitously) either on the L-CCD level or as an average of the MP3 and the CCSD results. Reliable values of many-body terms (especially their higher-order correlation contributions) are obtained already when the rather small 6-31G**(0.25,0.15) basis set is used. The four-body term is much smaller when compared to the three-body terms, but it is definitely not negligible, e.g. in the case of the GC step it represents about 16% of all of the three- and four-body terms. While investigating the geometry dependence of many-body terms for the GG step at the MP3/6-31G**(0.25,0.15) level, we found that it is necessary to include at least three-body terms in the determination of optimal geometry parameters.

  1. A Global Rapid Integrated Monitoring System for Water Cycle and Water Resource Assessment (Global-RIMS)

    NASA Technical Reports Server (NTRS)

    Roads, John; Voeroesmarty, Charles

    2005-01-01

    The main focus of our work was to solidify underlying data sets, the data processing tools and the modeling environment needed to perform a series of long-term global and regional hydrological simulations leading eventually to routine hydrometeorological predictions. A water and energy budget synthesis was developed for the Mississippi River Basin (Roads et al. 2003), in order to understand better what kinds of errors exist in current hydrometeorological data sets. This study is now being extended globally with a larger number of observations and model based data sets under the new NASA NEWS program. A global comparison of a number of precipitation data sets was subsequently carried out (Fekete et al. 2004) in which it was further shown that reanalysis precipitation has substantial problems, which subsequently led us to the development of a precipitation assimilation effort (Nunes and Roads 2005). We believe that, given current levels of model skill in predicting precipitation, precipitation assimilation is necessary to obtain the appropriate land surface forcing.

  2. The patient’s anxiety before seeing a doctor and her/his hospital choice behavior in China

    PubMed Central

    2012-01-01

    Background The patient’s anxiety before seeing a doctor may influence her/his hospital choice behavior through various ways. In order to explore why high level hospitals were overused by patients and why low level hospitals were not fully used by patients in China, this study was set up to test whether and to what extent the patient’s anxiety before seeing a doctor influenced her/his hospital choice behavior in China. Methods This study commissioned a large-scale 2009–2010 national resident household survey (N=4,853) in China, and in this survey the Self-Rating Anxiety Scale (SAS) was employed to help patients assess their anxiety before seeing a doctor. Specified ordered probit models were established to analyze the survey dataset. Results When the patient had high level of anxiety before seeing a doctor, her/his level of anxiety could not only predict that she/he was more likely to choose the high level hospital, but also accurately predict which level of hospital she/he would choose; when the patient had low level of anxiety before seeing a doctor, her/his level of anxiety could only predict that she/he was more likely to choose the low level hospital, but it couldn’t clearly predict which level of hospital she/he would choose. Conclusion The patient with high level of anxiety had the strong consistent bias when she/he chose a hospital (she/he always preferred the high level hospital), while the patient with low level of anxiety didn’t have such consistent bias. PMID:23270526
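    The ordered probit models used for the hospital-choice analysis map a linear index and a set of ordered thresholds to probabilities over ranked categories (here, hospital levels). The sketch below shows the generic model class only; the index, thresholds, and category labels are illustrative, not the study's estimated specification.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ordered_probit_probs(xb, cuts):
    """Probability of each ordered outcome category (e.g. hospital level)
    for an ordered probit with linear index xb and thresholds cuts."""
    c = [-np.inf] + list(cuts) + [np.inf]
    return np.array([norm_cdf(c[k + 1] - xb) - norm_cdf(c[k] - xb)
                     for k in range(len(c) - 1)])
```

    A larger index xb (e.g. driven by higher anxiety) shifts probability mass toward the higher categories, which is the qualitative pattern the study reports.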

  3. The SOBANE strategy for the management of risk, as applied to whole-body or hand-arm vibration.

    PubMed

    Malchaire, J; Piette, A

    2006-06-01

    The objective was to develop a coherent set of methods to be used effectively in industry to prevent and manage the risks associated with exposure to vibration, by coordinating the progressive intervention of the workers, their management, the occupational health and safety (OHS) professionals and the experts. The methods were developed separately for the exposure to whole-body and hand-arm vibration. The SOBANE strategy of risk prevention includes four levels of intervention: level 1, Screening; level 2, Observation; level 3, Analysis; and level 4, Expertise. The methods making it possible to apply this strategy were developed for 14 types of risk factors. The article presents the methods specific to the prevention of the risks associated with the exposure to vibration. The strategy is similar to those published for the risks associated with exposure to noise, heat and musculoskeletal disorders. It explicitly recognizes the qualifications of the workers and their management with regard to the work situation and shares the principle that measuring the exposure of the workers is not necessarily the first step in order to improve these situations. It attempts to optimize the recourse to the competences of the OHS professionals and the experts, in order to come more rapidly, effectively and economically to practical control measures.

  4. Online sea ice data platform: www.seaiceportal.de

    NASA Astrophysics Data System (ADS)

    Nicolaus, Marcel; Asseng, Jölund; Bartsch, Annekathrin; Bräuer, Benny; Fritzsch, Bernadette; Grosfeld, Klaus; Hendricks, Stefan; Hiller, Wolfgang; Heygster, Georg; Krumpen, Thomas; Melsheimer, Christian; Ricker, Robert; Treffeisen, Renate; Weigelt, Marietta; Nicolaus, Anja; Lemke, Peter

    2016-04-01

    There is an increasing public interest in sea ice information from both Polar Regions, which requires up-to-date background information and data sets at different levels for various target groups. In order to serve this interest and need, seaiceportal.de (originally: meereisportal.de) was developed as a comprehensive German knowledge platform on sea ice and its snow cover in the Arctic and Antarctic. It was launched in April 2013. Since then, the content and selection of data sets increased and the data portal received increasing attention, also from the international science community. Meanwhile, we are providing near-real time and archive data of many key parameters of sea ice and its snow cover. The data sets result from measurements acquired by various platforms as well as numerical simulations. Satellite observations of sea ice concentration, freeboard, thickness and drift are available as gridded data sets. Sea ice and snow temperatures and thickness as well as atmospheric parameters are available from autonomous platforms (buoys). Additional ship observations, ice station measurements, and mooring time series are compiled as data collections over the last decade. In parallel, we are continuously extending our meta-data and uncertainty information for all data sets. In addition to the data portal, seaiceportal.de provides general comprehensive background information on sea ice and snow as well as expert statements on recent observations and developments. This content is mostly in German in order to complement the various existing international sites for the German speaking public. We will present the portal, its content and function, but we are also asking for direct user feedback.

  5. Deterministic nonlinear phase gates induced by a single qubit

    NASA Astrophysics Data System (ADS)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  6. Real-time monitoring of the human alertness level

    NASA Astrophysics Data System (ADS)

    Alvarez, Robin; del Pozo, Francisco; Hernando, Elena; Gomez, Eduardo; Jimenez, Antonio

    2003-04-01

    Many accidents are associated with a driver or machine operator's alertness level. Drowsiness often develops as a result of repetitive or monotonous tasks, uninterrupted by external stimuli. In order to enhance safety levels, it would be most desirable to monitor the individual's level of attention. In this work, changes in the power spectrum of the electroencephalographic signal (EEG) are associated with the subject's level of attention. This study reports on the initial research carried out in order to answer the following important questions: (i) Does a trend exist in the shape of the power spectrum which will indicate a subject's alertness state (drowsy, relaxed or alert)? (ii) What points on the cortex are most suitable to detect drowsiness and/or high alertness? (iii) What parameters in the power spectrum are most suitable to establish a workable alertness classification in human subjects? In this work, we answer these questions and combine power spectrum estimation and artificial neural network techniques to create a non-invasive and real-time system able to classify EEG into three levels of attention: High, Relaxed and Drowsiness. The classification is made every 10 seconds or more, a suitable time span for giving an alarm signal if the individual's level of alertness is insufficient. This time span is set by the user. The system was tested on twenty subjects. High and relaxed attention levels were measured at randomised hours of the day, and the drowsiness level was measured in the morning after one night of sleep deprivation.
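    Power-spectrum features of the kind the study uses can be sketched with a simple FFT periodogram and band-power ratios. The thresholds and the three-way rule below are purely illustrative stand-ins for the paper's neural-network classifier.

```python
import numpy as np

def band_power(signal, fs, band):
    """Relative power of a signal in a frequency band (edges in Hz),
    computed from the FFT periodogram; DC is excluded from the total."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    in_band = (freqs >= lo) & (freqs < hi)
    return psd[in_band].sum() / psd[1:].sum()

def classify(signal, fs=128):
    """Crude three-way alertness rule on the theta (4-8 Hz) and alpha
    (8-13 Hz) bands; thresholds are illustrative, not from the paper."""
    alpha = band_power(signal, fs, (8, 13))
    theta = band_power(signal, fs, (4, 8))
    if theta > alpha:
        return "drowsy"
    return "relaxed" if alpha > 0.3 else "alert"
```

    In practice the paper feeds such spectral parameters to an artificial neural network rather than fixed thresholds.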

  7. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires very long running times and large computer memory to implement. With the objective of reducing memory and running time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
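    The universal-portfolio idea underlying the paper can be sketched generically: the day-t portfolio is the wealth-weighted average of constant rebalanced portfolios. The Monte Carlo version below samples the simplex with a Dirichlet distribution; it is a generic Cover-style sketch, not the paper's multivariate-normal-generated or finite-order variant.

```python
import numpy as np

def universal_portfolio(price_relatives, n_samples=2000, seed=0):
    """Terminal wealth of a Cover-style universal portfolio approximated by
    Monte Carlo over constant rebalanced portfolios (CRPs) on the simplex.
    price_relatives: (T, m) array of day-over-day price ratios."""
    rng = np.random.default_rng(seed)
    T, m = price_relatives.shape
    b = rng.dirichlet(np.ones(m), size=n_samples)   # candidate CRPs
    wealth = np.ones(n_samples)                     # wealth of each CRP
    total = 1.0
    for t in range(T):
        weights = wealth / wealth.sum()
        bt = weights @ b                            # wealth-weighted portfolio
        total *= bt @ price_relatives[t]
        wealth *= b @ price_relatives[t]
    return total
```

    The finite-order variant studied in the paper restricts the averaging to a bounded window of past prices to cut memory and running time.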

  8. Difference magnitude is not measured by discrimination steps for order of point patterns.

    PubMed

    Protonotarios, Emmanouil D; Johnston, Alan; Griffin, Lewis D

    2016-07-01

    We have shown in previous work that the perception of order in point patterns is consistent with an interval scale structure (Protonotarios, Baum, Johnston, Hunter, & Griffin, 2014). The psychophysical scaling method used relies on the confusion between stimuli with similar levels of order, and the resulting discrimination scale is expressed in just-noticeable differences (jnds). As with other perceptual dimensions, an interesting question is whether suprathreshold (perceptual) differences are consistent with distances between stimuli on the discrimination scale. To test that, we collected discrimination data, and data based on comparison of perceptual differences. The stimuli were jittered square lattices of dots, covering the range from total disorder (Poisson) to perfect order (square lattice), roughly equally spaced on the discrimination scale. Observers picked the most ordered pattern from a pair, and the pair of patterns with the greatest difference in order from two pairs. Although the judgments of perceptual difference were found to be consistent with an interval scale, like the discrimination judgments, no common interval scale that could predict both sets of data was possible. In particular, the midpattern of the perceptual scale is 11 jnds away from the ordered end, and 5 jnds from the disordered end of the discrimination scale.

  9. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    PubMed

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. 
Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most prevalent problems in community-living older adults.
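    The Delphi consensus rule used above (≥70% agreement) is simple to state precisely; the sketch below applies it to a 41-member panel with illustrative vote counts, not the study's raw data.

```python
def reaches_consensus(votes, threshold=0.70):
    """A category is retained when the fraction of panellists endorsing it
    is at least the consensus threshold (>=70% in the study).
    votes: iterable of 1 (endorse) / 0 (reject)."""
    return sum(votes) / len(votes) >= threshold
```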

  10. Coseismic slip distribution of the 1923 Kanto earthquake, Japan

    USGS Publications Warehouse

    Pollitz, F.F.; Nyst, M.; Nishimura, T.; Thatcher, W.

    2005-01-01

    The slip distribution associated with the 1923 M = 7.9 Kanto, Japan, earthquake is reexamined in light of new data and modeling. We utilize a combination of first-order triangulation, second-order triangulation, and leveling data in order to constrain the coseismic deformation. The second-order triangulation data, which have not been utilized in previous studies of 1923 coseismic deformation, are associated with only slightly smaller errors than the first-order triangulation data and expand the available triangulation data set by about a factor of 10. Interpretation of these data in terms of uniform-slip models in a companion study by Nyst et al. shows that a model involving uniform coseismic slip on two distinct rupture planes explains the data very well and matches or exceeds the fit obtained by previous studies, even one which involved distributed slip. Using the geometry of the Nyst et al. two-plane slip model, we perform inversions of the same geodetic data set for distributed slip. Our preferred model of distributed slip on the Philippine Sea plate interface has a moment magnitude of 7.86. We find slip maxima of ~8-9 m beneath Odawara and ~7-8 m beneath the Miura peninsula, with a roughly 2:1 ratio of strike-slip to dip-slip motion, in agreement with a previous study. However, the Miura slip maximum is imaged as a more broadly extended feature in our study, with the high-slip region continuing from the Miura peninsula to the southern Boso peninsula region. The second-order triangulation data provide good evidence for ~3 m right-lateral strike slip on a 35-km-long splay structure occupying the volume between the upper surface of the descending Philippine Sea plate and the southern Boso peninsula. Copyright 2005 by the American Geophysical Union.
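    Geodetic slip inversions of this kind are, at their core, regularised linear least-squares problems relating slip on fault patches to surface displacements through elastic Green's functions. The sketch below shows the generic damped form only; the paper's actual inversion involves its own Green's functions, smoothing, and constraints.

```python
import numpy as np

def invert_slip(G, d, lam=1.0):
    """Damped least-squares slip inversion: minimise
        ||G s - d||^2 + lam^2 ||s||^2
    by stacking the damping rows onto the design matrix.
    G: (n_obs, n_patches) Green's functions; d: observed displacements."""
    m = G.shape[1]
    A = np.vstack([G, lam * np.eye(m)])
    b = np.concatenate([d, np.zeros(m)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s
```

    The damping weight lam trades data fit against slip-model roughness; real studies often replace the identity with a spatial smoothing operator.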

  11. A computer program to generate equations of motion matrices, L217 (EOM). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Clemmons, R. E.

    1979-01-01

    The equations of motion program L217 formulates the matrix coefficients for a set of second order linear differential equations that describe the motion of an airplane relative to its level equilibrium flight condition. Aerodynamic data from FLEXSTAB or Doublet Lattice (L216) programs can be used to derive the equations for quasi-steady or full unsteady aerodynamics. The data manipulation and the matrix coefficient formulation are described.
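    A set of second-order linear differential equations like those assembled by L217 is conventionally recast in first-order state-space form for analysis. The companion-matrix construction below is the standard textbook step, shown as a sketch; it is not code from L217 itself.

```python
import numpy as np

def to_state_space(M, C, K):
    """First-order form x' = A x of M q'' + C q' + K q = 0,
    with state x = [q, q'] (the standard companion construction)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return A
```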

  12. Man-portable Vector Time Domain EMI Sensor and Discrimination Processing

    DTIC Science & Technology

    2012-04-16

    points of each winding are coincident. Each receiver coil is wound helically on a set of 10 grooves etched on the surface of the cube; 36-gauge wire...subset of the data, and inject various levels of noise into the position of the MPV in order to gauge the robustness of the discrimination results...as possible. The quantity φ also provides a metric to gauge goodness of fit, being essentially an average percent error: Benjamin Barrowes, Kevin

  13. Explorations of Individual Differences Relevant to High Level Skill.

    DTIC Science & Technology

    1981-12-15

    discrete pairs that were responded to in a set order. Many real life tasks are ongoing and the performer must interweave them in some manner. It is... life skill that might relate to the predictors? 18. Individual Differences in the Rate of Repetitive Activity Recently we have begun to investigate...

  14. Performance characterization of a low power hydrazine arcjet

    NASA Technical Reports Server (NTRS)

    Knowles, S. C.; Smith, W. W.; Curran, F. M.; Haag, T. W.

    1987-01-01

    Hydrazine arcjets, which offer substantial performance advantages over alternatives in geosynchronous satellite stationkeeping applications, have undergone development addressing startup, materials compatibility, lifetime, and power conditioning unit design issues. Devices in the 1000-3000 W output range have been characterized for several different electrode configurations. Constrictor length and diameter, electrode gap setting, and vortex strength have been parametrically studied in order to ascertain the influence of each on specific impulse and efficiency; specific impulse levels greater than 700 sec have been achieved.

  15. The Total Force: Cultural Considerations for the Future of the Total Force Integration

    DTIC Science & Technology

    2017-03-31

    force. In order to achieve a new TFI culture the Air Force must make a plan, and provide the education, tools, and road map to cultivate a new set of...Force leadership; and taught holistically at all levels, ranks; and in every professional military education institution. 3 Part I...complex than transferring assets and improving individual units. Cohen's principles spoke to a need for a cultural transformation. Today's TFI concept

  16. A system to improve medication safety in the setting of acute kidney injury: initial provider response.

    PubMed

    McCoy, Allison Beck; Peterson, Josh F; Gadd, Cynthia S; Danciu, Ioana; Waitman, Lemuel R

    2008-11-06

    Clinical decision support systems can decrease common errors related to inappropriate or excessive dosing for nephrotoxic or renally cleared drugs. We developed a comprehensive medication safety intervention with varying levels of workflow intrusiveness within computerized provider order entry to continuously monitor for and alert providers about early-onset acute kidney injury. Initial provider response to the interventions shows potential success in improving medication safety and suggests future enhancements to increase effectiveness.

  17. Develop Measures of Effectiveness and Deployment Optimization Rules for Networked Ground Micro-Sensors

    DTIC Science & Technology

    2001-05-01

    types and total #), Control of Sensors (Scheduling), Coverage (Time & Area); Uncontrollable Inputs: Weather, Atmospheric Effects, Equipment...are widely scattered and used to cue or wake up other higher-level sensors. Trip line sensors consist of some combination of acoustic, seismic and...Employ a mix of different sensor types in order to increase detection probability. 4.4.4.2 Minimize Battery Power: set scheduled turn-on and turn-off

  18. Benchmarking density functional tight binding models for barrier heights and reaction energetics of organic molecules.

    PubMed

    Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus

    2017-09-30

    Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.
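    Benchmark studies of this kind typically summarise method accuracy with mean absolute error and root-mean-square error against reference barrier heights; the helper below sketches those standard statistics (the numbers in the test are illustrative, not the paper's results).

```python
import numpy as np

def benchmark_errors(computed, reference):
    """Mean absolute error and RMSE of computed values (e.g. barrier
    heights, kcal/mol) against reference values."""
    err = np.asarray(computed) - np.asarray(reference)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```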

  19. Fluorescence multi-scale endoscopy and its applications in the study and diagnosis of gastro-intestinal diseases: set-up design and software implementation

    NASA Astrophysics Data System (ADS)

    Gómez-García, Pablo Aurelio; Arranz, Alicia; Fresno, Manuel; Desco, Manuel; Mahmood, Umar; Vaquero, Juan José; Ripoll, Jorge

    2015-06-01

    Endoscopy is frequently used in the diagnosis of several gastro-intestinal pathologies such as Crohn's disease, ulcerative colitis or colorectal cancer. It has great potential as a non-invasive screening technique capable of detecting suspicious alterations in the intestinal mucosa, such as inflammatory processes. However, these early lesions usually cannot be detected with conventional endoscopes, due to lack of cellular detail and the absence of specific markers. Due to this lack of specificity, the development of new endoscopy technologies, which are able to show microscopic changes in the mucosa structure, are necessary. We here present a confocal endomicroscope, which in combination with a wide field fluorescence endoscope offers fast and specific macroscopic information through the use of activatable probes and a detailed analysis at cellular level of the possible altered tissue areas. This multi-modal and multi-scale imaging module, compatible with commercial endoscopes, combines near-infrared fluorescence (NIRF) measurements (enabling specific imaging of markers of disease and prognosis) and confocal endomicroscopy making use of a fiber bundle, providing cellular-level resolution. The system will be used in animal models exhibiting gastro-intestinal diseases in order to analyze the use of potential diagnostic markers in colorectal cancer. In this work, we present in detail the set-up design and the software implementation in order to obtain simultaneous RGB/NIRF measurements and short confocal scanning times.

  20. Bioaccumulation of heavy metals in crop plants grown near Almeda Textile Factory, Adwa, Ethiopia.

    PubMed

    Gitet, Hintsa; Hilawie, Masho; Muuz, Mehari; Weldegebriel, Yirgaalem; Gebremichael, Dawit; Gebremedhin, Desta

    2016-09-01

    The contents of heavy metals cadmium (Cd), cobalt (Co), chromium (Cr), copper (Cu), manganese (Mn), nickel (Ni), lead (Pb), and zinc (Zn) present in water (wastewater and wetland), soils, and food crops collected from the vicinity of Almeda Textile Factory were quantified using Flame Atomic Absorption Spectrometer (FAAS) in order to assess the environmental impact of the textile factory. The contents of heavy metals determined in the wastewater were found below the recommended limit set by WHO and United States Environmental Protection Agency (US EPA) except for Cr, which was found slightly higher than WHO permissible limit. Besides, the contents of the heavy metals determined in soils were below the permissible level of FAO/WHO and Canada maximum allowable limits. Moreover, only the concentrations of Cd and Pb were found above the permissible level set by FAO/WHO in the crop plants studied. Generally, the mean concentrations of heavy metals in the plants were in the decreasing order of: Mn > Zn > Cu > Pb > Ni > Co > Cr > Cd. Nevertheless, higher bioconcentration factor (BCF) was found for Cd (0.108-1.156) followed by Zn (0.081-0.499). In conclusion, comparison of heavy metal concentrations with the permissible limits in all collected sample types i.e. water, soil, and crop plants did not show significant pollution from the factory.
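    The bioconcentration factor (BCF) reported above is the ratio of the metal concentration in plant tissue to that in the surrounding soil; a minimal sketch with illustrative values follows.

```python
def bioconcentration_factor(c_plant, c_soil):
    """BCF = concentration in plant tissue / concentration in soil
    (same units, e.g. mg/kg dry weight); BCF > 1 signals accumulation."""
    return c_plant / c_soil
```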

  1. Correlates of Social Functioning in Autism Spectrum Disorder: The Role of Social Cognition

    PubMed Central

    Bishop-Fitzpatrick, Lauren; Mazefsky, Carla A.; Eack, Shaun M.; Minshew, Nancy J.

    2017-01-01

    Background: Individuals with autism spectrum disorder (ASD) by definition experience marked challenges with social function, but few modifiable predictors of social functioning in ASD have been identified in extant research. This study hypothesized that deficits in social cognition and motor function may help explain poor social functioning in individuals with ASD. Method: Cross-sectional data from 108 individuals with ASD without intellectual disability, ages 9 through 27.5, were used to assess the relationship of social cognition and motor function with social functioning. Results: Hierarchical multiple regression analyses revealed that greater social cognition, but not motor function, was significantly associated with better social functioning when controlling for sex, age, and intelligence quotient. Post-hoc analyses revealed that better performance on second-order false belief tasks was associated with higher levels of socially adaptive behavior and lower levels of social problems. Conclusions: Our findings support the development and testing of interventions that target social cognition in order to improve social functioning in individuals with ASD. Interventions that teach generalizable skills to help people with ASD better understand social situations and develop competency in advanced perspective taking have the potential to create more durable change, because their effects can be applied to a wide and varied set of situations rather than a prescribed set of rehearsed situations. PMID:28839456
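
    Hierarchical multiple regression, as used above, enters the control variables first and then adds the predictor of interest; the increase in R² is the quantity of interest. A minimal sketch with fully synthetic data (the variable names mirror the study, but the values are simulated, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 108  # same sample size as the study; the data here are synthetic

# Step-1 covariates: sex, age, IQ (all simulated).
sex = rng.integers(0, 2, n).astype(float)
age = rng.uniform(9, 27.5, n)
iq = rng.normal(100, 15, n)
social_cognition = rng.normal(0, 1, n)

# Simulated outcome: social functioning depends on social cognition.
social_functioning = 0.6 * social_cognition + 0.01 * iq + rng.normal(0, 1, n)

def r_squared(columns, y):
    """R^2 of an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared([sex, age, iq], social_functioning)          # step 1
r2_full = r_squared([sex, age, iq, social_cognition],
                    social_functioning)                          # step 2
print(f"step 1 R^2: {r2_base:.3f}, step 2 R^2: {r2_full:.3f}, "
      f"delta R^2: {r2_full - r2_base:.3f}")
```

    The ΔR² between steps is what hierarchical regression attributes to the added predictor over and above the covariates.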

  2. Aircraft Pitch Control With Fixed Order LQ Compensators

    NASA Technical Reports Server (NTRS)

    Green, James; Ashokkumar, C. R.; Homaifar, Abdollah

    1997-01-01

    This paper considers a given set of fixed-order compensators for the aircraft pitch control problem. By augmenting the compensator variables to the original state equations of the aircraft, a new dynamic model is obtained from which an LQ controller is sought. While the fixed-order compensators can place a set of desired poles in a specified region, the LQ formulation provides inherent robustness properties. The time response for ride quality is significantly improved with a set of dynamic compensators.
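
    The augmentation idea can be sketched numerically: append the compensator state to the plant state and solve one LQ problem for the combined system. The model below is a toy discretized double integrator standing in for pitch dynamics with one integral compensator state, not the paper's aircraft model; the Riccati equation is solved by plain fixed-point iteration.

```python
import numpy as np

# Toy sketch of LQ design on an augmented model (NOT the paper's aircraft):
# a discretized double integrator stands in for pitch dynamics, and one
# compensator (integral-of-pitch) state is appended before solving LQR.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # pitch, pitch rate
B = np.array([[0.5 * dt**2], [dt]])      # elevator input

# Augment with a compensator state: z_{k+1} = z_k + dt * pitch.
A_aug = np.block([[A, np.zeros((2, 1))],
                  [np.array([[dt, 0.0]]), np.eye(1)]])
B_aug = np.vstack([B, [[0.0]]])

Q = np.eye(3)   # weights on the augmented state
R = np.eye(1)   # control weight

# Discrete algebraic Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
    P = Q + A_aug.T @ P @ (A_aug - B_aug @ K)

rho = max(abs(np.linalg.eigvals(A_aug - B_aug @ K)))
print("LQ gain:", K.round(3), "closed-loop spectral radius:", round(float(rho), 3))
```

    The closed-loop spectral radius below 1 confirms the augmented LQ design stabilizes plant and compensator states together, which is the point of the augmentation.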

  4. Too Exhausted to Perform at the Highest Level? On the Importance of Self-control Strength in Educational Settings

    PubMed Central

    Englert, Chris; Zavery, Alafia; Bertrams, Alex

    2017-01-01

    In order to perform at the highest level in educational settings (e.g., students in testing situations), individuals often have to control their impulses or desires (e.g., to study for an upcoming test or to prepare a course instead of spending time with the peer group). Previous research suggests that the ability to exert self-control is an important predictor of performance and behavior in educational contexts. According to the strength model, all self-control acts draw on one global energy pool whose capacity is assumed to be limited. After a first act of self-control has been performed, this resource can become temporarily depleted, which negatively affects subsequent self-control. In such a state of ego depletion, individuals tend to display impaired concentration and academic performance, fail to meet academic deadlines, or even disengage from their duties. In this mini-review, we report recent studies on ego depletion which have focused on children as well as adults in educational settings, derive practical implications for how to improve self-control strength in the realm of education and instruction, and discuss limitations regarding the assumptions of the strength model of self-control. PMID:28790963

  6. Evaluation of the appropriate use of a CIWA-Ar alcohol withdrawal protocol in the general hospital setting.

    PubMed

    Eloma, Amanda S; Tucciarone, Jason M; Hayes, Edmund M; Bronson, Brian D

    2018-01-01

    The Clinical Institute Withdrawal Assessment-Alcohol, Revised (CIWA-Ar) is an assessment tool used to quantify the severity of alcohol withdrawal syndrome (AWS) and to inform benzodiazepine treatment for alcohol withdrawal. The objective was to evaluate the prescribing patterns and appropriate use of the CIWA-Ar protocol in a general hospital setting, as determined by the presence or absence of documented AWS risk factors, patients' ability to communicate, and provider awareness of the CIWA-Ar order. This retrospective chart review included 118 encounters of hospitalized patients placed on a CIWA-Ar protocol over one year. The following data were collected for each encounter: patient demographics, admitting diagnosis, ability to communicate, admission blood alcohol level, medical specialty of the clinician ordering the CIWA-Ar, documentation of the presence or absence of established AWS risk factors, the specific parameters of the protocol ordered, the service admitted to, provider documentation of awareness of the active protocol within 48 h of the initial order, total benzodiazepine dose equivalents administered, and associated adverse events. Fifty-seven percent of patients started on a CIWA-Ar protocol had either zero or one documented risk factor for AWS (19% and 38%, respectively); 20% had no documentation of recent alcohol use; 14% were unable to communicate; and 19% of medical records lacked documentation of provider awareness of the ordered protocol. Benzodiazepine-associated adverse events were documented in 15% of encounters. The judicious use of CIWA-Ar protocols in general hospitals requires mechanisms to ensure assessment of validated alcohol withdrawal risk factors, exclusion of patients who cannot communicate, and continuity of care during transitions.

  7. The a(4) Scheme-A High Order Neutrally Stable CESE Solver

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2009-01-01

    The CESE development is driven by the belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high-order schemes, in this paper we describe a new high-order (4th-5th order) and neutrally stable CESE solver of a 1D advection equation with a constant advection speed a. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and two points at the lower time level. Because it is associated with four independent mesh variables (the numerical analogues of the dependent variable and its first-, second-, and third-order spatial derivatives) and four equations per mesh point, the new scheme is referred to as the a(4) scheme. As in the case of other similar CESE neutrally stable solvers, the a(4) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. Except for a singular case, these forms are equivalent and satisfy a space-time inversion (STI) invariance property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the a(4) scheme must be neutrally stable when it is stable. Numerically, it has been established that the scheme is stable if the value of the Courant number is less than 1/3.

  8. Work load and management in the delivery room: changing the direction of healthcare policy.

    PubMed

    Sfregola, Gianfranco; Laganà, Antonio Simone; Granese, Roberta; Sfregola, Pamela; Lopinto, Angela; Triolo, Onofrio

    2017-02-01

    Nurse staffing, increased workload, and unstable nursing unit environments are linked to negative patient outcomes, including falls and medication errors, on medical/surgical units. Considering this evidence, the aim of our study was to survey midwives' workload and work settings. We created a questionnaire and performed an online survey, obtaining information about the type and level of hospital, workload, the use of standardised procedures, and the reporting of sentinel and 'near-miss' events. We found severe understaffing in midwives' work settings and substantial underuse of the standard protocols recommended by international guidelines, especially in the South of Italy. Based on our results, we strongly suggest a change of direction in healthcare policy, oriented towards increasing the number of employed midwives so that they can fulfil their duties according to the international guidelines (especially one-to-one care). We also encourage the adoption of standardised protocols in each work setting.

  9. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
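
    The pseudo-force idea can be illustrated in one dimension: randomly placed particles that repel their neighbours with a short-range force redistribute until the domain is evenly sampled, with no holes or clusters. The sketch below is a minimal illustration of that mechanism, not the authors' actual scheme (which also adapts particle sizes and works in higher dimensions).

```python
import numpy as np

# Minimal 1-D illustration of self-organization by pseudo forces (NOT the
# paper's scheme): particles repel neighbours with a short-range force,
# which evens out the sampling so that no holes or clusters remain.
rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0.0, 1.0, 30))
h = 0.05  # interaction length scale

def gap_std(x):
    """Spread of inter-particle gaps; smaller means more uniform sampling."""
    return np.std(np.diff(np.sort(x)))

before = gap_std(x)
for _ in range(2000):
    dx = x[:, None] - x[None, :]                  # pairwise separations
    force = np.sign(dx) * np.exp(-np.abs(dx) / h) # short-range repulsion
    np.fill_diagonal(force, 0.0)
    x = np.clip(x + 1e-3 * force.sum(axis=1), 0.0, 1.0)
after = gap_std(x)
print(f"gap std before: {before:.4f}, after: {after:.4f}")
```

    The drop in the gap standard deviation is the signature of self-organization: the particle set relaxes toward an even distribution without any global mapping function.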

  10. Compilation of regional ground water monitoring data to investigate 60 years of ground water dynamics in New England

    NASA Astrophysics Data System (ADS)

    Boutt, D. F.; Weider, K. M.

    2010-12-01

    Theory suggests that ground water systems at shallow depths are sensitive to climate system dynamics but respond at differing rates, primarily because of the hydrogeologic characteristics of the aquifer. These rates are presumably controlled, to first order, by the transmissivity and hydrogeologic settings of the aquifer systems. Regional-scale modeling and understanding of this behavior is complicated by the fact that aquifer systems in glaciated regions of the North American continent often possess high degrees of heterogeneity as well as disparate hydraulic connections between aquifer systems. In order to investigate these relationships, we present the results of a regional compilation of groundwater hydraulic head data across the New England states, together with corresponding atmospheric (precipitation and temperature) and streamflow data, for a 60-year period (1950-2010). Ground water trends are calculated as normalized anomalies and analyzed with respect to regionally compiled precipitation, temperature, and streamflow. Anomalies in ground water levels are analyzed together with hydrogeologic variables such as aquifer thickness, topographic setting, and distance from the coast. The time series display decadal patterns, with ground water levels being highly variable and lagging precipitation and streamflow, pointing to site-specific and non-linear responses to changes in climate. Sites with deeper water tables respond more slowly and with larger anomalies than shallow water table sites. Tills consistently respond more quickly and have larger anomalies than outwash and stratified glacial deposits. The data set suggests that while regional patterns in ground water table response are internally consistent, the magnitude and timing of the response to wet or dry periods is extremely sensitive to the hydrogeologic characteristics of the host aquifer.
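
    A normalized anomaly expresses each observation in standard deviations from the site's long-term mean, which puts wells with very different absolute levels on a common scale. A minimal sketch with a synthetic 60-year monthly series (not the study's data):

```python
import numpy as np

# Normalized-anomaly sketch: (observation - long-term mean) / standard
# deviation. The series below is synthetic: a slow decadal cycle plus noise.
rng = np.random.default_rng(1)
months = np.arange(720)  # 60 years of monthly values
level = (10.0 + 0.5 * np.sin(2 * np.pi * months / 120)
         + rng.normal(0, 0.2, 720))

anomaly = (level - level.mean()) / level.std()
print(f"anomaly mean: {anomaly.mean():.2e}, std: {anomaly.std():.2f}")
```

    By construction the anomaly series has zero mean and unit standard deviation, so anomalies from different wells can be compared or stacked directly.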

  11. Electronic coupling matrix elements from charge constrained density functional theory calculations using a plane wave basis set

    NASA Astrophysics Data System (ADS)

    Oberhofer, Harald; Blumberger, Jochen

    2010-12-01

    We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element is calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q⁻) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, ⟨|H_ab|²⟩^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q⁻ in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility of computing electronic coupling matrix elements for extended systems where the donor, the acceptor, and the environment are treated at the quantum mechanical (QM) level.
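
    The thermal average quoted above is a root mean square over trajectory samples, rms = sqrt(mean(|H_ab|²)), which by Jensen's inequality always exceeds the mean coupling when fluctuations are present. A tiny sketch with hypothetical sample values (not the paper's trajectory data):

```python
import numpy as np

# RMS thermal average of the coupling: sqrt(mean(|H_ab|^2)). The samples
# below are hypothetical; they only illustrate that fluctuations make the
# RMS exceed the plain mean (Jensen's inequality).
h_ab = np.array([1.5, 2.0, 3.8, 7.5, 11.0, 4.2])  # |H_ab| samples, mHartree

rms = np.sqrt(np.mean(h_ab**2))
print(f"RMS coupling: {rms:.2f} mH, mean |H_ab|: {h_ab.mean():.2f} mH")
```

    Because the non-adiabatic rate scales with |H_ab|², it is this RMS, not the single minimum-energy value, that enters a thermally averaged rate estimate.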

  12. A method for monitoring intensity during aquatic resistance exercises.

    PubMed

    Colado, Juan C; Tella, Victor; Triplett, N Travis

    2008-11-01

    The aims of this study were (i) to check whether monitoring both the rhythm of execution and the perceived effort is a valid tool for reproducing the same intensity of effort in different sets of the same aquatic resistance exercise (ARE) and (ii) to assess whether this method allows the ARE to be matched in intensity to its equivalent carried out on dry land. Four healthy trained young men performed horizontal shoulder abduction and adduction (HSAb/Ad) movements in water and on dry land. Muscle activation was recorded using surface electromyography of one stabilizer and several agonist muscles. Before the final tests, the ARE movement cadence was established individually following a rhythmic digitalized sequence of beats to define the alternate HSAb/Ad movements. This cadence allowed the subject to perform 15 repetitions at a perceived exertion of 9-10 using Hydro-Tone Bells. After that, each subject performed two nonconsecutive ARE sets. The dry land exercises (one set of HSAb and one set of HSAd) were performed using a dual adjustable pulley cable motion machine, after selecting weights that allowed the same movement cadence to be maintained and the same number of repetitions to be completed in each set as in the ARE. The average normalized data were compared across exercises in order to determine possible differences in muscle activity. The results show that this method is valid for reproducing the intensity of effort in different sets of the same ARE, but not for matching the intensity level of kinematically similar land-based exercises.

  13. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose: The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition, as part of treatment research studies or for use in applied settings, and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child's language level, in order to establish a framework for comparing outcomes across intervention studies. Method: The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results: The authors recommend moving away from the term functional speech and replacing it with a developmental framework; they recommend using multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children's spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  14. Chemometric analysis of soil pollution data using the Tucker N-way method.

    PubMed

    Stanimirova, I; Zehl, K; Massart, D L; Vander Heyden, Y; Einax, J W

    2006-06-01

    N-way methods, particularly the Tucker method, are often the methods of choice when analyzing data sets arranged in three- (or higher) way arrays, which is the case for most environmental data sets. In the future, applying N-way methods will become an increasingly popular way to uncover hidden information in complex data sets. The reason for this is that classical two-way approaches such as principal component analysis are not as good at revealing the complex relationships present in data sets. This study describes in detail the application of a chemometric N-way approach, namely the Tucker method, in order to evaluate the level of pollution in soil from a contaminated site. The analyzed soil data set was five-way in nature. The samples were collected at different depths (way 1) from two locations (way 2) and the levels of thirteen metals (way 3) were analyzed using a four-step-sequential extraction procedure (way 4), allowing detailed information to be obtained about the bioavailability and activity of the different binding forms of the metals. Furthermore, the measurements were performed under two conditions (way 5), inert and non-inert. The preferred Tucker model of definite complexity showed that there was no significant difference in measurements analyzed under inert or non-inert conditions. It also allowed two depth horizons, characterized by different accumulation pathways, to be distinguished, and it allowed the relationships between chemical elements and their biological activities and mobilities in the soil to be described in detail.
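
    The core operation behind a Tucker analysis is the decomposition of an N-way array into a core tensor multiplied by a factor matrix along each mode. A minimal NumPy sketch via higher-order SVD (HOSVD) follows, on a small random three-way array (the paper's soil data set is five-way; the mechanics are identical). With full ranks the decomposition reproduces the tensor exactly; truncating the factor ranks gives the compressed Tucker model used for interpretation.

```python
import numpy as np

# Tucker decomposition sketch via HOSVD on a random 3-way array.
rng = np.random.default_rng(7)
T = rng.normal(size=(4, 5, 6))

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`."""
    moved = np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0))
    return np.moveaxis(moved, 0, mode)

# Factor matrices: left singular vectors of each unfolding.
factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
           for n in range(T.ndim)]

# Core tensor: project T onto the factor bases.
core = T
for n, U in enumerate(factors):
    core = mode_product(core, U.T, n)

# Reconstruction from core and factors (exact at full rank).
T_hat = core
for n, U in enumerate(factors):
    T_hat = mode_product(T_hat, U, n)

print("max reconstruction error:", float(np.abs(T - T_hat).max()))
```
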

  15. Memory and long-range correlations in chess games

    NASA Astrophysics Data System (ADS)

    Schaigorodsky, Ana L.; Perotti, Juan I.; Billoni, Orlando V.

    2014-01-01

    In this paper we report the existence of long-range memory in the opening moves of a chronologically ordered set of chess games, using an extensive chess database. We used two mapping rules to build discrete time series and analyzed them using two methods for detecting long-range correlations: rescaled range analysis and detrended fluctuation analysis. We found that long-range memory is related to the level of the players. When the database is filtered according to player level, we found differences in the persistence of the different subsets. For high-level players, correlations are stronger at long time scales, whereas for intermediate- and low-level players they reach their maximum value at shorter time scales. This can be interpreted as a signature of the different strategies used by players with different levels of expertise. These results are robust with respect to the mapping rules and the method employed in the analysis of the time series.
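
    Detrended fluctuation analysis, one of the two methods named above, works by splitting the cumulative profile of a series into windows, removing a local trend from each, and reading a scaling exponent off the log-log slope of the residual fluctuation versus window size. A minimal sketch (generic DFA, not the paper's exact pipeline or chess data) applied to uncorrelated noise, for which the exponent should come out near 0.5; persistent, long-range correlated series give larger values:

```python
import numpy as np

# Minimal first-order DFA: exponent ~0.5 for uncorrelated noise, larger
# for persistent (long-range correlated) series.
def dfa_exponent(x, sizes=(16, 32, 64, 128, 256)):
    profile = np.cumsum(x - np.mean(x))
    fluct = []
    for s in sizes:
        n_win = len(profile) // s
        f2 = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))                # F(s)
    slope, _ = np.polyfit(np.log(sizes), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(3)
alpha = dfa_exponent(rng.normal(size=4096))
print(f"DFA exponent for white noise: {alpha:.2f}")  # expect roughly 0.5
```
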

  16. Advanced development of atmospheric models. [SEASAT Program support

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.; Langland, R. A.; Stephens, P. L.; Welleck, R. E.; Wolff, P. M.

    1979-01-01

    A set of atmospheric analysis and prediction models was developed in support of the SEASAT Program. Existing objective analysis models, which utilize a 125x125 polar stereographic grid of the Northern Hemisphere, were modified in order to incorporate and assess the impact of (real or simulated) satellite data in the analysis of a two-day meteorological scenario in January 1979. Program/procedural changes included: (1) a provision to utilize winds in the sea level pressure and multi-level height analyses (1000-100 mb); (2) the capability to perform a pre-analysis at two control levels (1000 mb and 250 mb); (3) a greater degree of wind- and mass-field coupling, especially at these control levels; (4) an improved facility to bogus the analyses based on the results of the pre-analysis; and (5) a provision to utilize SIRS satellite thickness values and cloud motion vectors in the multi-level height analysis.

  17. "This Is How We Work Here": Informal Logic and Social Order in Primary Health Care Services in Mexico City.

    PubMed

    Saavedra, Nayelhi Itandehui; Berenzon, Shoshana; Galván, Jorge

    2017-07-01

    People who work in health care facilities participate in a shared set of tacit agreements, attitudes, habits, and behaviors that contribute to the functioning of those institutions but can also cause conflict. This phenomenon has been addressed tangentially in the study of bureaucratic practices in governmental agencies, but it has not been carefully explored in the specific context of public health care centers. To explore it, we analyzed a series of encounters among staff and patients, as well as the situations surrounding the services offered, in public primary care health centers in Mexico City, based on Erving Goffman's concepts of social order, encounter, and situation, and on the concepts of formal and informal logic. In a descriptive study over the course of 2 years, we carried out systematic observations in 19 health centers and conducted interviews with medical, technical, and administrative staff, psychologists, social workers, and patients. We recorded these observations in field notes and performed reflexive analysis with readings on three different levels. Interviews were recorded, transcribed, and analyzed through the identification of thematic categories and subcategories. Information related to encounters and situations from the field notes and interviews was selected to triangulate the materials. We found the social order prevailing among staff to be based on a combination of status markers, such as educational level, seniority, and employee versus contractor status, which define the distribution of workloads, material resources, and space. Although this system generates conflicts, it also contributes to the smooth functioning of the health centers. The daily encounters and situations in all of these health centers allow for a set of informal practices that provide a temporary resolution of the contradictions the institution poses for its workers.

  18. [Current status on management and needs related to education and training programs set for new employees at the provincial Centers for Disease Control and Prevention, in China].

    PubMed

    Ma, J; Meng, X D; Luo, H M; Zhou, H C; Qu, S L; Liu, X T; Dai, Z

    2016-06-01

    This study aimed to understand the management status of education and training, and the training needs, of new employees at provincial CDCs in China during 2012-2014, so as to provide a basis for setting up related programs at all CDC levels. Based on data gathered through questionnaire surveys run by the CDCs of 32 provinces and 5 specifically-designated cities, Microsoft Excel was used to analyze the current status of education and training management for new employees. There were 156 management staff members working on education and training programs in the 36 CDCs, 70% of whom had received intermediate or higher levels of education. Large differences were seen in the availability of training hardware across regions. In 2014 there were 1 214 teaching staff, 66% of them in public health or related professional fields. From 2012 to 2014, 5 084 new employees took part in pre- and post-employment training programs, with total funding of 750 thousand RMB. 99.5% of the new employees expressed a need for further training, 74% expected a training program of 2-5 days, and 79% considered practice the most appropriate training method. Institutional arrangements for education and training at the CDCs need to be clarified, with management teams organized. It is important to provide more financial support for the hardware, software, and human resources of the training programs set up for new staff members at all levels of CDCs.

  19. Olson Order of Quantum Observables

    NASA Astrophysics Data System (ADS)

    Dvurečenskij, Anatolij

    2016-11-01

    M.P. Olson [Proc. Am. Math. Soc. 28, 537-544 (1971)] showed that the system of effect operators on a Hilbert space can be ordered by the so-called spectral order in such a way that it becomes a complete lattice. Using his ideas, we introduce a partial order, called the Olson order, on the set of bounded observables of a complete lattice effect algebra. We show that the set of bounded observables is a Dedekind complete lattice.
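
    For context, a standard formulation of Olson's spectral order (paraphrased from the general literature on the result, not quoted from this paper) compares operators through their spectral resolutions:

```latex
% Spectral order on bounded self-adjoint operators A, B with spectral
% resolutions \{E^{A}_{\lambda}\}, \{E^{B}_{\lambda}\}:
A \preceq_{s} B
  \quad\Longleftrightarrow\quad
  E^{A}_{\lambda} \;\ge\; E^{B}_{\lambda}
  \quad \text{for all } \lambda \in \mathbb{R}.
```

    Under this order the effect operators form a complete lattice, which is the property the Olson order transfers to bounded observables of a complete lattice effect algebra.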

  20. An overview of the NASA Langley Atmospheric Data Center: Online tools to effectively disseminate Earth science data products

    NASA Astrophysics Data System (ADS)

    Parker, L.; Dye, R. A.; Perez, J.; Rinsland, P.

    2012-12-01

    Over the past decade the Atmospheric Science Data Center (ASDC) at NASA Langley Research Center has archived and distributed a variety of satellite mission and aircraft campaign data sets. These data sets posed unique challenges to the user community at large because of their sheer volume and variety and the lack of intuitive features in the order tools available to the investigator. Some of these data sets also lack sufficient metadata to support rudimentary data discovery. To meet the needs of emerging users, the ASDC addressed issues in data discovery and delivery through the use of standards in data and access methods, and through distribution via appropriate portals. The ASDC is currently refreshing its webpages and ordering tools, leveraging updated collection-level metadata to enhance the user experience, and now provides search and subset capability for key mission satellite data sets. The ASDC has collaborated with science teams to accommodate prospective science users in the climate and modeling communities, and it uses a common framework that enables more rapid development and deployment of search and subset tools with enhanced access features. The search-and-subset web application supports a more sophisticated approach to selecting and ordering data subsets by parameter, date, time, and geographic area. The ASDC has also applied key practices from satellite missions to the multi-campaign aircraft missions executed for Earth Venture-1 and MEaSUREs.
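
    The kind of subsetting described, selecting granules by date range and geographic bounding box, can be sketched in a few lines. The records and field names below are hypothetical illustrations, not the ASDC's actual metadata schema or API.

```python
from datetime import date

# Hypothetical granule records; real archives store richer metadata.
granules = [
    {"id": "g1", "date": date(2006, 7, 1), "lat": 36.1, "lon": -76.3},
    {"id": "g2", "date": date(2006, 7, 2), "lat": 10.0, "lon": 120.0},
    {"id": "g3", "date": date(2009, 1, 5), "lat": 37.0, "lon": -75.5},
]

def subset(granules, start, end, lat_min, lat_max, lon_min, lon_max):
    """Keep granules inside the date range and the geographic box."""
    return [
        g for g in granules
        if start <= g["date"] <= end
        and lat_min <= g["lat"] <= lat_max
        and lon_min <= g["lon"] <= lon_max
    ]

hits = subset(granules, date(2006, 1, 1), date(2006, 12, 31),
              30, 40, -80, -70)
print([g["id"] for g in hits])  # only g1 matches both window and box
```
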

  1. Achieving Work-Life Balance in the National Collegiate Athletic Association Division I Setting, Part II: Perspectives From Head Athletic Trainers

    PubMed Central

    Goodman, Ashley; Mazerolle, Stephanie M.; Pitney, William A.

    2015-01-01

    Context: Work-life balance has been examined at the collegiate level from multiple perspectives except for the athletic trainer (AT) serving in a managerial or leadership role. Objective: To investigate challenges and strategies used in achieving work-life balance from the perspective of the head AT at a National Collegiate Athletic Association Division I university. Design: Qualitative study. Setting: Web-based management system. Patients or Other Participants: A total of 18 head ATs (13 men, 5 women; age = 44 ± 8 years, athletic training experience = 22 ± 7 years) volunteered. Data Collection and Analysis: Participants journaled their thoughts and experiences in response to a series of questions. To establish data credibility, we included multiple-analyst triangulation, stakeholder checks, and peer review. We used a general inductive approach to analyze the data. Results: Two higher-order themes emerged from our analysis of the data: organizational challenges and work-life balance strategies. The organizational challenges theme contained 2 lower-order themes: lack of autonomy and role demands. The work-life balance strategies theme contained 3 lower-order themes: prioritization of commitments, strategic boundary setting, and work-family integration. Conclusions: Head ATs are susceptible to experiencing work-life imbalance just as ATs in nonsupervisory roles are. Although not avoidable, the causes are manageable. Head ATs are encouraged to prioritize their personal time, make efforts to spend time away from their demanding positions, and reduce the number of additional responsibilities that can impede time available to spend away from work. PMID:25098746

  2. Interior and exterior fuselage noise measured on NASA's C-8a augmentor wing jet-STOL research aircraft

    NASA Technical Reports Server (NTRS)

    Shovlin, M. D.

    1977-01-01

    Interior and exterior fuselage noise levels were measured on NASA's C-8A Augmentor Wing Jet-STOL Research Aircraft in order to provide design information for the Quiet Short-Haul Research Aircraft (QSRA), which will use a modified C-8A fuselage. The noise field was mapped by 11 microphones located internally and externally in three areas: mid-fuselage, aft fuselage, and on the flight deck. Noise levels were recorded at four power settings varying from takeoff to flight idle and were plotted in one-third octave band spectra. The overall sound pressure levels of the external noise field were compared to previous tests and found to correlate well with engine primary thrust levels. Fuselage values were 145 ± 3 dB over the aircraft's normal STOL operating range.

  3. Estimations of expectedness and potential surprise in possibility theory

    NASA Technical Reports Server (NTRS)

    Prade, Henri; Yager, Ronald R.

    1992-01-01

    This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative possibility levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
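    The two standard set-functions mentioned above have simple closed forms over a finite set of alternatives. A minimal sketch (the distribution `pi` is a made-up example, normalized so its maximum equals 1):

```python
# Possibility and necessity measures over a finite set of mutually
# exclusive alternatives; `pi` maps each alternative to its possibility
# degree in [0, 1], with max(pi.values()) == 1 by convention.

def possibility(pi, event):
    """Pi(A) = max of pi over the alternatives in A."""
    return max(pi[x] for x in event)

def necessity(pi, event):
    """N(A) = 1 - Pi(complement of A): A is certain to the extent that
    every alternative outside A is impossible."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[x] for x in complement) if complement else 0.0)

pi = {"rain": 1.0, "snow": 0.4, "sun": 0.2}  # hypothetical rank-ordering
print(possibility(pi, {"snow", "sun"}))  # 0.4
print(necessity(pi, {"rain", "snow"}))
```

    Note that necessity is always bounded above by possibility, matching the intuition that 'I knew it' is a stronger statement than 'I would not be surprised'.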

  4. The Pariser-Parr-Pople model for trans-polyenes. I. Ab initio and semiempirical study of the bond alternation in trans-butadiene

    NASA Astrophysics Data System (ADS)

    Förner, Wolfgang

    1992-03-01

    Ab initio investigations of the bond alternation in butadiene are presented. The atomic basis sets applied range from minimal to split valence plus polarization quality. With the latter, the Hartree-Fock limit for the bond alternation is reached. Correlation is considered at the second-order Møller-Plesset many-body perturbation theory (MP2), linear coupled cluster doubles (L-CCD), and coupled cluster doubles (CCD) levels. For the smaller basis sets it is shown that for the bond alternation π-π correlations are essential, while the effects of σ-σ and σ-π correlations are, though large, nearly independent of bond alternation. At the MP2 level the variation of σ-π correlation with bond alternation is surprisingly large. This is discussed as an artefact of MP2. Comparative Su-Schrieffer-Heeger (SSH) and Pariser-Parr-Pople (PPP) calculations show that these models in their usual parametrizations cannot reproduce the ab initio results.

  5. Calculations for energies, transition rates, and lifetimes in Al-like Kr XXIV

    NASA Astrophysics Data System (ADS)

    Zhang, C. Y.; Si, R.; Liu, Y. W.; Yao, K.; Wang, K.; Guo, X. L.; Li, S.; Chen, C. Y.

    2018-05-01

    Using the second-order many-body perturbation theory (MBPT) method, a complete and accurate data set of excitation energies, lifetimes, wavelengths, and electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2), and magnetic quadrupole (M2) line strengths, transition rates, and oscillator strengths for the lowest 880 levels arising from the 3l³ (0 ≤ l ≤ 2), 3l²4l′ (0 ≤ l ≤ 2, 0 ≤ l′ ≤ 3), 3s²5l (0 ≤ l ≤ 4), 3p²5l (0 ≤ l ≤ 1), and 3s3p5l (0 ≤ l ≤ 4) configurations in Al-like Kr XXIV is provided. Comparisons are made with available experimental and theoretical results. Our calculated energies are expected to be accurate enough to facilitate identifications of observed lines involving the n = 4, 5 levels. The complete data set is also useful for modeling and diagnosing fusion plasmas.

  6. Simulation of void formation in interconnect lines

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Alireza; Heitzinger, Clemens; Puchner, Helmut; Badrieh, Fuad; Selberherr, Siegfried

    2003-04-01

    The predictive simulation of the formation of voids in interconnect lines is important for improving capacitance and timing in current memory cells. The cells considered are used in wireless applications such as cell phones, pagers, radios, handheld games, and GPS systems. In backend processes for memory cells, ILD (interlayer dielectric) materials and processes result in void formation during gap fill. This approach lowers the overall k-value of a given metal layer and is economically advantageous. The effect of the voids on the overall capacitive load is tremendous. In order to simulate the shape and positions of the voids and thus the overall capacitance, the topography simulator ELSA (Enhanced Level Set Applications) has been developed which consists of three modules, a level set module, a radiosity module, and a surface reaction module. The deposition process considered is deposition of silicon nitride. Test structures of interconnect lines of memory cells were fabricated and several SEM images thereof were used to validate the corresponding simulations.

  7. A segmentation and classification scheme for single tooth in MicroCT images based on 3D level set and k-means++.

    PubMed

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2017-04-01

    Accurate classification of different anatomical structures of teeth from medical images provides crucial information for the stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing 3 dimensional (3D) information, and classify the tooth by employing unsupervised learning, i.e., the k-means++ method. In order to evaluate the proposed method, the experiments are conducted on extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to three other clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
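    The classification step relies on k-means++, whose distinguishing feature is its seeding rule: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen. A minimal 1-D sketch of that rule (the point set is illustrative, not data from the paper):

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """k-means++ seeding on 1-D points: the first center is chosen
    uniformly, each further center with probability proportional to
    its squared distance to the nearest already-chosen center."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
        else:
            centers.append(points[-1])  # guard against float round-off
    return centers
```

    The spread-out seeds are what make the subsequent Lloyd iterations robust, which is presumably why the authors prefer k-means++ over plain k-means for distinguishing tooth structures.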

  8. Multireference configuration interaction calculations of the first six ionization potentials of the uranium atom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bross, David H.; Parmar, Payal; Peterson, Kirk A.

    The first six ionization potentials (IPs) of the uranium atom have been calculated using multireference configuration interaction (MRCI+Q) with extrapolations to the complete basis set (CBS) limit using new all-electron correlation consistent basis sets. The latter were carried out with the third-order Douglas-Kroll-Hess Hamiltonian. Correlation down through the 5s5p5d electrons has been taken into account, as well as contributions to the IPs due to the Lamb shift. Spin-orbit coupling contributions calculated at the 4-component Kramers restricted configuration interaction level, as well as the Gaunt term computed at the Dirac-Hartree-Fock level, were added to the best scalar relativistic results. As a result, the final ionization potentials are expected to be accurate to at least 5 kcal/mol (0.2 eV), and thus more reliable than the current experimental values of IP3 through IP6.

  9. Sequence stratigraphy of the siliciclastic East Puolanka Group, the Palaeoproterozoic Kainuu Belt, Finland

    NASA Astrophysics Data System (ADS)

    Strand, Kari

    2005-04-01

    The 2300-2600 m thick Palaeoproterozoic East Puolanka Group within the central Fennoscandian Shield records four major transgressions on the cratonic margin within the approximate time period 2.25-2.10 Ga. Stacking of siliciclastic facies in parasequences and parasequence sets provides data to evaluate oscillation of relative sea-level and subsidence on different temporal scales. The lowermost part of the passive margin prism is characterized by alluvial plain to shallow marine sediments deposited in incised valleys. The succeeding highstand period is recorded by ca. 250 m of progradational parasequence sets of predominantly rippled and horizontally laminated sandstones, representing stacked wave-dominated shoreline units in sequence 1, capped by a hiatus or, in some places, by a subaerial lava. As relative sea-level rose again, sand-rich barrier-beach complexes developed with microtidal lagoons and inlets, corresponding to a retrogradational parasequence set. This was followed by a highstand period, with aggradation and progradation of alluvial plain and coastal sediments grading up into wave-tide influenced shoreline deposits in sequence 2. In sequence 3, the succeeding mudstones represent tidal flat deposits in a back-barrier region. With continued transgression, the parasequences stacked retrogradationally, each flooding episode being recorded by increasingly deeper water deposits above low-angle cross-bedded sandstones of the swash zones. The succeeding highstand progradation is represented by alluvial plain deposits. The next transgressive systems tract, overlying an inferred erosional ravinement surface, is recorded by a retrogradational parasequence set dominated by low-angle cross-stratified swash zone deposits in sequence 4. The large-scale trough cross-bed sets in these parasequences represent sand shoals and sheets of the inner shelf system. 
    The overall major transgression recorded in the lowermost part of the Palaeoproterozoic cratonic margin succession was related to first- to second-order sea-level changes, probably due to increasing regional thermal subsidence of the lithosphere following partial continental breakup. The stratigraphic evolution can be related to changes of relative sea-level with a frequency of ca. 25 million years, probably propagated by episodic thermal subsidence. The parasequences identified here are related to high-frequency cycles of relative sea-level change due to low-magnitude eustatic oscillations.

  10. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. 
    It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
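    The Dice similarity coefficient used for validation against the Cell Tracking Challenge masks is straightforward to compute on binary masks; a small sketch (the toy masks are illustrative):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flattened binary masks:
    2|A ∩ B| / (|A| + |B|), 1.0 for identical masks, 0.0 for disjoint."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

segmentation = [1, 1, 1, 0, 0]
reference    = [0, 1, 1, 1, 0]
print(dice(segmentation, reference))  # 2*2/(3+3) ≈ 0.667
```

    A mean Dice of 0.89, as reported above, thus means the delineated and reference cell regions overlap in roughly nine-tenths of their combined area.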

  11. An AES chip with DPA resistance using hardware-based random order execution

    NASA Astrophysics Data System (ADS)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attack through hardware-based random order execution. Both decryption and encryption procedures of an AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plaintexts (with 128-bit secret keys) is 19 nJ. The core area is 0.43 mm2. A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, hardware-based random order execution improves DPA resistance by at least 21 times.
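    The countermeasure rests on the byte-level independence of AES round operations: because the 16 state bytes can be substituted in any order, a fresh random order each run decorrelates power traces from byte position. A rough software analogue of that idea (the function and the identity-style S-box below are illustrative, not the chip's HMF circuit):

```python
import random

def random_order_subbytes(state, sbox, rng):
    """Apply an S-box to all 16 independent state bytes in a freshly
    randomized order, so an attacker cannot align power traces by
    byte position across encryptions."""
    out = list(state)
    order = list(range(len(state)))
    rng.shuffle(order)            # new execution order every run
    for i in order:
        out[i] = sbox[state[i]]   # byte operations are independent
    return out
```

    The final state is identical to fixed-order execution, which is why the randomization costs correctness nothing; only the time at which each byte leaks changes.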

  12. Development of a three-dimensional high-order strand-grids approach

    NASA Astrophysics Data System (ADS)

    Tong, Oisin

    Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature-based strand shortening strategy in order to qualitatively improve strand grid mesh quality.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
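    The final step above evolves the order parameters by Langevin dynamics driven by thermal-average forces and diffusivities. A minimal overdamped-update sketch (the function names, unit thermal energy, and the fluctuation-dissipation noise form are assumptions for illustration, not the patent's implementation):

```python
import random

def langevin_step(phi, force, diffusivity, dt, rng, kT=1.0):
    """One overdamped Langevin update of a scalar order parameter:
    drift from the thermal-average force plus diffusive noise whose
    variance 2*D*dt satisfies fluctuation-dissipation."""
    drift = diffusivity * force(phi) / kT * dt
    noise = rng.gauss(0.0, (2.0 * diffusivity * dt) ** 0.5)
    return phi + drift + noise

# Hypothetical usage: relax an order parameter in a quadratic well.
rng = random.Random(0)
phi = 2.0
for _ in range(100):
    phi = langevin_step(phi, lambda p: -p, diffusivity=0.1, dt=0.01, rng=rng)
```

    In the multiscale scheme described, `force` and `diffusivity` would themselves be re-estimated from the simulated atomistic ensembles between updates rather than fixed in advance.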

  14. Relativistic Zeroth-Order Regular Approximation Combined with Nonhybrid and Hybrid Density Functional Theory: Performance for NMR Indirect Nuclear Spin-Spin Coupling in Heavy Metal Compounds.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-01-12

    A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.

  15. Earth Observing System Data Gateway

    NASA Technical Reports Server (NTRS)

    Pfister, Robin; McMahon, Joe; Amrhein, James; Sefert, Ed; Marsans, Lorena; Solomon, Mark; Nestler, Mark

    2006-01-01

    The Earth Observing System Data Gateway (EDG) software provides a "one-stop-shopping" standard interface for exploring and ordering Earth-science data stored at geographically distributed sites. EDG enables a user to do the following: 1) Search for data according to high-level criteria (e.g., geographic location, time, or satellite that acquired the data); 2) Browse the results of a search, viewing thumbnail sketches of data that satisfy the user's criteria; and 3) Order selected data for delivery to a specified address on a chosen medium (e.g., compact disk or magnetic tape). EDG consists of (1) a component that implements a high-level client/server protocol, and (2) a collection of C-language libraries that implement the passing of protocol messages between an EDG client and one or more EDG servers. EDG servers are located at sites usually called "Distributed Active Archive Centers" (DAACs). Each DAAC may allow access to many individual data items, called "granules" (e.g., single Landsat images). Related granules are grouped into collections called "data sets." EDG enables a user to send a search query to multiple DAACs simultaneously, inspect the resulting information, select browseable granules, and then order selected data from the different sites in a seamless fashion.

  16. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 × 512 × 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics, which relate the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data is typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
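    Optimal nonlinear binning of a 1-D intensity range can indeed be posed as dynamic programming. A sketch that partitions sorted values into k contiguous bins minimizing total within-bin squared error (a plausible stand-in objective; the paper's exact criterion may differ):

```python
def optimal_bins(values, k):
    """Partition sorted values into k contiguous bins minimizing total
    within-bin squared error, via O(k*n^2) dynamic programming.
    Returns (total error, bin-end indices into the sorted array)."""
    v = sorted(values)
    n = len(v)
    pre, pre2 = [0.0], [0.0]          # prefix sums of v and v^2
    for x in v:
        pre.append(pre[-1] + x)
        pre2.append(pre2[-1] + x * x)

    def sse(i, j):  # squared error of the bin v[i:j]
        s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for b in range(1, min(j, k) + 1):
            for i in range(b - 1, j):  # last bin is v[i:j]
                c = cost[i][b - 1] + sse(i, j)
                if c < cost[j][b]:
                    cost[j][b], cut[j][b] = c, i
    bounds, j = [], n                  # recover bin boundaries
    for b in range(k, 0, -1):
        bounds.append(j)
        j = cut[j][b]
    return cost[n][k], sorted(bounds)
```

    For CT data one would run this over the histogram support [-1024, 1024] (weighting each intensity by its count) rather than over raw voxel lists, which keeps the table sizes small regardless of volume dimensions.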

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Bienvenido; Novo, Vicente

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Fréchet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given.

  18. Total Top-Quark Pair-Production Cross Section at Hadron Colliders Through O(αS⁴)

    NASA Astrophysics Data System (ADS)

    Czakon, Michał; Fiedler, Paul; Mitov, Alexander

    2013-06-01

    We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg → tt̄ + X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.

  19. Total top-quark pair-production cross section at hadron colliders through O(αS⁴).

    PubMed

    Czakon, Michał; Fiedler, Paul; Mitov, Alexander

    2013-06-21

    We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg → tt + X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.

  20. Empirical studies of design software: Implications for software engineering environments

    NASA Technical Reports Server (NTRS)

    Krasner, Herb

    1988-01-01

    The empirical studies team of MCC's Design Process Group conducted three studies in 1986-87 in order to gather data on professionals designing software systems in a range of situations. The first study (the Lift Experiment) used thinking aloud protocols in a controlled laboratory setting to study the cognitive processes of individual designers. The second study (the Object Server Project) involved the observation, videotaping, and data collection of a design team of a medium-sized development project over several months in order to study team dynamics. The third study (the Field Study) involved interviews with the personnel from 19 large development projects in the MCC shareholders in order to study how the process of design is affected by organizational and project behavior. The focus of this report will be on key observations of design process (at several levels) and their implications for the design of environments.

  1. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

    We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically-homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field classifier (CRF) in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher confidence data term. We demonstrate the improvement provided by our method over two publicly-available large-scale data sets.
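    The aggregation of noisy point-wise predictions into a per-segment data term can be illustrated in its simplest majority-vote form (a deliberate simplification of the CRF data term described above; the labels and segment ids are illustrative):

```python
from collections import Counter

def aggregate_by_segment(point_labels, segment_ids):
    """Replace each point's noisy classifier label with the majority
    label of its geometric segment, pooling per-point evidence into a
    higher-confidence per-segment decision."""
    votes = {}
    for lab, seg in zip(point_labels, segment_ids):
        votes.setdefault(seg, Counter())[lab] += 1
    majority = {seg: c.most_common(1)[0][0] for seg, c in votes.items()}
    return [majority[seg] for seg in segment_ids]

noisy = ["road", "road", "car", "road", "car", "car"]
segs  = [0, 0, 0, 1, 1, 1]
print(aggregate_by_segment(noisy, segs))
```

    In the full model the vote counts would instead shape the CRF unary potentials, so that segment-level consensus and pairwise scene structure are optimized jointly rather than decided greedily.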

  2. Summary of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. J.

    1984-01-01

    A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.

  3. Desiderata for Healthcare Integrated Data Repositories Based on Architectural Comparison of Three Public Repositories

    PubMed Central

    Huser, Vojtech; Cimino, James J.

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366

  4. Desiderata for healthcare integrated data repositories based on architectural comparison of three public repositories.

    PubMed

    Huser, Vojtech; Cimino, James J

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network's Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management.

  5. Web-based monitoring tools for Resistive Plate Chambers in the CMS experiment at CERN

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Ban, Y.; Cai, J.; Li, Q.; Liu, S.; Qian, S.; Wang, D.; Xu, Z.; Zhang, F.; Choi, Y.; Kim, D.; Goh, J.; Choi, S.; Hong, B.; Kang, J. W.; Kang, M.; Kwon, J. H.; Lee, K. S.; Lee, S. K.; Park, S. K.; Pant, L. M.; Mohanty, A. K.; Chudasama, R.; Singh, J. B.; Bhatnagar, V.; Mehta, A.; Kumar, R.; Cauwenbergh, S.; Costantini, S.; Cimmino, A.; Crucy, S.; Fagot, A.; Garcia, G.; Ocampo, A.; Poyraz, D.; Salva, S.; Thyssen, F.; Tytgat, M.; Zaganidis, N.; Doninck, W. V.; Cabrera, A.; Chaparro, L.; Gomez, J. P.; Gomez, B.; Sanabria, J. C.; Avila, C.; Ahmad, A.; Muhammad, S.; Shoaib, M.; Hoorani, H.; Awan, I.; Ali, I.; Ahmed, W.; Asghar, M. I.; Shahzad, H.; Sayed, A.; Ibrahim, A.; Aly, S.; Assran, Y.; Radi, A.; Elkafrawy, T.; Sharma, A.; Colafranceschi, S.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Iaselli, G.; Loddo, F.; Maggi, M.; Nuzzo, S.; Pugliese, G.; Radogna, R.; Venditti, R.; Verwilligen, P.; Benussi, L.; Bianco, S.; Piccolo, D.; Paolucci, P.; Buontempo, S.; Cavallo, N.; Merola, M.; Fabozzi, F.; Iorio, O. M.; Braghieri, A.; Montagna, P.; Riccardi, C.; Salvini, P.; Vitulo, P.; Vai, I.; Magnani, A.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Aleksandrov, A.; Genchev, V.; Iaydjiev, P.; Rodozov, M.; Sultanov, G.; Vutova, M.; Stoykova, S.; Hadjiiska, R.; Ibargüen, H. S.; Morales, M. I. P.; Bernardino, S. C.; Bagaturia, I.; Tsamalaidze, Z.; Crotty, I.

    2014-10-01

    The Resistive Plate Chambers (RPC) are used in the CMS experiment at the trigger level and also in the standard offline muon reconstruction. In order to guarantee the quality of the data collected and to monitor online the detector performance, a set of tools has been developed in CMS which is heavily used in the RPC system. The Web-based monitoring (WBM) is a set of java servlets that allows users to check the performance of the hardware during data taking, providing distributions and history plots of all the parameters. The functionalities of the RPC WBM monitoring tools are presented along with studies of the detector performance as a function of growing luminosity and environmental conditions that are tracked over time.

  6. 75 FR 57912 - Boulder Canyon Project-Rate Order No. WAPA-150

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-23

    ...-setting Formula and Approval of FY 2011 Base Charge and Rates. SUMMARY: The Deputy Secretary of Energy... existing Boulder Canyon Project (BCP) rate-setting formula and approving the base charge and rates for FY... financial and load data. The existing rate-setting formula is being extended under Rate Order No. WAPA-150...

  7. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
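
    The feature-extraction step described here (the standard deviation of the first-level wavelet coefficients) can be sketched in plain Python with the simplest member of the Daubechies family, the Haar (db1) wavelet. The intensity trace below is hypothetical illustration data, not taken from the paper:

```python
import math

def haar_dwt_level1(signal):
    """One level of the Haar (db1) wavelet transform.

    Returns (approximation, detail) coefficients; the input length
    is assumed even for simplicity."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def std_feature(coeffs):
    """Standard deviation of a coefficient list (population form)."""
    mean = sum(coeffs) / len(coeffs)
    return math.sqrt(sum((c - mean) ** 2 for c in coeffs) / len(coeffs))

# Hypothetical pixel-intensity trace along a sticker region:
trace = [10.0, 12.0, 11.0, 15.0, 14.0, 13.0, 9.0, 8.0]
cA, cD = haar_dwt_level1(trace)
feature_vector = [std_feature(cA), std_feature(cD)]
```

    For the paper's full set of families (db1-db20, Coif, Sym) a wavelet library such as PyWavelets would replace the hand-rolled transform; the sketch above covers only db1.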

  8. Similarity Rules for Scaling Solar Sail Systems

    NASA Technical Reports Server (NTRS)

    Canfield, Stephen L.; Beard, James W., III; Peddieson, John; Ewing, Anthony; Garbe, Greg

    2004-01-01

    Future science missions will require solar sails on the order of 10,000 sq m (or larger). However, ground and flight demonstrations must be conducted at significantly smaller sizes (400 sq m for the ground demo) due to limitations of ground-based facilities and the cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This report addresses issues of scaling in solar sail systems, focusing on structural characteristics, by developing a set of similarity or similitude functions that will guide the scaling process. The primary goal of these similarity functions (process invariants), which collectively form a set of scaling rules or guidelines, is to establish valid relationships between models and experiments that are performed at different orders of scale. In the near term, such an effort will help guide the size and properties of a flight validation sail that will need to be flown to accurately represent a large, mission-level sail.

  9. SH c realization of minimal model CFT: triality, poset and Burge condition

    NASA Astrophysics Data System (ADS)

    Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.

    2015-11-01

    Recently an orthogonal basis of {{W}}_N -algebra (AFLT basis) labeled by N-tuple Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis elements are summarized in the form of an algebra SH c which is universal for any N. We show that it has an {{S}}_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies there exists a partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH c. Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.
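
    The Rogers-Ramanujan identities mentioned here can be checked numerically: the first identity states that the number of partitions of n into parts that pairwise differ by at least 2 equals the number of partitions of n into parts congruent to 1 or 4 modulo 5. A small brute-force verification:

```python
def partitions_min_diff2(n, max_part=None):
    """Partitions of n into parts that pairwise differ by at least 2
    (pick the largest part first, then recurse with cap largest - 2)."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    total = 0
    for part in range(1, min(n, max_part) + 1):
        total += partitions_min_diff2(n - part, part - 2)
    return total

def partitions_mod5(n):
    """Partitions of n into parts congruent to 1 or 4 (mod 5),
    counted with a standard coin-style dynamic program."""
    ways = [1] + [0] * n
    for p in (q for q in range(1, n + 1) if q % 5 in (1, 4)):
        for v in range(p, n + 1):
            ways[v] += ways[v - p]
    return ways[n]

# First Rogers-Ramanujan identity: the two counts agree for every n.
```

    For n = 9, for example, both sides give 5: {9}, {8,1}, {7,2}, {6,3}, {5,3,1} on one side and {9}, {6,1,1,1}, {4,4,1}, {4,1,1,1,1,1}, {1,...,1} on the other.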

  10. Is the current level of training in the use of equipment for prehospital radio communication sufficient? A cross-sectional study among prehospital physicians in Denmark

    PubMed Central

    2017-01-01

    Background Physicians working in prehospital care are expected to handle radio communication both within their own sector and with other divisions of the National Emergency Services. To date, no study has been conducted on the level of training received by physicians in the use of the equipment provided or on the level of competency acquired by physicians. Methods In order to investigate the self-assessed skill level acquired in the use of the TETRA (TErrestrial Trunked RAdio) authority radio for communication in a prehospital setting, a cross-sectional study was conducted by questionnaire circulated to all 454 physicians working in the Danish Emergency Medical Services. Results A lack of training was found among physicians working in prehospital care in Denmark in relation to the proper use of essential communication equipment. Prior to starting their first shift in a prehospital setting, 38% of physicians reported having received no training in the use of the equipment, while 80% of physicians reported having received one hour of training or less. For the majority of physicians, the current level of training was sufficient for their everyday prehospital communication needs, but for 28% of physicians it was insufficient, as they were unable to handle communication at this level. Conclusion As the first study in its field, this study investigated the training received in the use of essential communication equipment among physicians working in prehospital care in Denmark. The study found that this competency does not appear to have been prioritised as highly as other technical skills needed to function in these settings. For the majority of physicians the current level of training is sufficient for everyday use, but for a substantial minority further training is required, especially if the redundancy of the prehospital system is to be preserved. PMID:28667210

  11. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review.

    PubMed

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme; this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale-setting approaches suggested in the literature. As a step forward, in the present review, we present an in-depth discussion of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC), in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R(e+e-) and Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence, since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed, even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high-precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  12. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    NASA Astrophysics Data System (ADS)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme; this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale-setting approaches suggested in the literature. As a step forward, in the present review, we present an in-depth discussion of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC), in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R(e+e-) and Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence, since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed, even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high-precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  13. The Impact of Non-Academic Involvement on Higher Order Thinking Skills

    ERIC Educational Resources Information Center

    Franklin, Megan Armbruster

    2014-01-01

    Although there is extensive literature on learning that occurs in academic settings on college campuses, data on whether students are engaging in higher order thinking skills in nonacademic settings are less prevalent. This study sought to understand whether students' higher order thinking skills (HOTs) are influenced by their involvement in…

  14. Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2018-05-01

    Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
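
    The Hamming loss reported here treats protocoling as a multi-label problem: each candidate sequence is either included in the protocol or not, and the loss is the fraction of label slots predicted incorrectly. A minimal sketch, with hypothetical label vectors rather than the study's data:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, over all samples.

    y_true / y_pred: lists of equal-length binary label vectors, one per
    MRI order (each slot = one candidate sequence, 1 = include it)."""
    total, wrong = 0, 0
    for t, p in zip(y_true, y_pred):
        total += len(t)
        wrong += sum(1 for a, b in zip(t, p) if a != b)
    return wrong / total

# Hypothetical protocols over 4 candidate sequences (e.g. T1, T2, FLAIR, DWI):
truth = [[1, 1, 0, 1], [0, 1, 1, 0]]
preds = [[1, 1, 0, 0], [0, 1, 1, 0]]
print(hamming_loss(truth, preds))  # 1 wrong slot out of 8 -> 0.125
```

    A lower Hamming loss is better; a perfect multi-label prediction scores 0.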

  15. Nuclear relaxation and vibrational contributions to the static electrical properties of polyatomic molecules: beyond the Hartree-Fock approximation

    NASA Astrophysics Data System (ADS)

    Luis, Josep M.; Martí, Josep; Duran, Miquel; Andrés, José L.

    1997-04-01

    Electronic and nuclear contributions to the static molecular electrical properties, along with the Stark tuning rate (δν_E) and the infrared cross-section changes (δS_E), have been calculated at the SCF level and at different correlated levels of theory, using a TZ2P basis set and finite-field techniques. Nuclear contributions to these molecular properties have also been calculated using a recent analytical approach that allows both checking the accuracy of the finite-field values and evaluating the importance of higher-order derivatives. The HF, CO, H2O, H2CO, and CH4 molecules have been studied and the results compared to experimental data when available. The paper shows that nuclear relaxation and vibrational contributions must be included in order to obtain accurate values of the static electrical properties. Two different, combined approaches are proposed to predict experimental values of the electrical properties with an error smaller than 5%.

  16. Regular and Chaotic Quantum Dynamics of Two-Level Atoms in a Selfconsistent Radiation Field

    NASA Technical Reports Server (NTRS)

    Konkov, L. E.; Prants, S. V.

    1996-01-01

    Dynamics of two-level atoms interacting with their own radiation field in a single-mode high-quality resonator is considered. The dynamical system consists of two second-order differential equations, one for the atomic SU(2) dynamical-group parameter and another for the field strength. With the help of the maximal Lyapunov exponent for this set, we numerically investigate transitions from regularity to deterministic quantum chaos in such a simple model. Increasing the collective coupling constant b ≡ 8πN₀d²/(ħω), we observed for initially unexcited atoms a usual sharp transition to chaos at b_c ≈ 1. If we take the dimensionless individual Rabi frequency a = Ω/(2ω) as a control parameter, then a sequence of order-to-chaos transitions has been observed, starting with the critical value a_c ≈ 0.25 at the same initial conditions.
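
    The chaos diagnostic used in this record, the maximal Lyapunov exponent, can be illustrated on a much simpler system than the atom-field equations themselves. The sketch below estimates it for the one-dimensional logistic map, a stand-in chosen because its exponent at r = 4 is known exactly (ln 2), not the model of the paper:

```python
import math

def max_lyapunov_logistic(r, x0=0.2, burn_in=100, n=100_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# A positive exponent signals chaos; at r = 4 the exact value is ln 2,
# while a periodic regime (e.g. r = 3.2) gives a negative exponent.
lam = max_lyapunov_logistic(4.0)
```

    For the second-order ODE system of the paper one would instead track two nearby trajectories, or integrate the variational equations, with periodic renormalization (the Benettin procedure); the map above is only the cheapest illustration of the same diagnostic.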

  17. Effects of Second-Order Sum- and Difference-Frequency Wave Forces on the Motion Response of a Tension-Leg Platform Considering the Set-down Motion

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Tang, Yougang; Li, Yan; Cai, Runbo

    2018-04-01

    This paper presents a study on the motion response of a tension-leg platform (TLP) under first- and second-order wave forces, including the mean-drift force and the difference- and sum-frequency forces. The second-order wave force is calculated using the full-field quadratic transfer function (QTF). The coupled effect of the horizontal motions, such as the surge, sway and yaw motions, and the set-down motion is taken into consideration by the nonlinear restoring matrix. A time-domain analysis with a 50-yr random sea state is performed. A comparison of the results of different case studies is made to assess the influence of the second-order wave force on the motions of the platform. The analysis shows that the second-order wave force has a major impact on the motions of the TLP. The second-order difference-frequency wave force has an obvious influence on the low-frequency motions of surge and sway, and will also induce a large set-down motion, which is an important part of the heave motion. In addition, the second-order sum-frequency force will induce a set of high-frequency motions of roll and pitch. However, little influence of the second-order wave force is found on the yaw motion.

  18. Geometry-dependent atomic multipole models for the water molecule.

    PubMed

    Loboda, O; Millot, C

    2017-10-28

    Models of atomic electric multipoles for the water molecule have been optimized in order to reproduce the electric potential around the molecule computed by ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set. Different models of increasing complexity, from atomic charges up to models containing atomic charges, dipoles, and quadrupoles, have been obtained. The geometry dependence of these atomic multipole models has been investigated by changing bond lengths and HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For several models, the atomic multipole components have been fitted as a function of the geometry by a Taylor series of fourth order in monomer coordinate displacements.
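
    The geometry dependence described here, multipole components expanded as a fourth-order Taylor series in coordinate displacements, rests on derivatives of a property surface. A one-dimensional sketch of how such Taylor coefficients can be obtained numerically with central finite differences (the test polynomial below is arbitrary, not a fitted multipole surface):

```python
def taylor_coeffs_4th(f, x0, h=1e-2):
    """Taylor coefficients [f, f', f''/2!, f'''/3!, f''''/4!] at x0,
    from central finite differences on a five-point stencil."""
    fm2, fm1, f0, fp1, fp2 = (f(x0 + k * h) for k in (-2, -1, 0, 1, 2))
    d1 = (fm2 - 8 * fm1 + 8 * fp1 - fp2) / (12 * h)
    d2 = (-fm2 + 16 * fm1 - 30 * f0 + 16 * fp1 - fp2) / (12 * h * h)
    d3 = (-fm2 + 2 * fm1 - 2 * fp1 + fp2) / (2 * h ** 3)
    d4 = (fm2 - 4 * fm1 + 6 * f0 - 4 * fp1 + fp2) / h ** 4
    return [f0, d1, d2 / 2.0, d3 / 6.0, d4 / 24.0]

# For a quartic the stencils are exact (up to rounding), so the Taylor
# coefficients of 1 + 2x + 3x^2 + 4x^3 + 5x^4 at 0 come back directly.
coeffs = taylor_coeffs_4th(lambda x: 1 + 2*x + 3*x**2 + 4*x**3 + 5*x**4, 0.0)
```

    In the paper the expansion is multivariate (bond lengths and the HOH angle) and the coefficients are obtained by a least-squares fit over the 75 symmetry-unique geometries rather than by differencing a single coordinate.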

  19. Mechanical system reliability for long life space systems

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1994-01-01

    The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.

  20. Geometry-dependent atomic multipole models for the water molecule

    NASA Astrophysics Data System (ADS)

    Loboda, O.; Millot, C.

    2017-10-01

    Models of atomic electric multipoles for the water molecule have been optimized in order to reproduce the electric potential around the molecule computed by ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set. Different models of increasing complexity, from atomic charges up to models containing atomic charges, dipoles, and quadrupoles, have been obtained. The geometry dependence of these atomic multipole models has been investigated by changing bond lengths and HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For several models, the atomic multipole components have been fitted as a function of the geometry by a Taylor series of fourth order in monomer coordinate displacements.

  1. From Data to Images:. a Shape Based Approach for Fluorescence Tomography

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Prieto, K. E.

    2012-12-01

    Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.
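
    The level set idea used here can be sketched in a few lines: the shape is carried implicitly as the region where a function φ is negative, so the evolving boundary is just the zero contour and topology changes come for free. A minimal static example (a unit circle, not the paper's transport-equation setting):

```python
import math

def circle_phi(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed-distance level set function: negative inside the circle,
    zero on the boundary, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def enclosed_area(phi, lo=-2.0, hi=2.0, n=400):
    """Estimate the area of {phi < 0} by counting midpoints on an n x n grid."""
    h = (hi - lo) / n
    count = 0
    for i in range(n):
        for j in range(n):
            x = lo + (i + 0.5) * h
            y = lo + (j + 0.5) * h
            if phi(x, y) < 0.0:
                count += 1
    return count * h * h

area = enclosed_area(circle_phi)   # approaches pi for the unit circle
```

    In the reconstruction itself the zero contour is not static: φ is evolved by a velocity field derived from the gradient of the least-squares data misfit until the predicted data match the measurements.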

  2. Applications of Multiconductor Transmission Line Theory to the Prediction of Cable Coupling. Volume 7. Digital Computer Programs for the Analysis of Multiconductor Transmission Lines

    DTIC Science & Technology

    1977-07-01

    on an IBM 370/165 computer at The University of Kentucky using the Fortran IV, G level compiler and should be easily implemented on other computers...order as the columns of T. 3.5.3 Subroutines NROOT and EIGEN Subroutines NROOT and EIGEN are a set of subroutines from the IBM Scientific Subroutine...November 1975). [10] System/360 Scientific Subroutine Package, Version III, Fifth Edition (August 1970), IBM Corporation, Technical Publications

  3. Time-critical multirate scheduling using contemporary real-time operating system services

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1983-01-01

    Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
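
    A common way to coordinate synchronized tasks running at different cyclic rates is a cyclic schedule that repeats every hyperperiod, the least common multiple of the task periods. A minimal sketch with hypothetical task names and periods:

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    """LCM of the task periods: the whole schedule repeats at this interval."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def release_times(tasks):
    """Map each task name to its release instants within one hyperperiod.

    tasks: dict of name -> period (same time unit, e.g. milliseconds)."""
    hp = hyperperiod(list(tasks.values()))
    return {name: list(range(0, hp, p)) for name, p in tasks.items()}

# Hypothetical task set: a 25 ms, a 50 ms and a 100 ms cyclic task.
sched = release_times({"nav": 25, "guidance": 50, "display": 100})
```

    The scheduler above the OS then dispatches, at each release instant, the ready tasks in rate (priority) order; the table repeats every 100 ms for this task set.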

  4. Solar potential scaling and the urban road network topology

    NASA Astrophysics Data System (ADS)

    Najem, Sara

    2017-01-01

    We explore the scaling of cities' solar potentials with their number of buildings and reveal a latent dependence between the solar potential and the length of the corresponding city's road network. This scaling is shown to be valid at the grid and block levels and is attributed to a common street length distribution. Additionally, we compute the buildings' solar potential correlation function and length in order to determine the set of critical exponents typifying the urban solar potential universality class.
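
    The scaling claims here are of the power-law form y ∝ x^b, and the exponent can be estimated as the slope of a log-log regression. A minimal sketch on synthetic data (the exponent 1.2 and prefactor 3.0 are arbitrary, not the paper's results):

```python
import math

def loglog_slope(x, y):
    """Least-squares slope of log y versus log x, i.e. the scaling
    exponent b in y ~ a * x**b."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic check: data generated with exponent 1.2 is recovered.
buildings = [10, 100, 1000, 10000]
potential = [3.0 * b ** 1.2 for b in buildings]
```

    Real city data would scatter around the fitted line, and the regression would come with a confidence interval on the exponent.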

  5. A sensitive and quantitative element-tagged immunoassay with ICPMS detection.

    PubMed

    Baranov, Vladimir I; Quinn, Zoë; Bandura, Dmitry R; Tanner, Scott D

    2002-04-01

    We report a set of novel immunoassays in which proteins of interest can be detected using specific element-tagged antibodies. These immunoassays are directly coupled with an inductively coupled plasma mass spectrometer (ICPMS) to quantify the elemental (in this work, metal) component of the reacted tagged antibodies. It is demonstrated that these methods can detect levels of target proteins as low as 0.1-0.5 ng/mL and yield a linear response to protein concentration over 3 orders of magnitude.

  6. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  7. Multi-Level, Multi Time-Scale Fluorescence Intermittency of Photosynthetic LH2 Complexes: A Precursor of Non-Photochemical Quenching?

    PubMed

    Schörner, Mario; Beyer, Sebastian Reinhardt; Southall, June; Cogdell, Richard J; Köhler, Jürgen

    2015-11-05

    The light harvesting complex LH2 is a chromoprotein that is an ideal system for studying protein dynamics via the spectral fluctuations of the emission of its intrinsic chromophores. We have immobilized these complexes in a polymer film and studied the fluctuations of the fluorescence intensity from individual complexes over 9 orders of magnitude in time. Combining time-tagged detection of single photons with a change-point analysis has allowed the unambiguous identification of the various intensity levels due to the huge statistical basis of the data set. We propose that the observed intensity level fluctuations reflect conformational changes of the protein backbone that might be a precursor of the mechanism from which nonphotochemical quenching of higher plants has evolved.
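
    The change-point analysis used to identify discrete intensity levels can be illustrated with its simplest version: a single split point chosen to minimize the within-segment squared deviation. The trace below is synthetic, not measured LH2 data:

```python
def change_point(signal):
    """Single change-point estimate: the split index minimizing the total
    squared deviation of each segment from its own mean."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best_cost = None, float("inf")
    for k in range(1, len(signal)):
        cost = sse(signal[:k]) + sse(signal[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic intensity trace: a bright level that drops at sample 5.
trace = [9.0, 10.0, 9.5, 10.2, 9.8, 3.1, 2.9, 3.0, 3.2, 2.8]
k = change_point(trace)   # k == 5
```

    The full analysis applies this idea recursively with a statistical significance test at each candidate split, so that multiple levels over nine decades in time can be resolved without binning the photon stream.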

  8. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement causes aging and lower reliability in power facilities. This aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetime in the future. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to the historical data. This paper also describes a method to decide the optimum investment plan, which replaces facilities in the order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to keep the reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can keep the reliability by lowering the percentage of replacements caused by fatal failures.

  9. Biomarkers of susceptibility following benzene exposure: influence of genetic polymorphisms on benzene metabolism and health effects.

    PubMed

    Carbonari, Damiano; Chiarella, Pieranna; Mansi, Antonella; Pigini, Daniela; Iavicoli, Sergio; Tranfo, Giovanna

    2016-01-01

    Benzene is a ubiquitous occupational and environmental pollutant. Improved industrial hygiene has brought airborne concentrations close to those of the environmental context (1-1000 µg/m³). Conversely, new limits for benzene levels in urban air were set (5 µg/m³). Biomonitoring of exposure to such low benzene concentrations is performed by measuring specific and sensitive biomarkers such as S-phenylmercapturic acid, trans,trans-muconic acid and urinary benzene; many studies report high variability in the levels of these biomarkers, suggesting the involvement of polymorphic metabolic genes in the individual susceptibility to benzene toxicity. We reviewed the influence of metabolic polymorphisms on the biomarker levels of benzene exposure and effect, in order to understand the real impact of benzene exposure on subjects with increased susceptibility.

  10. What do you think of my picture? Investigating factors of influence in profile images context perception

    NASA Astrophysics Data System (ADS)

    Mazza, F.; Da Silva, M. P.; Le Callet, P.; Heynderickx, I. E. J.

    2015-03-01

    Multimedia quality assessment has been an important research topic during the last decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology': this concept focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable in understanding preferences, it is very hard to define, isolate and measure their effect on quality. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected the perceived fit of a set of images to various content categories. These categories were selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. In this model we used both low-level and high-level features, the latter focusing on complex features related to image content. In order to extract these high-level features, we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high-level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.
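
    The category-fit model described here, logistic regression over image features, can be sketched with plain batch gradient descent. The two binary features and the labels below are hypothetical toy data (e.g. formal dress and indoor scene predicting a "professional profile" fit), not the study's crowdsourced features:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit w, b for P(fit=1 | x) = sigmoid(w.x + b) by batch gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log loss
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gw[j] / len(X) for j, wj in enumerate(w)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: the first feature alone determines the category fit.
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```

    In practice a library implementation with regularization would be used; the sketch only shows the shape of the model linking features to a category-fit probability.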

  11. Characterisation of hydrogeological connections in a lowland karst network using time series analysis of water levels in ephemeral groundwater-fed lakes (turloughs)

    NASA Astrophysics Data System (ADS)

    Gill, L. W.; Naughton, O.; Johnston, P. M.; Basu, B.; Ghosh, B.

    2013-08-01

    This research used continuous water level measurements from five groundwater-fed lakes (or turloughs) in a linked lowland karst network of south Galway in Ireland over a 3-year period in order to elucidate the hydrogeological controls and conduit configurations forming the flooded karstic hydraulic system beneath the ground. The main spring outflow from this network discharges below mean sea level, making it difficult to determine the hydraulic nature of the network using traditional rainfall-spring flow cross-correlation analysis, as has been done in many other studies on karst systems. However, the localised groundwater-surface water interactions (the turloughs) in this flooded lowland karst system can yield information about the nature of the hydraulic connections beneath the ground. Various analytical techniques have been applied to the fluctuating turlough water level time series data in order to determine the nature of the linkage between the turloughs, as well as the hydraulic pipe configurations at key points, in order to improve the conceptual model of the overall karst network. Initially, simple cross correlations between the different turlough water levels were carried out applying different time lags. Frequency analysis of the signals was then carried out using Fast Fourier transform analysis, and then both discrete and continuous wavelet analyses were applied to the data sets to characterise these inherently non-stationary time series of fluctuating water levels. The analysis has indicated which turloughs are on the main line conduit system and which are somewhat off-line, the relative size of the main conduit in the network including evidence of localised constrictions, as well as clearly showing the tidal influence on the water levels in the three lower turloughs at shallow depths ∼8 km from the main spring outfall at the sea.
It has also indicated that the timing of high rainfall events coincident with maximum spring tide levels may promote more consistent, long duration flooding of the turloughs throughout the winter.
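
    The first analysis step, lagged cross-correlation between pairs of water level series, can be sketched in plain Python. The two series below are synthetic, standing in for an upstream level and a turlough that echoes it a few samples later:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_lag(upstream, downstream, max_lag):
    """Lag (in samples) at which the downstream series best tracks the
    upstream one, found by maximising the lagged correlation."""
    scores = {}
    for lag in range(max_lag + 1):
        scores[lag] = pearson(upstream[: len(upstream) - lag], downstream[lag:])
    return max(scores, key=scores.get), scores

# Synthetic stand-in: the downstream level repeats the upstream one 3 steps later.
up = [0, 1, 4, 9, 4, 1, 0, 1, 4, 9, 4, 1, 0, 2, 5, 8, 5, 2, 1, 0]
down = [0, 0, 0] + up[:-3]
lag, scores = best_lag(up, down, max_lag=6)   # lag == 3
```

    The best lag indicates the travel time between connected water bodies; the Fourier and wavelet steps in the study then handle the non-stationary behaviour that a single global correlation cannot capture.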

  12. Determination of methyl mercury in dental-unit wastewater.

    PubMed

    Stone, Mark E; Cohen, Mark E; Liang, Lian; Pang, Patrick

    2003-11-01

    The objective of this investigation was to establish whether monomethyl mercury (MMHg) is present in dental-unit wastewater and, if present, to determine the concentration relative to total mercury. Wastewater samples were collected over an 18-month period from three locations: at the dental chair, at a 30-chair clinic, and at a 107-chair clinic. Total mercury determinations were completed using United States Environmental Protection Agency (USEPA) method 1631. MMHg was measured utilizing modified USEPA method 1630. The total mercury levels were found to be: 45182.11 microg/l (n=13, SD=68562.42) for the chair-side samples, 5350.74 microg/l (n=12, SD=2672.94) for samples at the 30-chair clinic, and 13439.13 microg/l (n=13, SD=9898.91) for samples at the 107-chair clinic. Monomethyl Hg levels averaged 0.90 microg/l (n=13, SD=0.87) for chair-side samples, 8.26 microg/l (n=12, SD=7.74) for the 30-chair facility, and 26.77 microg/l (n=13, SD=34.50) for the 107-chair facility. By way of comparison, the MMHg levels for the open ocean, lakes and rain are orders of magnitude lower than the methyl mercury levels seen in dental wastewater (part-per-billion levels for dental wastewater samples compared to part-per-trillion levels for samples from the environment). Environmentally important levels of MMHg were found to be present in dental-unit wastewater at concentrations orders of magnitude higher than those seen in natural settings.

  13. Sea-Level Allowances along the World Coastlines

    NASA Astrophysics Data System (ADS)

    Vandewal, R.; Tsitsikas, C.; Reerink, T.; Slangen, A.; de Winter, R.; Muis, S.; Hunter, J. R.

    2017-12-01

    Sea level changes as a result of climate change. For projections we take ocean mass changes and volume changes into account; including gravitational and rotational fingerprints, these provide regional sea level changes. Hence we can calculate sea-level rise patterns based on CMIP5 projections. To take into account the variability around the mean state that follows from the climate models, we use the concept of allowances. The allowance indicates the height by which a coastal structure needs to be raised to maintain the likelihood of sea-level extremes. Here we use a global reanalysis of storm surges and extreme sea levels based on a global hydrodynamic model to calculate allowances. It is shown that the model compares favourably in most regions with tide gauge records from the GESLA data set. Combining the CMIP5 projections and the global hydrodynamic model, we calculate sea-level allowances along the global coastlines and expand the number of points by a factor of 50 relative to tide-gauge-based results. Results show that allowances increase gradually along continental margins, with the largest values near the equator. In general, values are lower at midlatitudes in both the Northern and Southern Hemispheres. The increase in the frequency of extremes is typically a factor of 10³-10⁴ for the majority of the coastline under the RCP8.5 scenario at the end of the century. Finally we show preliminary results of the effect of changing wave heights based on the coordinated ocean wave project.
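    The allowance concept can be made concrete with the widely used Gumbel-based formula of Hunter, a = Δz + σ²/(2λ); whether this exact formulation is the one used in the study is not stated in the abstract, so the sketch below and its numbers are purely illustrative:

```python
def allowance(mean_rise_m, sigma_m, gumbel_scale_m):
    """Hunter-type sea-level allowance.

    Assuming extreme sea levels follow a Gumbel distribution with scale
    parameter `gumbel_scale_m` (lambda), and the projected rise is normally
    distributed with mean `mean_rise_m` and standard deviation `sigma_m`,
    the height increase that keeps the expected frequency of exceedances
    unchanged is  a = mean + sigma**2 / (2 * lambda).
    """
    return mean_rise_m + sigma_m**2 / (2.0 * gumbel_scale_m)

# Illustrative numbers (not from the study): 0.5 m mean projected rise,
# 0.2 m uncertainty, 0.1 m Gumbel scale of the local extremes.
a = allowance(0.5, 0.2, 0.1)
print(round(a, 3))  # 0.7
```

Note that the allowance exceeds the mean rise: the uncertainty term σ²/(2λ) is what distinguishes an allowance from simply adding the central projection.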

  14. Recognition of Risk Information - Adaptation of J. Bertin's Orderable Matrix for social communication

    NASA Astrophysics Data System (ADS)

    Ishida, Keiichi

    2018-05-01

    This paper aims to show the capability of Jacques Bertin's Orderable Matrix, a visualization method for analyzing and recognizing data. The matrix displays data by replacing numbers with visual elements. As an example, using a set of data on natural hazard rankings for major metropolitan cities of the world, this paper describes how the Orderable Matrix handles the data set and reveals its characteristic factors. Beyond providing a risk ranking of cities, the Orderable Matrix shows how the cities differ from one another in the dangers they face. Furthermore, data visualized by the Orderable Matrix allows us to see the characteristics of the data set comprehensively and at a glance.
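    A crude reordering of a Bertin-style matrix can be sketched as follows; the cities, hazard types, scores, and the sort-by-totals heuristic are all hypothetical stand-ins for the paper's data and for Bertin's manual, iterative reordering:

```python
import numpy as np

# Hypothetical city-by-hazard scores (rows: cities, columns: hazard types).
cities = ["A", "B", "C", "D"]
hazards = ["flood", "quake", "storm"]
scores = np.array([
    [1, 9, 2],
    [8, 2, 7],
    [2, 7, 1],
    [9, 1, 8],
])

# Crude Bertin-style reordering: sort rows and columns by their totals so
# high-risk cities and dominant hazards end up adjacent, making block
# structure visible when the matrix is drawn as a shaded grid.
row_order = np.argsort(scores.sum(axis=1))[::-1]
col_order = np.argsort(scores.sum(axis=0))[::-1]
reordered = scores[row_order][:, col_order]

print([cities[i] for i in row_order])    # ['D', 'B', 'A', 'C']
print([hazards[j] for j in col_order])   # ['flood', 'quake', 'storm']
```

In practice the reordering would feed a graphic (e.g. a heatmap) in which each number becomes a visual element, which is the point of Bertin's method.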

  15. Decreased rates of hypoglycemia following implementation of a comprehensive computerized insulin order set and titration algorithm in the inpatient setting.

    PubMed

    Sinha Gregory, Naina; Seley, Jane Jeffrie; Gerber, Linda M; Tang, Chin; Brillon, David

    2016-12-01

    More than one-third of hospitalized patients have hyperglycemia. Despite evidence that improving glycemic control leads to better outcomes, achieving recognized targets remains a challenge. The objective of this study was to evaluate the implementation of a computerized insulin order set and titration algorithm on rates of hypoglycemia and overall inpatient glycemic control. This was a prospective observational study evaluating the impact of a glycemic order set and titration algorithm on non-critical care medical and surgical inpatients at an academic medical center. The initial intervention was hospital-wide implementation of a comprehensive insulin order set. The secondary intervention was initiation of an insulin titration algorithm in two pilot medicine inpatient units. Point-of-care blood glucose testing reports were analyzed. These reports included rates of hypoglycemia (BG < 70 mg/dL) and hyperglycemia (BG > 200 mg/dL in phase 1, BG > 180 mg/dL in phase 2). In the first phase of the study, implementation of the insulin order set was associated with decreased rates of hypoglycemia (1.92% vs 1.61%; p < 0.001) and increased rates of hyperglycemia (24.02% vs 27.27%; p < 0.001) from 2010 to 2011. In the second phase, addition of a titration algorithm was associated with decreased rates of hypoglycemia (2.57% vs 1.82%; p = 0.039) and increased rates of hyperglycemia (31.76% vs 41.33%; p < 0.001) from 2012 to 2013. A comprehensive computerized insulin order set and titration algorithm significantly decreased rates of hypoglycemia. This significant reduction in hypoglycemia was associated with increased rates of hyperglycemia. Hardwiring the algorithm into the electronic medical record may foster adoption.

  16. [Dynamic changes of inflammation-related indices in venous thromboembolism and the association between these indices and venous thromboembolism].

    PubMed

    Liu, Fang-fang; Zhai, Zhen-guo; Yang, Yuan-hua; Wang, Jun; Wang, Chen

    2013-06-25

    To evaluate the dynamic changes of inflammation-related indices in blood during the development of venous thromboembolism (VTE) and the association between these indices and VTE, a total of 95 hospitalized VTE patients (41 males, 54 females) were recruited from the Department of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, from January 2010 to December 2010. Inflammation-related indices including white blood cell count (WBC), neutrophil (NE), fibrinogen (FBG), C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) were compared between VTE patients and normal ranges, and the dynamic changes of these indices during the development of VTE were evaluated. The patients were then divided into subgroups according to disease stage, gender, age, VTE type, body mass index, smoking status and clinical manifestations, and statistical analyses were performed to elucidate the associations between these indices and VTE. The levels of NE and CRP in VTE patients (0.72, 15.0 mg/L) and ESR in male VTE patients (20.0 mm/1 h) were elevated compared with normal ranges, while WBC (male 7.27×10(9)/L, female 8.67×10(9)/L), FBG (male 3621 mg/L, female 3201 mg/L) and female ESR (19.5 mm/1 h) were within the normal ranges. The level of CRP was higher in acute (mean rank order value: 49.72) and sub-acute (mean rank order value: 44.80) VTE patients than in chronic VTE patients (mean rank order value: 30.25).
The levels of FBG, CRP and ESR in patients ≥ 50 years old were increased compared with those < 50 years old (mean rank order values 48.83 vs 34.53, 44.32 vs 28.90 and 45.95 vs 27.84, respectively); patients with body mass index (BMI) < 25 kg/m(2) had a higher WBC level than those with BMI ≥ 25 kg/m(2) (mean rank order values 52.96 vs 36.46); smoking VTE patients had higher FBG and CRP levels than non-smoking VTE patients (mean rank order values 57.75 vs 42.69 and 53.92 vs 37.75, respectively); and compared with those without clinical manifestations of peripheral pulmonary artery involvement, patients with such manifestations had higher levels of FBG, CRP and ESR (mean rank order values 59.24 vs 37.39, 52.68 vs 33.19 and 50.08 vs 36.55, respectively). All of the above differences were statistically significant (P < 0.05). Some inflammation-related indices frequently used in clinical settings become elevated in VTE patients. Several of these indices show higher levels in the acute and sub-acute stages of VTE, and in older, non-obese, smoking patients and those with peripheral pulmonary artery involvement.

  17. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band

    PubMed Central

    Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that require rehabilitation exercises to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of decomposition level for wavelet denoising of EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects, both male and female (26 ± 3.0 years, BMI 22 ± 2.0), were selected as a reference for persons with the illness. The experiment was conducted for three resistance band loads, namely 5 kg, 9 kg, and 16 kg, as the force during biceps brachii muscle contraction. Each subject was required to perform three arm angle positions (30°, 90°, and 150°) for each resistance band load. The experimental results showed that Daubechies5 (db5) was the most appropriate DWT mother wavelet, together with a 6-level decomposition and a soft heursure threshold, for biceps brachii EMG signal analysis. PMID:29138687

  18. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band.

    PubMed

    Burhan, Nuradebah; Kasno, Mohammad 'Afif; Ghazali, Rozaimi; Said, Md Radzai; Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that require rehabilitation exercises to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of decomposition level for wavelet denoising of EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects, both male and female (26 ± 3.0 years, BMI 22 ± 2.0), were selected as a reference for persons with the illness. The experiment was conducted for three resistance band loads, namely 5 kg, 9 kg, and 16 kg, as the force during biceps brachii muscle contraction. Each subject was required to perform three arm angle positions (30°, 90°, and 150°) for each resistance band load. The experimental results showed that Daubechies5 (db5) was the most appropriate DWT mother wavelet, together with a 6-level decomposition and a soft heursure threshold, for biceps brachii EMG signal analysis.
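    The decompose-threshold-reconstruct pipeline behind wavelet denoising can be sketched in miniature. For self-containment this uses a one-level Haar wavelet and a fixed soft threshold rather than the paper's db5, 6-level, heursure configuration; the data are invented:

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficient lists."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def soft_threshold(coeffs, thresh):
    """Soft thresholding: shrink each coefficient toward zero by `thresh`."""
    return [math.copysign(max(abs(c) - thresh, 0.0), c) for c in coeffs]

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

# Denoise: decompose, shrink the detail (high-frequency) band, reconstruct.
noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
approx, detail = haar_dwt(noisy)
clean = haar_idwt(approx, soft_threshold(detail, 0.1))
print([round(x, 2) for x in clean])  # [1.05, 1.05, 0.95, 0.95, 5.05, 5.05, 4.95, 4.95]
```

A library such as PyWavelets provides db5 and multi-level decomposition; the structure of the computation is the same.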

  19. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    PubMed

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex and mostly unknown biological rules, and is in need of an intelligent algorithm to avoid resource-intensive lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, the most popular bacterial host for overexpressing recombinant proteins. To make the problem tractable, the overexpression level was categorized as low, medium or high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. A representative subset of these features was then determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
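    The paper's exact handling of the imbalanced dataset is not detailed in this abstract; one standard ingredient when training a classifier on imbalanced low/medium/high labels is inverse-frequency class weighting, sketched below with a hypothetical label distribution:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: w_c = n_samples / (n_classes * n_c).

    Rare classes receive larger weights, so a classifier trained with
    these weights is penalized more for misclassifying them.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

# Hypothetical distribution of the low/medium/high expression classes.
labels = ["low"] * 60 + ["medium"] * 30 + ["high"] * 10
weights = balanced_class_weights(labels)
print({c: round(w, 3) for c, w in sorted(weights.items())})
# {'high': 3.333, 'low': 0.556, 'medium': 1.111}
```

This is the same scheme as scikit-learn's `class_weight='balanced'` option, which a random forest classifier can take directly.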

  20. Quality of Care and Job Satisfaction in the European Home Care Setting: Research Protocol

    PubMed Central

    van der Roest, Henriëtte; van Hout, Hein; Declercq, Anja

    2016-01-01

    Introduction: Since the European population is ageing, a growing number of elderly people will need home care. Consequently, high-quality home care for the elderly remains an important challenge. Job satisfaction among care professionals is regarded as an important aspect of the quality of home care. Aim: This paper describes a research protocol to identify elements that have an impact on job satisfaction among care professionals and on quality of care for older people in the home care setting of six European countries. Methods: Data on elements at the macro-level (policy), meso-level (care organisations) and micro-level (clients) are of importance in determining job satisfaction and quality of care. Macro-level indicators will be identified in a previously published literature review. At meso- and micro-level, data will be collected by means of two questionnaires administered to care organisations and care professionals, and by means of interRAI Home Care assessments of clients. The client assessments will be used to calculate quality of care indicators. Subsequently, data will be analysed by means of linear and stepwise multiple regression analyses, correlations and multilevel techniques. Conclusions and Discussion: These results can guide health care policy makers in their decision-making process in order to increase the quality of home care in their organisation, their country or Europe. PMID:28435423

  1. Online Sea Ice Knowledge and Data Platform: www.seaiceportal.de

    NASA Astrophysics Data System (ADS)

    Treffeisen, R. E.; Nicolaus, M.; Bartsch, A.; Fritzsch, B.; Grosfeld, K.; Haas, C.; Hendricks, S.; Heygster, G.; Hiller, W.; Krumpen, T.; Melsheimer, C.; Nicolaus, A.; Ricker, R.; Weigelt, M.

    2016-12-01

    There is an increasing public interest in sea ice information from both Polar Regions, which requires up-to-date background information and data sets at different levels for various target groups. In order to serve this interest and need, seaiceportal.de (originally: meereisportal.de) was developed as a comprehensive German knowledge platform on sea ice and its snow cover in the Arctic and Antarctic. It was launched in April 2013. Since then, the content and selection of data sets have increased and the data portal has received growing attention, also from the international science community. Meanwhile, we are providing near-real-time and archived data of many key parameters of sea ice and its snow cover. The data sets result from measurements acquired by various platforms as well as from numerical simulations. Satellite observations (e.g., AMSR2, CryoSat-2 and SMOS) of sea ice concentration, freeboard, thickness and drift are available as gridded data sets. Sea ice and snow temperatures and thickness as well as atmospheric parameters are available from autonomous ice-tethered platforms (buoys). Additional ship observations, ice station measurements, and mooring time series are compiled as data collections over the last decade. In parallel, we are continuously extending our meta-data and uncertainty information for all data sets. In addition to the data portal, seaiceportal.de provides comprehensive general background information on sea ice and snow as well as expert statements on recent observations and developments. This content is mostly in German in order to complement the various existing international sites for the German-speaking public. We will present the portal, its content and function, but we are also asking for direct user feedback and are open to potential new partners.

  2. Evaluation of the spatial and temporal measurement requirements of remote sensors for monitoring regional air pollution episodes

    NASA Technical Reports Server (NTRS)

    Burke, H. H. K.; Bowley, C. J.; Barnes, J. C.

    1979-01-01

    The spatial and temporal measurement requirements of satellite sensors for monitoring regional air pollution episodes were evaluated. Use was made of two sets of data from the Sulfate Regional Experiment (SURE), which provided the first ground-based aerosol measurements from a regional-scale station network. The sulfate data were analyzed for two air pollution episode cases. The results of the analysis indicate that the key considerations required for episode mapping from satellite sensors are the following: (1) detection of sulfate levels exceeding 20 µg/m³; (2) capability to view a broad area (of the order of a 1500 km swath) because of the regional extent of pollution episodes; (3) spatial resolution sufficient to detect variations in sulfate levels of greater than 10 µg/m³ over distances of the order of 50 to 75 km; (4) repeat coverage at least on a daily basis; and (5) satellite observations during the mid to late morning local time, when the sulfate levels have begun to increase after the early morning minimum levels, and convective-type cloud cover has not yet increased to the amount reached later in the afternoon. Analysis of the satellite imagery shows that convective clouds can obscure haze patterns. Additional parameters based on spectral analysis include wavelength and bandwidth requirements.

  3. Comparing the efficiency of digital and conventional soil mapping to predict soil types in a semi-arid region in Iran

    NASA Astrophysics Data System (ADS)

    Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter

    2017-05-01

    The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup), (ii) validate the predicted soil maps against the same validation data set to determine the best method for producing the soil maps, and (iii) select the best soil taxonomic level for the different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios: with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of the soil maps. Multinomial logistic regression performed better at higher taxonomic levels (order and suborder), whereas random forest performed better at lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based). Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
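    The paper's combined index is not specified in this abstract; the sketch below shows one generic way such a purity/information/cost trade-off could be scored. The equal weights, the cost normalization, and all numbers are hypothetical, not the authors' index:

```python
def combined_index(purity, info_content, cost, max_cost):
    """Equal-weight efficiency score in [0, 1]; higher is better.

    `purity` and `info_content` are assumed already scaled to [0, 1];
    cost is inverted and normalized by `max_cost` so cheaper maps score higher.
    """
    return (purity + info_content + (1.0 - cost / max_cost)) / 3.0

# Hypothetical candidates: (purity, information content, relative cost).
candidates = {
    "MLR @ order level": (0.80, 0.40, 60.0),
    "RF @ great group":  (0.60, 0.80, 100.0),
}
scores = {name: combined_index(p, i, c, max_cost=100.0)
          for name, (p, i, c) in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # MLR @ order level 0.533
```

The interesting design question, as in the paper, is that the winner depends on the weights: a purity-heavy index and a cost-heavy index can prefer different method/taxonomic-level combinations.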

  4. Koopmans' analysis of chemical hardness with spectral-like resolution.

    PubMed

    Putz, Mihai V

    2013-01-01

    Three approximation levels of Koopmans' theorem are explored and applied. The first refers to the inner quantum behavior of the orbital energies, which depart from the genuine ones in Fock space when the wave-functions' Hilbert-Banach basis set is specified to solve the many-electronic spectra of spin-orbital eigenstates; this is the most subtle issue regarding Koopmans' theorem, as it has drawn much criticism and refutation in recent decades, yet it is shown here to be an irrefutable "observational" effect of computation, specific to any in silico spectrum of an eigenproblem. The second level assumes the "frozen spin-orbitals" approximation during the extraction or addition of electrons at the frontier of the chemical system through the ionization and affinity processes, respectively; this approximation is nevertheless workable for a great many chemical compounds, especially organic systems, and is justified by chemical reactivity and aromaticity hierarchies in homologue series. The third and most severe approximation extends the second to superior orders of ionization and affinity, here studied at the level of chemical hardness compact-finite expressions up to spectral-like resolution for a paradigmatic set of aromatic carbohydrates.
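    For orientation, the first-order relation underlying these hardness expressions is the standard Koopmans-type estimate; the paper's higher-order compact-finite forms extend this textbook relation:

```latex
% Chemical hardness in the frozen-orbital (Koopmans) approximation:
% ionization potential IP \approx -\varepsilon_{HOMO}, electron affinity EA \approx -\varepsilon_{LUMO}.
\eta \simeq \frac{IP - EA}{2}
     \simeq \frac{\varepsilon_{\mathrm{LUMO}} - \varepsilon_{\mathrm{HOMO}}}{2}
```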

  5. Koopmans' Analysis of Chemical Hardness with Spectral-Like Resolution

    PubMed Central

    2013-01-01

    Three approximation levels of Koopmans' theorem are explored and applied. The first refers to the inner quantum behavior of the orbital energies, which depart from the genuine ones in Fock space when the wave-functions' Hilbert-Banach basis set is specified to solve the many-electronic spectra of spin-orbital eigenstates; this is the most subtle issue regarding Koopmans' theorem, as it has drawn much criticism and refutation in recent decades, yet it is shown here to be an irrefutable “observational” effect of computation, specific to any in silico spectrum of an eigenproblem. The second level assumes the “frozen spin-orbitals” approximation during the extraction or addition of electrons at the frontier of the chemical system through the ionization and affinity processes, respectively; this approximation is nevertheless workable for a great many chemical compounds, especially organic systems, and is justified by chemical reactivity and aromaticity hierarchies in homologue series. The third and most severe approximation extends the second to superior orders of ionization and affinity, here studied at the level of chemical hardness compact-finite expressions up to spectral-like resolution for a paradigmatic set of aromatic carbohydrates. PMID:23970834

  6. Coupled-cluster and explicitly correlated perturbation-theory calculations of the uracil anion.

    PubMed

    Bachorz, Rafał A; Klopper, Wim; Gutowski, Maciej

    2007-02-28

    A valence-type anion of the canonical tautomer of uracil has been characterized using explicitly correlated second-order Møller-Plesset perturbation theory (RI-MP2-R12) in conjunction with conventional coupled-cluster theory with single, double, and perturbative triple excitations. At this level of electron-correlation treatment and after inclusion of a zero-point vibrational energy correction, determined in the harmonic approximation at the RI-MP2 level of theory, the valence anion is adiabatically stable with respect to the neutral molecule by 40 meV. The anion is characterized by a vertical detachment energy of 0.60 eV. To obtain accurate estimates of the vertical and adiabatic electron binding energies, a scheme was applied in which electronic energy contributions from various levels of theory were added, each of them extrapolated to the corresponding basis-set limit. The MP2 basis-set limits were also evaluated using an explicitly correlated approach, and the results of these calculations are in agreement with the extrapolated values. A remarkable feature of the valence anionic state is that the adiabatic electron binding energy is positive but smaller than the adiabatic electron binding energy of the dipole-bound state.

  7. Integration at the round table: marine spatial planning in multi-stakeholder settings.

    PubMed

    Olsen, Erik; Fluharty, David; Hoel, Alf Håkon; Hostens, Kristian; Maes, Frank; Pecceu, Ellen

    2014-01-01

    Marine spatial planning (MSP) is often considered a pragmatic approach to implementing ecosystem-based management in order to manage marine space in a sustainable way. This requires the involvement of multiple actors and stakeholders at various governmental and societal levels. Several factors affect how well the integrated management of marine waters will be achieved, such as different governance settings (division of power between central and local governments), economic activities (and related priorities), external drivers, spatial scales, incentives and objectives, varying approaches to legislation and political will. We compared MSP in Belgium, Norway and the US to illustrate how the integration of stakeholders and governmental levels differs among these countries along the factors mentioned above. Horizontal integration (between sectors) is successful in all three countries, achieved through the use of neutral 'round-table' meeting places for all actors. Vertical integration between government levels varies, with Belgium and Norway having achieved full integration while the US lacks integration of the legislature due to sharp disagreements among stakeholders and unsuccessful partisan leadership. Success factors include political will and leadership, process transparency and stakeholder participation, and should be considered in all MSP development processes.

  8. Predicting passenger seat comfort and discomfort on the basis of human, context and seat characteristics: a literature review.

    PubMed

    Hiemstra-van Mastrigt, Suzanne; Groenesteijn, Liesbeth; Vink, Peter; Kuijt-Evers, Lottie F M

    2017-07-01

    This literature review focused on passenger seat comfort and discomfort in a human-product-context interaction. The relationships between anthropometric variables (human level), activities (context level), seat characteristics (product level) and the perception of comfort and discomfort were studied through mediating variables, such as body posture, movement and interface pressure. It is concluded that there are correlations between anthropometric variables and interface pressure variables, and that this relationship is affected by body posture. The results of studies on the correlation between pressure variables and passenger comfort and discomfort are not in line with each other. Only associations were found between the other variables (e.g. activities and seat characteristics). A conceptual model illustrates the results of the review, but relationships could not be quantified due to a lack of statistical evidence and large differences in research set-ups between the reviewed papers. Practitioner Summary: This literature review set out to quantify the relationships between human, context and seat characteristics, and comfort and discomfort experience of passenger seats, in order to build a predictive model that can support seat designers and purchasers to make informed decisions. However, statistical evidence is lacking from existing literature.

  9. Integration at the Round Table: Marine Spatial Planning in Multi-Stakeholder Settings

    PubMed Central

    Olsen, Erik; Fluharty, David; Hoel, Alf Håkon; Hostens, Kristian; Maes, Frank; Pecceu, Ellen

    2014-01-01

    Marine spatial planning (MSP) is often considered a pragmatic approach to implementing ecosystem-based management in order to manage marine space in a sustainable way. This requires the involvement of multiple actors and stakeholders at various governmental and societal levels. Several factors affect how well the integrated management of marine waters will be achieved, such as different governance settings (division of power between central and local governments), economic activities (and related priorities), external drivers, spatial scales, incentives and objectives, varying approaches to legislation and political will. We compared MSP in Belgium, Norway and the US to illustrate how the integration of stakeholders and governmental levels differs among these countries along the factors mentioned above. Horizontal integration (between sectors) is successful in all three countries, achieved through the use of neutral ‘round-table’ meeting places for all actors. Vertical integration between government levels varies, with Belgium and Norway having achieved full integration while the US lacks integration of the legislature due to sharp disagreements among stakeholders and unsuccessful partisan leadership. Success factors include political will and leadership, process transparency and stakeholder participation, and should be considered in all MSP development processes. PMID:25299595

  10. The OPGT/MJS plasma wave science team

    NASA Technical Reports Server (NTRS)

    Scarf, F. L.

    1972-01-01

    Some properties of a model magnetosphere for Saturn were studied in order to determine the bounds that can be set on surface field strength and trapped particle population. The primary observational constraint was that nonthermal radiation similar to the Jovian radio emissions must be undetectable from Earth. It is argued that for a Saturn surface field of approximately one gauss, particles that are energized as they diffuse in from the magnetopause with conservation of magnetic moment will produce synchrotron radiation levels that are undetectable at a range of 9.5 AU. The plasma instabilities that heat the oncoming wind particles at the bow shock and others that can limit the stably-trapped flux levels are also discussed.

  11. Determination of lead, uranium, thorium, and thallium in silicate glass standard materials by isotope dilution mass spectrometry

    USGS Publications Warehouse

    Barnes, I.L.; Garner, E.L.; Gramlich, J.W.; Moore, L.J.; Murphy, T.J.; Machlan, L.A.; Shields, W.R.; Tatsumoto, M.; Knight, R.J.

    1973-01-01

    A set of four standard glasses has been prepared which have been doped with 61 different elements at the 500-, 50-, 1-, and 0.02-ppm level. The concentrations of lead, uranium, thorium, and thallium have been determined by isotope dilution mass spectrometry at a number of points in each of the glasses. The results obtained from independent determinations in two laboratories demonstrate the homogeneity of the samples and that precision of the order of 0.5% (95% L.E.) may be obtained by the method even at the 20-ppb level for these elements. The chemical and mass spectrometric procedures necessary are presented.

  12. Corrigenda of 'explicit wave-averaged primitive equations using a generalized Lagrangian Mean'

    NASA Astrophysics Data System (ADS)

    Ardhuin, F.; Rascle, N.; Belibassakis, K. A.

    2017-05-01

    Ardhuin et al. (2008) gave a second-order approximation in the wave slope of the exact Generalized Lagrangian Mean (GLM) equations derived by Andrews and McIntyre (1978), and also performed a coordinate transformation, going from GLM to a 'GLMz' set of equations. That latter step removed the wandering of the GLM mean sea level away from the Eulerian-mean sea level, making the GLMz flow non-divergent. That step contained some inaccurate statements about the coordinate transformation, while the rest of the paper contained an error on the surface dynamic boundary condition for viscous stresses. I am thankful to Mathias Delpey and Hidenori Aiki for pointing out these errors, which are corrected below.

  13. Yucca Mountain, Nevada - A proposed geologic repository for high-level radioactive waste

    USGS Publications Warehouse

    Levich, R.A.; Stuckless, J.S.

    2006-01-01

    Yucca Mountain in Nevada represents the proposed solution to what has been a lengthy national effort to dispose of high-level radioactive waste, waste which must be isolated from the biosphere for tens of thousands of years. This chapter reviews the background of that national effort and includes some discussion of international work in order to provide a more complete framework for the problem of waste disposal. Other chapters provide the regional geologic setting, the geology of the Yucca Mountain site, the tectonics, and climate (past, present, and future). These last two chapters are integral to prediction of long-term waste isolation. ?? 2007 Geological Society of America. All rights reserved.

  14. The relationship between body mass and field metabolic rate among individual birds and mammals.

    PubMed

    Hudson, Lawrence N; Isaac, Nick J B; Reuman, Daniel C

    2013-09-01

    1. The power-law dependence of metabolic rate on body mass has major implications at every level of ecological organization. However, the overwhelming majority of studies examining this relationship have used basal or resting metabolic rates, and/or have used data consisting of species-averaged masses and metabolic rates. Field metabolic rates are more ecologically relevant and are probably more directly subject to natural selection than basal rates. Individual rates might be more important than species-average rates in determining the outcome of ecological interactions, and hence selection. 2. Here we provide the first comprehensive database of published field metabolic rates and body masses of individual birds and mammals, containing measurements of 1498 animals of 133 species in 28 orders. We used linear mixed-effects models to answer questions about the body mass scaling of metabolic rate and its taxonomic universality/heterogeneity that have become classic areas of controversy. Our statistical approach allows mean scaling exponents and taxonomic heterogeneity in scaling to be analysed in a unified way while simultaneously accounting for nonindependence in the data due to shared evolutionary history of related species. 3. The mean power-law scaling exponents of metabolic rate vs. body mass relationships were 0.71 [95% confidence intervals (CI) 0.625-0.795] for birds and 0.64 (95% CI 0.564-0.716) for mammals. However, these central tendencies obscured meaningful taxonomic heterogeneity in scaling exponents. The primary taxonomic level at which heterogeneity occurred was the order level. Substantial heterogeneity also occurred at the species level, a fact that cannot be revealed by the species-averaged data sets used in prior work. Variability in scaling exponents at both the order and species levels was comparable to or exceeded both the difference between the canonical exponents (3/4 - 2/3 = 1/12) and the difference between our bird and mammal estimates (0.71 - 0.64 = 0.07). 4. Results are interpreted in the light of a variety of existing theories. 
In particular, results are consistent with the heat dissipation theory of Speakman & Król (2010) and provided some support for the metabolic levels boundary hypothesis of Glazier (2010). 5. Our analysis provides the first comprehensive empirical analysis of the scaling relationship between field metabolic rate and body mass in individual birds and mammals. Our data set is a valuable contribution to those interested in theories of the allometry of metabolic rates. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
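The mixed-effects machinery in the study cannot be reproduced from the abstract, but the core estimate, a power-law exponent from a log-log regression of metabolic rate on body mass, can be sketched on synthetic individual-level data. All numbers below are simulated, with the bird-like exponent 0.71 built in.

```python
import numpy as np

rng = np.random.default_rng(0)
true_exponent = 0.71                       # bird estimate quoted in the abstract
log_mass = rng.uniform(1.0, 3.5, 500)      # synthetic log10 body masses (g)
# The power law FMR = a * M^b becomes linear on log axes:
#   log10(FMR) = log10(a) + b * log10(M) + noise
log_fmr = 0.5 + true_exponent * log_mass + rng.normal(0.0, 0.1, 500)
slope, intercept = np.polyfit(log_mass, log_fmr, 1)  # slope estimates b
```

A mixed-effects fit (e.g. random slopes by order and species) would add the taxonomic heterogeneity the authors analyse; this OLS sketch recovers only the mean exponent.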

  15. Does Admission to the ICU Prevent African American Disparities in Withdrawal of Life-Sustaining Treatment?

    PubMed

    Chertoff, Jason; Olson, Angela; Alnuaimat, Hassan

    2017-10-01

    We sought to determine whether Black patients admitted to an ICU were less likely than White patients to withdraw life-sustaining treatments. We performed a retrospective cohort study of hospital discharges from October 20, 2015, to October 19, 2016, for inpatients 18 years old or older and recorded those patients, along with their respective races, who had an "Adult Comfort Care" order set placed prior to discharge. A two-sample test for equality of two proportions with continuity correction was performed to compare the proportions between Black and White patients. The setting was University of Florida Health. The study cohort included 29,590 inpatient discharges, with 21,212 Caucasians (71.69%), 5,825 African Americans (19.69%), and 2,546 non-Caucasians/non-African Americans (8.62%). The main outcome was withdrawal of life-sustaining treatments. Of the total discharges (n = 29,590), 525 (1.77%) had the Adult Comfort Care order set placed. Seventy-eight of 5,825 African American patients (1.34%) had the Adult Comfort Care order set placed, whereas 413 of 21,212 Caucasian patients (1.95%) had this order set placed (p = 0.00251; 95% CI, 0.00248-0.00968). Of the 29,590 patients evaluated, 6,324 patients (21.37%) spent at least one night in an ICU. Of these 6,324 patients, 4,821 (76.24%) were White and 1,056 (16.70%) were Black. Three hundred fifty of 6,324 (5.53%) were discharged with an Adult Comfort Care order set. Two hundred seventy-one White patients (5.62%) with one night in an ICU were discharged with an Adult Comfort Care order set, whereas 54 Black patients (5.11%) with one night in an ICU had the order set (p = 0.516). This study suggests that Black patients may be less likely to withdraw life-supportive measures than White patients, but that this disparity may be absent in patients who spend time in the ICU during their hospitalization.
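The test named in the abstract, a two-sample test for equality of proportions with continuity correction (as in R's prop.test), can be reproduced from the reported counts. A minimal sketch:

```python
import math

def two_prop_test_cc(x1, n1, x2, n2):
    """Two-sample z-test for equality of proportions with Yates continuity
    correction; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    cc = 0.5 * (1 / n1 + 1 / n2)               # continuity correction
    z = (abs(p1 - p2) - cc) / se
    p_two_sided = math.erfc(z / math.sqrt(2))  # = 2 * (1 - Phi(z))
    return z, p_two_sided

# Counts from the abstract: 78/5,825 African American vs. 413/21,212 Caucasian
z, p = two_prop_test_cc(78, 5825, 413, 21212)  # p ≈ 0.0025, matching the report
```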

  16. Physical order produces healthy choices, generosity, and conventionality, whereas disorder produces creativity.

    PubMed

    Vohs, Kathleen D; Redden, Joseph P; Rahinel, Ryan

    2013-09-01

    Order and disorder are prevalent in both nature and culture, which suggests that each environ confers advantages for different outcomes. Three experiments tested the novel hypotheses that orderly environments lead people toward tradition and convention, whereas disorderly environments encourage breaking with tradition and convention-and that both settings can alter preferences, choice, and behavior. Experiment 1 showed that relative to participants in a disorderly room, participants in an orderly room chose healthier snacks and donated more money. Experiment 2 showed that participants in a disorderly room were more creative than participants in an orderly room. Experiment 3 showed a predicted crossover effect: Participants in an orderly room preferred an option labeled as classic, but those in a disorderly room preferred an option labeled as new. Whereas prior research on physical settings has shown that orderly settings encourage better behavior than disorderly ones, the current research tells a nuanced story of how different environments suit different outcomes.

  17. A Galerkin formulation of the MIB method for three dimensional elliptic interface problems

    PubMed Central

    Xia, Kelin; Wei, Guo-Wei

    2014-01-01

    We develop a three dimensional (3D) Galerkin formulation of the matched interface and boundary (MIB) method for solving elliptic partial differential equations (PDEs) with discontinuous coefficients, i.e., the elliptic interface problem. The present approach builds up two sets of elements respectively on two extended subdomains which both include the interface. As a result, two sets of elements overlap each other near the interface. Fictitious solutions are defined on the overlapping part of the elements, so that the differentiation operations of the original PDEs can be discretized as if there was no interface. The extra coefficients of polynomial basis functions, which furnish the overlapping elements and solve the fictitious solutions, are determined by interface jump conditions. Consequently, the interface jump conditions are rigorously enforced on the interface. The present method utilizes Cartesian meshes to avoid the mesh generation in conventional finite element methods (FEMs). We implement the proposed MIB Galerkin method with three different elements, namely, rectangular prism element, five-tetrahedron element and six-tetrahedron element, which tile the Cartesian mesh without introducing any new node. The accuracy, stability and robustness of the proposed 3D MIB Galerkin are extensively validated over three types of elliptic interface problems. In the first type, interfaces are analytically defined by level set functions. These interfaces are relatively simple but admit geometric singularities. In the second type, interfaces are defined by protein surfaces, which are truly arbitrarily complex. The last type of interfaces originates from multiprotein complexes, such as molecular motors. Near second order accuracy has been confirmed for all of these problems. To our knowledge, it is the first time for an FEM to show a near second order convergence in solving the Poisson equation with realistic protein surfaces. 
Additionally, the present work offers the first known near second order accurate method for C1 continuous or H2 continuous solutions associated with a Lipschitz continuous interface in a 3D setting. PMID:25309038

  18. Analysis on Flexural Strength of A36 Mild Steel by Design of Experiment (DOE)

    NASA Astrophysics Data System (ADS)

    Nurulhuda, A.; Hafizzal, Y.; Izzuddin, MZM; Sulawati, MRN; Rafidah, A.; Suhaila, Y.; Fauziah, AR

    2017-08-01

    Demand for high-quality, reliable components and materials is increasing, so flexural tests have become a vital test method in both research and manufacturing for explaining in detail a material's ability to withstand deformation under load. There has been little study of the effects of thickness, welding type, and joint design on flexural behaviour using a DOE approach, so this research reports the flexural strength of mild steel, which is not well documented. Using Design of Experiment (DOE), a full factorial design with two replications was used to study the effects of the important parameters: welding type, thickness, and joint design. The output response is the flexural strength value. Randomized experiments were conducted based on a run table generated with the Minitab software. A normality test using the Anderson-Darling statistic gave a P-value < 0.005; the data are therefore not normal, since there is a significant difference between the actual data and the ideal distribution. Referring to the ANOVA, only the joint design factor is significant, since its P-value is less than 0.05. From the main effects plot and interaction plot, the recommended settings were the high level for welding type, the high level for thickness, and the low level for joint design. A prediction model was developed through regression in order to estimate the effect on the output response of any change in the parameter settings. In the future, the experiments could be extended using Taguchi methods in order to verify the results.
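The design described, a 2^3 full factorial with two replications, randomized run order, and main-effect screening, can be sketched as follows. The factor names are the study's, but the response model is synthetic and merely echoes the abstract's finding that only joint design matters.

```python
from itertools import product
import random

random.seed(1)
# 2^3 full factorial in coded units (-1/+1), two replications:
# factors are welding type, thickness, joint design (synthetic response below)
runs = [combo for combo in product([-1, 1], repeat=3) for _ in range(2)]
random.shuffle(runs)                      # randomized run order

def flexural_strength(weld, thick, joint):
    # Synthetic response: only joint design has a real effect,
    # echoing the ANOVA result reported in the abstract.
    return 300.0 - 25.0 * joint + random.gauss(0.0, 2.0)

data = [(w, t, j, flexural_strength(w, t, j)) for (w, t, j) in runs]

def main_effect(data, idx):
    """Mean response at the high level minus mean at the low level."""
    hi = [y for *x, y in data if x[idx] == 1]
    lo = [y for *x, y in data if x[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(data, i) for i in range(3)]  # joint-design effect ≈ -50
```

The negative joint-design effect points to the low level of that factor, matching the recommended setting in the abstract; the other two effects are indistinguishable from noise.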

  19. General Approach for Rock Classification Based on Digital Image Analysis of Electrical Borehole Wall Images

    NASA Astrophysics Data System (ADS)

    Linek, M.; Jungmann, M.; Berlage, T.; Clauser, C.

    2005-12-01

    Within the Ocean Drilling Program (ODP), image logging tools such as the Formation MicroScanner (FMS) or the Resistivity-At-Bit (RAB) tool have been routinely deployed. Both logging methods are based on resistivity measurements at the borehole wall and therefore are sensitive to conductivity contrasts, which are mapped in color-scale images. These images are commonly used to study the structure of the sedimentary rocks and the oceanic crust (petrologic fabric, fractures, veins, etc.). So far, mapping of lithology from electrical images has been purely based on visual inspection and subjective interpretation. We apply digital image analysis to electrical borehole wall images in order to develop a method that supports objective rock identification. We focus on supervised textural pattern recognition, which studies the spatial gray level distribution with respect to certain rock types. FMS image intervals of rock classes known from core data are taken in order to train textural characteristics for each class. A so-called gray level co-occurrence matrix is computed by counting the occurrences of pairs of gray levels that are a certain distance apart. Once the matrix for an image interval is computed, we calculate the image contrast, homogeneity, energy, and entropy. We assign characteristic textural features to different rock types by reducing the image information to a small set of descriptive features. Once a discriminating set of texture features for each rock type is found, we are able to classify entire FMS images according to the trained rock types. A rock classification based on texture features enables quantitative lithology mapping and is characterized by high repeatability, in contrast to a purely visual, subjective image interpretation. We show examples of rock classification between breccias, pillows, massive units, and horizontally bedded tuffs based on ODP image data.
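The texture pipeline described, a gray level co-occurrence matrix (GLCM) reduced to contrast, homogeneity, energy, and entropy, can be sketched directly. This is a minimal numpy version for a single pixel offset; production work would typically use a library such as scikit-image.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """GLCM for one pixel offset plus the four texture features named in the
    text. `img` must already be quantized to integer values in [0, levels)."""
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()                       # normalize to probabilities
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return contrast, homogeneity, energy, entropy

# A flat image has zero contrast; a checkerboard maximizes it for this offset.
flat = np.zeros((16, 16), dtype=int)
checker = np.indices((16, 16)).sum(axis=0) % 2 * 7
flat_feats = glcm_features(flat)
checker_feats = glcm_features(checker)
```

Concatenating such features over several offsets (distances and directions) gives the small descriptive feature vector per image interval that the classification is trained on.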

  20. The Ising model coupled to 2d orders

    NASA Astrophysics Data System (ADS)

    Glaser, Lisa

    2018-04-01

    In this article we take first steps towards coupling matter to causal set theory in the path integral. We explore the case of the Ising model coupled to the 2d discrete Einstein-Hilbert action, restricted to the 2d orders. We probe the phase diagram in terms of the Wick rotation parameter β and the Ising coupling j and find that the matter and the causal sets together give rise to an interesting phase structure. The couplings give rise to five different phases. The causal sets take on random or crystalline characteristics as described in Surya (2012 Class. Quantum Grav. 29 132001), and the Ising model can be correlated or uncorrelated on the random orders and correlated, uncorrelated or anti-correlated on the crystalline orders. We find that at least one new phase transition arises, in which the Ising spins push the causal set into the crystalline phase.
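The matter sector here is the standard Ising model. As a point of reference only, a Metropolis sampler for the ordinary 2D Ising model on a fixed square lattice (not on causal-set orders, which is the paper's actual setting) shows the correlated/uncorrelated distinction as the usual ordering transition in the coupling strength:

```python
import math
import random

def metropolis_ising(L=16, j=1.0, beta=0.6, sweeps=300, seed=0):
    """Metropolis sampling of the ordinary 2D Ising model on an L x L torus.
    Reference illustration only: the paper couples the spins to causal-set
    orders, not to a regular lattice. Returns mean |magnetization| per spin."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]            # ordered start
    for _ in range(sweeps):
        for _ in range(L * L):
            r, c = rng.randrange(L), rng.randrange(L)
            nn = (s[(r + 1) % L][c] + s[(r - 1) % L][c]
                  + s[r][(c + 1) % L] + s[r][(c - 1) % L])
            dE = 2.0 * j * s[r][c] * nn        # energy cost of flipping s[r][c]
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[r][c] = -s[r][c]
    m = sum(sum(row) for row in s) / (L * L)
    return abs(m)

# Below the critical point (beta * j above ~0.44) spins stay correlated;
# at small beta they decorrelate.
m_cold = metropolis_ising(beta=0.6)
m_hot = metropolis_ising(beta=0.1)
```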

  1. Securing support for eye health policy in low- and middle-income countries: identifying stakeholders through a multi-level analysis.

    PubMed

    Morone, Piergiuseppe; Camacho Cuena, Eva; Kocur, Ivo; Banatvala, Nicholas

    2014-05-01

    This article empirically evaluates advocacy in low- and middle-income countries as a key tool for raising policy priority and securing high-level decision-maker support in eye health. We used a unique data set based on a survey conducted by the World Health Organization in 2011 on eye care and prevention of blindness in 82 low- and middle-income countries. The theoretical framework derives from the idea that a plethora of stakeholders at the local and global levels, acting in the economic and political spheres, pressure national governments. Eye care has not previously been investigated in such a framework. We found structural differences across countries with different income levels and propose policy recommendations to secure high-level decision makers' support for promoting eye health. Three case studies suggest that, in order to secure more support and resources for eye health, domestic and international stakeholders must strengthen their engagement with ministries of health at the political and, above all, economic levels.

  2. Outbrief - Long Life Rocket Engine Panel

    NASA Technical Reports Server (NTRS)

    Quinn, Jason Eugene

    2004-01-01

    This white paper is an overview of the JANNAF Long Life Rocket Engine (LLRE) Panel results from the last several years of activity. The LLRE Panel has met over the last several years in order to develop an approach for the development of long life rocket engines. Membership for this panel was drawn from a diverse set of the groups currently working on rocket engines (i.e., government labs, both large and small companies, and university members). The LLRE Panel was formed in order to determine the best way to enable the design of rocket engine systems that have a life capability greater than 500 cycles while meeting or exceeding current performance levels (specific impulse and thrust/weight) with a 1/1,000,000 likelihood of vehicle loss due to rocket system failure. After several meetings and much independent work, the panel reached a consensus opinion that the primary issues preventing LLRE are a lack of: physics-based life prediction, combined loads prediction, understanding of material microphysics, cost-effective system-level testing, and the inclusion of fabrication process effects into physics-based models. Given the expected level of funding devoted to LLRE development, the panel recommended that fundamental research efforts focused on these five areas be emphasized.

  3. Dispersal similarly shapes both population genetics and community patterns in the marine realm

    NASA Astrophysics Data System (ADS)

    Chust, Guillem; Villarino, Ernesto; Chenuil, Anne; Irigoien, Xabier; Bizsel, Nihayet; Bode, Antonio; Broms, Cecilie; Claus, Simon; Fernández de Puelles, María L.; Fonda-Umani, Serena; Hoarau, Galice; Mazzocchi, Maria G.; Mozetič, Patricija; Vandepitte, Leen; Veríssimo, Helena; Zervoudaki, Soultana; Borja, Angel

    2016-06-01

    Dispersal plays a key role in connecting populations and, if limited, is one of the main processes that maintain and generate regional biodiversity. According to neutral theories of molecular evolution and biodiversity, dispersal limitation of propagules and population stochasticity are integral to shaping both genetic and community structure. We conducted a parallel analysis of biological connectivity at the genetic and community levels in marine groups with different dispersal traits. We compiled large data sets of population genetic structure (98 benthic macroinvertebrate and 35 planktonic species) and biogeographic data (2193 benthic macroinvertebrate and 734 planktonic species). We estimated dispersal distances from population genetic data (i.e., FST vs. geographic distance) and from β-diversity at the community level. Dispersal distances ranked the biological groups in the same order at both genetic and community levels, as predicted by organism dispersal ability and seascape connectivity: macrozoobenthic species without dispersing larvae, followed by macrozoobenthic species with dispersing larvae and plankton (phyto- and zooplankton). This ranking order reflects constraints on the movement of macrozoobenthos within the seabed compared with the pelagic habitat. We showed that dispersal limitation similarly determines the degree of connectivity of communities and populations, supporting the predictions of neutral theories in marine biodiversity patterns.

  4. Out-of-time-order correlators in finite open systems

    NASA Astrophysics Data System (ADS)

    Syzranov, S. V.; Gorshkov, A. V.; Galitski, V.

    2018-04-01

    We study out-of-time-order correlators (OTOCs) of the form for a quantum system weakly coupled to a dissipative environment. Such an open system may serve as a model of, e.g., a small region in a disordered interacting medium coupled to the rest of this medium considered as an environment. We demonstrate that for a system with discrete energy levels the OTOC saturates exponentially, ∝ ∑_i a_i e^(-t/τ_i) + const, to a constant value at t → ∞, in contrast with quantum-chaotic systems, which exhibit exponential growth of OTOCs. Focusing on the case of a two-level system, we calculate microscopically the decay times τ_i and the value of the saturation constant. Because some OTOCs are immune to dephasing processes and some are not, such correlators may decay on two sets of parametrically different time scales, related to inelastic transitions between the system levels and to pure dephasing processes, respectively. In the case of a classical environment, the evolution of the OTOC can be mapped onto the evolution of the density matrix of two systems coupled to the same dissipative environment.

  5. Adapting to blur produced by ocular high-order aberrations

    PubMed Central

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-01-01

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer’s HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image. PMID:21712375

  6. Adapting to blur produced by ocular high-order aberrations.

    PubMed

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-06-28

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer's HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image.

  7. Hard X-ray photoemission study of the Fabre salts (TMTTF)2X (X = SbF6 and PF6)

    NASA Astrophysics Data System (ADS)

    Medjanik, Katerina; de Souza, Mariano; Kutnyakhov, Dmytro; Gloskovskii, Andrei; Müller, Jens; Lang, Michael; Pouget, Jean-Paul; Foury-Leylekian, Pascale; Moradpour, Alec; Elmers, Hans-Joachim; Schönhense, Gerd

    2014-11-01

    Core-level photoemission spectra of the Fabre salts with X = SbF6 and PF6 were taken using hard X-rays from PETRA III, Hamburg. In these salts TMTTF layers show a significant stack dimerization with a charge transfer of 1 e per dimer to the anion SbF6 or PF6. At room temperature and slightly below, the core-level spectra exhibit single lines, characteristic of a well-screened metallic state. At reduced temperatures progressive charge localization sets in, followed by a second-order phase transition into a charge-ordered ground state. In both salts groups of new core-level signals occur, shifted towards lower kinetic energies. This is indicative of a reduced transverse conductivity across the anion layers, visible as layer-dependent charge depletion for both samples. The surface potential was traced via shifts of core-level signals of an adsorbate. A well-defined potential could be established by a conducting cap layer of 5 nm aluminum, which appears "transparent" due to the large probing depth of HAXPES (8-10 nm). At the transition into the charge-ordered phase, the fluorine 1s line of (TMTTF)2SbF6 shifts by 2.8 eV to higher binding energy. This is a spectroscopic fingerprint of the loss of inversion symmetry, accompanied by a cooperative shift of the SbF6 anions towards the more positively charged TMTTF donors. This shift does not occur for the X = PF6 compound, most likely due to smaller charge disproportionation or to the presence of charge disorder.

  8. Andromonoecy and developmental plasticity in Chaerophyllum bulbosum (Apiaceae–Apioideae)

    PubMed Central

    Reuther, Kerstin; Claßen-Bockhoff, Regine

    2013-01-01

    Background and Aims: Andromonoecy, the presence of hermaphrodite and male flowers in the same individual, is genetically fixed or induced, e.g. by fruit set. Little is known about the forces triggering andromonoecy in the Apiaceae. In the present study, a natural population of the protandrous Chaerophyllum bulbosum was investigated to elucidate architectural constraints and effects of resource reallocation. Methods: Three sets of plants (each n = 15) were treated by hand pollination, pollinator exclusion and removal of low-order inflorescences. Fifteen untreated plants were left as controls. Key Results: Untreated plants produce umbels up to the third branch order, with the proportion of male flowers increasing from 15% (terminal umbel) to 100% (third-order umbels). Fruit set correspondingly decreases from 70% (terminal umbel) to <10% (second-order umbels). Insignificant differences from hand-pollinated plants do not reveal any sign of pollinator limitation at the study site. Bagged individuals show the same increase in male flowers with age as untreated plants, indicating that andromonoecy is not induced by fruit set. After umbel removal, individuals tend to present a higher number of hermaphrodite flowers and fruits in the umbels of second and third order. Three plants (25%) produced an additional branch order composed of 100% male umbels. Conclusions: Inherited andromonoecy and the plastic response to environmental conditions are interpreted as a self-regulating system, saving investment costs and optimizing fruit set at the same time. PMID:23585495

  9. Parietal blood oxygenation level-dependent response evoked by covert visual search reflects set-size effect in monkeys.

    PubMed

    Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P

    2014-03-01

    Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify if additional other cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant as was expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Self-reflection and set-shifting mediate awareness in cognitively preserved schizophrenia patients.

    PubMed

    Gilleen, James; David, Anthony; Greenwood, Kathryn

    2016-05-01

    Poor insight in schizophrenia has been linked to poor cognitive functioning, psychological processes such as denial, or more recently with impaired metacognitive capacity. Few studies, however, have investigated the potential co-dependency of multiple factors in determining level of insight, but such a model is necessary in order to account for patients with good cognitive functioning who have very poor awareness. As evidence suggests that set-shifting and cognitive insight (self-reflection (SR) and self-certainty) are strong predictors of awareness we proposed that these factors are key mediators in the relationship between cognition and awareness. We hypothesised that deficits specifically in SR and set-shifting determine level of awareness in the context of good cognition. Thirty schizophrenia patients were stratified by high and low awareness of illness and executive functioning scores. Cognitive insight, cognition, mood and symptom measures were compared between sub-groups. A low insight/high executive functioning (LI-HE) group, a high insight/high executive functioning (HI-HE) group and a low insight/low executive functioning (LI-LE) group were revealed. As anticipated, the LI-HE patients showed significantly lower capacity for SR and set-shifting than the HI-HE patients. This study indicates that good cognitive functioning is necessary but not sufficient for good awareness; good awareness specifically demands preserved capacity to self-reflect and shift-set. Results support Nelson and Narens' [1990. Metamemory: A theoretical framework and new findings. The Psychology of Learning and Motivation, 26, 125-173] model of metacognition by which awareness is founded on control (set-shifting) and monitoring (SR) processes. These specific factors could be targeted to improve insight in patients with otherwise unimpaired cognitive function.

  11. Conflict resolution styles in the nursing profession.

    PubMed

    Losa Iglesias, Marta Elena; Becerro de Bengoa Vallejo, Ricardo

    2012-12-01

    Managers, including those in nursing environments, may spend much of their time addressing employee conflicts. If not handled properly, conflict may significantly affect employee morale, increase turnover, and even result in litigation, ultimately affecting the overall well-being of the organization. A clearer understanding of the factors that underlie conflict resolution styles could lead to the promotion of better management strategies. The aim of this research was to identify the predominant conflict resolution styles used by a sample of Spanish nurses in two work settings, academic and clinical, in order to determine differences between these environments. The effects of employment level and demographic variables were explored as well. The design was a descriptive cross-sectional survey study. Our sample consisted of professional nurses in Madrid, Spain, who worked in either a university setting or a clinical care setting. Within each of these environments, nurses worked at one of three levels: full professor, assistant professor, or scholarship professor in the academic setting; and nursing supervisor, registered staff nurse, or nursing assistant in the clinical setting. Conflict resolution style was examined using the standardized Thomas-Kilmann Conflict Mode Instrument, a dual-choice questionnaire that assesses a respondent's predominant style of conflict resolution. Five styles are defined: accommodating, avoiding, collaborating, competing, and compromising. Participants were asked to give answers that characterized their dominant response in a conflict situation involving either a superior or a subordinate. Descriptive and inferential statistics were used to examine the relationship between workplace setting and conflict resolution style. The most common style used by nurses overall to resolve workplace conflict was compromising, followed by competing, avoiding, accommodating, and collaborating. 
There was a significant overall difference in styles between nurses who worked in an academic vs. a clinical setting (p = 0.005), with the greatest difference seen for the accommodating style. Of those nurses for whom accommodation was the primary style, 83% worked in a clinical setting compared to just 17% in an academic setting. Further examination of the difference in conflict-solving approaches between academic and clinical nursing environments might shed light on etiologic factors, which in turn might enable nursing management to institute conflict management interventions that are tailored to specific work environments and adapted to different employment levels. This research increases our understanding of preferred approaches to handling conflict in nursing organizations.

  12. Effects of host social hierarchy on disease persistence.

    PubMed

    Davidson, Ross S; Marion, Glenn; Hutchings, Michael R

    2008-08-07

The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to a simple birth and death process is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions, correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections.
Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.
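As a toy illustration of how skewed contact patterns enter such models, the sketch below Euler-integrates a two-level SI system with balanced births and deaths. The contact matrix, rates, and initial conditions are invented for illustration and are not taken from the paper; the actual model also includes movement between levels, omitted here.

```python
import numpy as np

def si_two_group(beta, mix, mu=0.02, b=0.02, T=400.0, dt=0.01):
    """Euler-integrate a toy two-level SI model with births and deaths.

    mix[i][j] scales contacts of level i with infecteds of level j; a
    non-uniform matrix mimics the skewed contact patterns induced by a
    dominance hierarchy. All rates here are illustrative assumptions.
    """
    mix = np.asarray(mix, dtype=float)
    S = np.array([0.5, 0.5])    # susceptible fraction in each level
    I = np.array([1e-3, 1e-3])  # infected fraction in each level
    for _ in range(int(T / dt)):
        force = beta * (mix @ I)              # per-level force of infection
        dS = b * (S + I) - force * S - mu * S
        dI = force * S - mu * I
        S, I = S + dt * dS, I + dt * dI
    return I.sum() / (S.sum() + I.sum())      # endemic prevalence

# Uniform mixing vs. a skewed ("hierarchical") contact matrix in which
# the dominant level both gives and receives more contacts
flat = si_two_group(0.5, [[1.0, 1.0], [1.0, 1.0]])
skew = si_two_group(0.5, [[1.6, 0.8], [0.4, 1.2]])
```

Comparing `flat` and `skew` for various contact matrices gives a quick feel for how much non-uniform mixing shifts the endemic prevalence away from the homogeneous baseline.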

  13. Normality of different orders for Cantor series expansions

    NASA Astrophysics Data System (ADS)

    Airey, Dylan; Mance, Bill

    2017-10-01

    Let S ⊆ ℕ have the property that for each k ∈ S the set (S − k) ∩ ℕ \ S has asymptotic density 0. We prove that there exists a basic sequence Q where the set of numbers Q-normal of all orders in S but not Q-normal of all orders not in S has full Hausdorff dimension. If the function …

  14. Fixed point theorems for generalized contractions in ordered metric spaces

    NASA Astrophysics Data System (ADS)

    O'Regan, Donal; Petrusel, Adrian

    2008-05-01

    The purpose of this paper is to present some fixed point results for self-generalized contractions in ordered metric spaces. Our results generalize and extend some recent results of A.C.M. Ran, M.C. Reurings [A.C.M. Ran, M.C. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations, Proc. Amer. Math. Soc. 132 (2004) 1435-1443], J.J. Nieto, R. Rodríguez-López [J.J. Nieto, R. Rodríguez-López, Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations, Order 22 (2005) 223-239; J.J. Nieto, R. Rodríguez-López, Existence and uniqueness of fixed points in partially ordered sets and applications to ordinary differential equations, Acta Math. Sin. (Engl. Ser.) 23 (2007) 2205-2212], J.J. Nieto, R.L. Pouso, R. Rodríguez-López [J.J. Nieto, R.L. Pouso, R. Rodríguez-López, Fixed point theorems in ordered abstract spaces, Proc. Amer. Math. Soc. 135 (2007) 2505-2517], A. Petrusel, I.A. Rus [A. Petrusel, I.A. Rus, Fixed point theorems in ordered L-spaces, Proc. Amer. Math. Soc. 134 (2006) 411-418] and R.P. Agarwal, M.A. El-Gebeily, D. O'Regan [R.P. Agarwal, M.A. El-Gebeily, D. O'Regan, Generalized contractions in partially ordered metric spaces, Appl. Anal., in press]. As applications, existence and uniqueness results for Fredholm and Volterra type integral equations are given.

  15. Risky Business: Factor Analysis of Survey Data – Assessing the Probability of Incorrect Dimensionalisation

    PubMed Central

    van der Eijk, Cees; Rose, Jonathan

    2015-01-01

    This paper undertakes a systematic assessment of the extent to which factor analysis recovers the correct number of latent dimensions (factors) when applied to ordered-categorical survey items (so-called Likert items). We simulate 2400 data sets of uni-dimensional Likert items that vary systematically over a range of conditions such as the underlying population distribution, the number of items, the level of random error, and characteristics of items and item-sets. Each of these datasets is factor analysed in a variety of ways that are frequently used in the extant literature, or that are recommended in current methodological texts. These include exploratory factor retention heuristics such as Kaiser’s criterion, Parallel Analysis and a non-graphical scree test, and (for exploratory and confirmatory analyses) evaluations of model fit. These analyses are conducted on the basis of Pearson and polychoric correlations. We find that, irrespective of the particular mode of analysis, factor analysis applied to ordered-categorical survey data very often leads to over-dimensionalisation. The magnitude of this risk depends on the specific way in which factor analysis is conducted, the number of items, the properties of the set of items, and the underlying population distribution. The paper concludes with a discussion of the consequences of over-dimensionalisation, and a brief mention of alternative modes of analysis that are much less prone to such problems. PMID:25789992
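Parallel Analysis, one of the factor retention heuristics evaluated in the paper, can be sketched briefly: retain factors whose sample correlation-matrix eigenvalues exceed the mean eigenvalues obtained from random data of the same shape. The item-generation details below are illustrative assumptions, not the paper's simulation design, and the sketch uses Pearson rather than polychoric correlations.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Horn's parallel analysis on Pearson correlations (a sketch):
    count factors whose observed eigenvalues exceed the mean
    eigenvalues of same-shaped random-normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))

# One latent factor driving 6 coarsened, Likert-style items
rng = np.random.default_rng(1)
f = rng.standard_normal(500)
items = np.clip(np.round(f[:, None] + 0.8 * rng.standard_normal((500, 6))), -2, 2)
k = parallel_analysis(items)
```

For this strongly uni-dimensional example the heuristic retains one factor; the paper's point is that under less favourable conditions (fewer categories, skewed distributions, Pearson correlations) such procedures often retain too many.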

  16. Effect of molecular environment on the vibrational dynamics of pyrimidine bases as analysed by NIS, optical spectroscopy and quantum mechanical force fields

    NASA Astrophysics Data System (ADS)

    Ghomi, M.; Aamouche, A.; Cadioli, B.; Berthier, G.; Grajcar, L.; Baron, M. H.

    1997-06-01

    A complete set of vibrational spectra, obtained from several spectroscopic techniques, i.e. neutron inelastic scattering (NIS), Raman scattering and infrared absorption (IR), has been used in order to assign the vibrational modes of pyrimidine bases (uracil, thymine, cytosine) and their N-deuterated species. The spectra of solid and aqueous samples allowed us to analyse the effects of hydrogen bonding in crystal and in solution. In a first step, to assign the observed vibrational modes, we have resorted to a harmonic quantum mechanical force field, calculated at the SCF + MP2 level using double-zeta 6-31G and D95V basis sets with non-standard exponents for d-orbital polarisation functions. In order to improve the agreement between the experimental results obtained in condensed phases and the calculated ones based on isolated molecules, the molecular force field has been scaled. In a second step, to estimate the effect of intermolecular interactions on the vibrational dynamics of pyrimidine bases, we have undertaken additional calculations with the density functional theory (DFT) method using B3LYP functionals and polarised 6-31G basis sets. Two theoretical models have been considered: (1) a uracil embedded in a dielectric continuum (ɛ = 78), and (2) a uracil H-bonded to two water molecules (through N1 and N3 atoms).

  17. Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.

    PubMed

    Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S

    2016-04-14

    The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
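The study extrapolates to the complete basis set limit "using a variety of extrapolation procedures". One widely used choice (an assumption here, not necessarily one of the authors' schemes) is the two-point inverse-cubic formula for the correlation energy, E(X) = E_CBS + A·X⁻³ for cardinal numbers X:

```python
def cbs_two_point(e_n, e_m, n, m):
    """Two-point inverse-cubic CBS extrapolation:
    assumes E(X) = E_CBS + A * X**-3 at cardinal numbers X = n, m
    and solves the two equations for E_CBS. A common textbook scheme;
    the energies used below are made-up illustrative numbers."""
    return (n**3 * e_n - m**3 * e_m) / (n**3 - m**3)

# Hypothetical correlation energies (hartree) at QZ (X=4) and 5Z (X=5)
e_cbs = cbs_two_point(e_n=-0.3105, e_m=-0.3152, n=4, m=5)
```

The extrapolated value lies below the largest-basis result, as expected for a monotonically converging correlation energy.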

  18. Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2

    NASA Astrophysics Data System (ADS)

    Solomonik, Victor G.; Smirnov, Alexander N.; Navarkin, Ilya S.

    2016-04-01

    The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.

  19. A fast, automated, polynomial-based cosmic ray spike-removal method for the high-throughput processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2013-04-01

    Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogeneous spectra. Some inhomogeneous spectral data sets can also be accommodated, by replacing excessively modified spectra with the originals and removing their spikes with a median filter instead, but caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
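The core of the described procedure (a polynomial fit across the stack of spectra at each wavenumber, with clamping at 3× the noise standard deviation above the fit) can be sketched as follows. The noise estimate and the demo data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def despike(spectra, order=3, k=3.0):
    """Polynomial spike removal in the spirit of the described method:
    at every wavenumber, fit an order-`order` polynomial through the
    stack of (relatively homogeneous) spectra and clamp pixels more
    than k residual standard deviations ABOVE the fit (cosmic spikes
    are positive). `spectra` has shape (n_spectra, n_pixels)."""
    spectra = np.asarray(spectra, dtype=float)
    n, _ = spectra.shape
    x = np.arange(n)
    out = spectra.copy()
    for j in range(spectra.shape[1]):              # loop over wavenumbers
        coeffs = np.polyfit(x, spectra[:, j], order)
        fit = np.polyval(coeffs, x)
        resid = spectra[:, j] - fit
        thresh = fit + k * np.std(resid)           # simple noise estimate
        high = spectra[:, j] > thresh
        out[high, j] = thresh[high]                # reduce to threshold
    return out

# Synthetic demo: identical smooth spectra plus one cosmic-ray spike
wav = np.sin(np.arange(120) / 10.0)
raw = np.tile(wav, (20, 1))
raw[5, 50] += 100.0            # cosmic ray hit in spectrum 5, pixel 50
clean = despike(raw)
```

The spike is pulled down toward the polynomial fit while untouched columns pass through unchanged, which is what makes the approach safe for high-throughput batch processing.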

  20. Rogue taxa phenomenon: a biological companion to simulation analysis

    PubMed Central

    Westover, Kristi M.; Rusinko, Joseph P.; Hoin, Jon; Neal, Matthew

    2013-01-01

    To provide a baseline biological comparison to simulation study predictions about the frequency of rogue taxa effects, we evaluated the frequency of a rogue taxa effect using viral data sets which differed in diversity. Using a quartet-tree framework, we measured the frequency of a rogue taxa effect in three data sets of increasing genetic variability (within viral serotype, between viral serotype, and between viral family) to test whether the rogue taxa effect was correlated with the mean sequence diversity of the respective data sets. We found a slight increase in the percentage of rogues as nucleotide diversity increased. Even though the number of rogues increased with diversity, the distribution of the types of rogues (friendly, crazy, or evil) did not depend on the diversity and in the case of the order-level data set the net rogue effect was slightly positive. This study, assessing frequency of the rogue taxa effect using biological data, indicated that simulation studies may over-predict the prevalence of the rogue taxa effect. Further investigations are necessary to understand which types of data sets are susceptible to a negative rogue effect and thus merit the removal of taxa from large phylogenetic reconstructions. PMID:23707704

  1. Rogue taxa phenomenon: a biological companion to simulation analysis.

    PubMed

    Westover, Kristi M; Rusinko, Joseph P; Hoin, Jon; Neal, Matthew

    2013-10-01

    To provide a baseline biological comparison to simulation study predictions about the frequency of rogue taxa effects, we evaluated the frequency of a rogue taxa effect using viral data sets which differed in diversity. Using a quartet-tree framework, we measured the frequency of a rogue taxa effect in three data sets of increasing genetic variability (within viral serotype, between viral serotype, and between viral family) to test whether the rogue taxa effect was correlated with the mean sequence diversity of the respective data sets. We found a slight increase in the percentage of rogues as nucleotide diversity increased. Even though the number of rogues increased with diversity, the distribution of the types of rogues (friendly, crazy, or evil) did not depend on the diversity and in the case of the order-level data set the net rogue effect was slightly positive. This study, assessing frequency of the rogue taxa effect using biological data, indicated that simulation studies may over-predict the prevalence of the rogue taxa effect. Further investigations are necessary to understand which types of data sets are susceptible to a negative rogue effect and thus merit the removal of taxa from large phylogenetic reconstructions. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. The Life Threatened Child and the Life Enhancing Clown: Towards a Model of Therapeutic Clowning

    PubMed Central

    Gryski, Camilla

    2008-01-01

    In the last decade, there has been a rapid growth in the presence of clowns in hospitals, particularly in pediatric settings. The proliferation of clowns in health care settings has resulted in varying levels of professionalism and accountability. For this reason, there is a need to examine various forms of clowning, in particular therapeutic clowning in pediatric settings. The purpose of this article is to address what therapeutic clowning is and to describe the extent to which it can provide a complementary form of health care. In an attempt to apply theory to practice, the article will draw upon the experiences of a therapeutic clown within a pediatric setting while providing a historical and theoretical account of how clowns came to be in hospitals. Toward this end, a proposed model of therapeutic clowning will be offered which can be adapted for a variety of settings where children require specialized forms of play in order to enhance their coping, development and adjustment to life changes. Finally, current research on clowning in children's hospitals will be reviewed including a summary of findings from surveys administered at the Hospital for Sick Children. PMID:18317544

  3. Does choice of angular velocity affect pain level during isokinetic strength testing of knee osteoarthritis patients?

    PubMed

    Almosnino, S; Brandon, S C E; Sled, E A

    2012-12-01

    Thigh musculature strength assessment in individuals with knee osteoarthritis is routinely performed in rehabilitative settings. A factor that may influence results is pain experienced during testing. To assess whether pain experienced during isokinetic testing in individuals with knee osteoarthritis is dependent on the angular velocity prescribed. Experimental, repeated measures. University laboratory. Thirty-five individuals (19 women, 16 men) with tibiofemoral osteoarthritis. Participants performed three randomized sets of five maximal concentric extension-flexion repetitions at 60°/s, 90°/s and 120°/s. Pain intensity was measured immediately after the completion of each set. Strength outcomes for each set were the average peak moment. Across gender, pain level was not significantly affected by testing velocity (P = 0.18, ηp² = 0.05). There was a trend for women to report more pain than men across all testing velocities; however, this comparison did not reach statistical significance (P = 0.18, ηp² = 0.05). There was a significant main effect of testing velocity on strength, with the highest level attained at 60°/s. However, no difference in strength was noted when testing was performed at 90°/s or 120°/s. A large variation in pain scores within and across conditions and gender was noted, suggesting that at the current stage: 1) isokinetic angular velocity prescription be performed on an individual patient basis; and 2) improvements in the manner in which pain is recorded are needed in order to reduce the variations in pain scores. Individual prescription of angular velocity may be necessary for optimal strength output and reduction of pain during effort exertion in this patient population.

  4. Setting stroke research priorities: The consumer perspective.

    PubMed

    Sangvatanakul, Pukkaporn; Hillege, Sharon; Lalor, Erin; Levi, Christopher; Hill, Kelvin; Middleton, Sandy

    2010-12-01

    To test a method of engaging consumers in research priority-setting using a quantitative approach and to determine consumer views on stroke research priorities for clinical practice recommendations with lower levels of evidence (Level III and Level IV) and expert consensus opinion as published in the Australian stroke clinical practice guidelines. The design was a survey of an urban community sample of eighteen stroke survivors (n = 12) and carers (n = 6) who were members of the "Working Aged Group - Stroke" (WAGS) consumer support group. Phase I: Participants were asked whether recommendations were "worth" researching ("yes" or "no"); and, if researched, what potential impact they likely would have on patient outcomes. Phase II: Participants were asked to rank recommendations rated by more than 75% of participants in Phase I as "worth" researching and "highly likely" or "likely" to generate research with a significant effect on patient outcomes (n = 13) in order of priority for future stroke research. All recommendations were rated by at least half (n = 9, 50%) of participants as "worth" researching. The majority (67% to 100%) rated all recommendations as "highly likely" or "likely" that research would have a significant effect on patient outcomes. Thirteen out of 20 recommendations were ranked for their research priorities. Recommendations under the topic heading Getting to hospital were ranked highest and Organization of care and Living with stroke were ranked as a lower priority for research. This study provided an example of how to involve consumers in research priority setting successfully using a quantitative approach. Stroke research priorities from the consumer perspective were different from those of health professionals, as published in the literature; thus, consumer opinion should be considered when setting research priorities. Copyright © 2010 Society for Vascular Nursing, Inc. Published by Mosby, Inc. All rights reserved.

  5. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.
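The stochastic DO level-set machinery is well beyond a short sketch, but the underlying minimum-arrival-time idea can be illustrated with a much simpler discrete stand-in: Dijkstra's algorithm on a grid whose per-cell speeds fold in the effect of currents. Everything below (the speed field, the 4-neighbour stencil, the averaged edge cost) is an illustrative assumption, not the paper's PDE-based method.

```python
import heapq

def min_arrival_times(speed, start):
    """Dijkstra on a grid as a discrete stand-in for a minimum-arrival-
    time computation: speed[i][j] > 0 is the effective speed in each
    cell (currents folded in); the cost of moving between neighbouring
    cells is the average of their crossing times."""
    n, m = len(speed), len(speed[0])
    INF = float("inf")
    t = [[INF] * m for _ in range(n)]
    t[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue                      # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                cost = 0.5 * (1.0 / speed[i][j] + 1.0 / speed[a][b])
                if d + cost < t[a][b]:
                    t[a][b] = d + cost
                    heapq.heappush(pq, (d + cost, (a, b)))
    return t

# A fast "jet" along row 0 makes a detour through it cheaper than
# crossing the slow interior directly
field = [[4.0, 4.0, 4.0, 4.0],
         [1.0, 1.0, 1.0, 1.0],
         [1.0, 1.0, 1.0, 1.0]]
times = min_arrival_times(field, (0, 0))
```

Backtracking along steepest-descent of the arrival-time table recovers the optimal path, loosely analogous to the exact particle-backtracking equation used in the paper.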

  6. Precision assessment of the orthometric heights determination in northern part of Algeria by combining the GPS data and the local geoid model

    NASA Astrophysics Data System (ADS)

    Benahmed Daho, Sid Ahmed

    2010-02-01

    The main purpose of this article is to discuss the use of GPS positioning together with a gravimetrically determined geoid for deriving orthometric heights in the North of Algeria, for which a limited number of GPS stations with known orthometric heights are available, and, at the same time, to check the possibility of substituting classical spirit levelling. For this work, 247 GPS stations which are homogeneously distributed and collected from the international TYRGEONET project, as well as the local GPS/Levelling surveys, have been used. The GPS/Levelling geoidal heights are obtained by connecting the points to the levelling network, while gravimetric geoidal heights were interpolated from the geoid model computed by the Geodetic Laboratory of the National Centre of Spatial Techniques from gravity data supplied by BGI. In order to minimise the discrepancies, systematic errors and datum inconsistencies between the available height data sets, we have tested two parametric corrector surface models: a four-parameter transformation and a third-order polynomial model are used to find an adequate functional representation of the correction that should be applied to the gravimetric geoid. The comparisons based on these GPS campaigns show that a good fit between the geoid model and GPS/levelling data is reached when the third-order polynomial is used as the corrector surface, and that orthometric heights can be derived from GPS observations with an accuracy acceptable for low-order levelling network densification. In addition, the adopted methodology has also been applied to the altimetric auscultation of a storage reservoir situated 40 km from the town of Oran. The comparison between the computed orthometric heights and the observed ones confirmed that levelling by GPS is an attractive alternative for this auscultation.
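One of the two corrector surfaces mentioned above, the four-parameter transformation, is classically written as dN = a₀ + a₁·cos φ·cos λ + a₂·cos φ·sin λ + a₃·sin φ and fitted by least squares to the GPS/levelling-minus-gravimetric-geoid residuals. The sketch below uses that standard form with synthetic coordinates and values (the region bounds and parameters are illustrative assumptions, not the study's data).

```python
import numpy as np

def fit_four_parameter(phi, lam, dN):
    """Least-squares fit of the classical four-parameter corrector
    surface dN = a0 + a1*cos(phi)*cos(lam) + a2*cos(phi)*sin(lam)
    + a3*sin(phi); angles in radians. Returns the parameters and the
    post-fit residuals."""
    A = np.column_stack([
        np.ones_like(phi),
        np.cos(phi) * np.cos(lam),
        np.cos(phi) * np.sin(lam),
        np.sin(phi),
    ])
    coeffs, *_ = np.linalg.lstsq(A, dN, rcond=None)
    return coeffs, dN - A @ coeffs

# Synthetic check: recover known parameters from noise-free residuals
rng = np.random.default_rng(0)
phi = np.radians(rng.uniform(32, 37, 50))   # roughly northern Algeria
lam = np.radians(rng.uniform(-2, 8, 50))
true = np.array([0.5, -0.3, 0.2, 0.4])
dN = 0.5 - 0.3*np.cos(phi)*np.cos(lam) + 0.2*np.cos(phi)*np.sin(lam) + 0.4*np.sin(phi)
a, resid = fit_four_parameter(phi, lam, dN)
```

In practice the choice between this model and a higher-order polynomial is judged, as in the study, by the size and structure of the post-fit residuals at the GPS/levelling control points.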

  7. Deriving Laws from Ordering Relations

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2003-01-01

    It took much effort in the early days of non-Euclidean geometry to break away from the mindset that all spaces are flat and that two distinct parallel lines do not cross. Up to that point, all that was known was Euclidean geometry, and it was difficult to imagine anything else. We have suffered a similar handicap brought on by the enormous relevance of Boolean algebra to the problems of our age: logic and set theory. Previously, I demonstrated that the algebra of questions is not Boolean, but rather is described by the free distributive algebra. To get to this stage took much effort, as many obstacles, most of them self-imposed, had to be overcome. As Boolean algebras were all I had ever known, it was almost impossible for me to imagine working with an algebra where elements do not have complements. With this realization, it became very clear that the sum and product rules of probability theory at the most basic level had absolutely nothing to do with the Boolean algebra of logical statements. Instead, a measure of degree of inclusion can be invented for many different partially ordered sets, and the sum and product rules fall out of the associativity and distributivity of the algebra. To reinforce this very important idea, this paper will go over how these constructions are made, while focusing on the underlying assumptions. I will derive the sum and product rules for a distributive lattice in general and demonstrate how this leads to probability theory on the Boolean lattice and is related to the calculus of quantum mechanical amplitudes on the partially ordered set of experimental setups. I will also discuss the rules that can be derived from modular lattices and their relevance to the cross-ratio of projective geometry.
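The sum and product rules referred to above can be stated compactly. For a valuation m (a "degree of inclusion") on a distributive lattice, consistency with the join and meet forces the symmetric sum rule, and on the Boolean lattice with conditional measures the familiar product rule of probability theory appears; this is a sketch of the general shape of the result, not the paper's full derivation:

```latex
% Sum rule for a valuation m on a distributive lattice
m(x \vee y) + m(x \wedge y) = m(x) + m(y)

% Specialized to the Boolean lattice with m(\,\cdot \mid z) = p(\,\cdot \mid z),
% distributivity yields the product rule
p(x \wedge y \mid z) = p(x \mid z)\, p(y \mid x \wedge z)
```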

  8. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
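The second step of the methodology, comparing calibrations of different parameter subsets by AIC, can be sketched with the standard least-squares form of the criterion. The candidate subsets, residual sums of squares, and sample size below are hypothetical numbers, not the oilseed rape results.

```python
import math

def aic_least_squares(rss, n_obs, k_params):
    """AIC for a least-squares calibration with Gaussian errors:
    AIC = n*ln(RSS/n) + 2k (additive constants dropped). Lower is
    better; the 2k term penalizes re-estimating extra parameters."""
    return n_obs * math.log(rss / n_obs) + 2 * k_params

# Hypothetical calibration outcomes: the residual sum of squares
# achieved when re-estimating each candidate subset of parameters
candidates = {
    ("p1",): 12.0,
    ("p1", "p2"): 6.0,
    ("p1", "p2", "p3"): 5.8,  # barely better fit, one extra parameter
}
n_obs = 40
best = min(candidates,
           key=lambda s: aic_least_squares(candidates[s], n_obs, len(s)))
```

Here the third parameter buys almost no fit improvement, so AIC selects the two-parameter calibration, mirroring how the method trims the recalibration down to a small influential subset.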

  9. The electronic structure of vanadium monochloride cation (VCl+): Tackling the complexities of transition metal species

    NASA Astrophysics Data System (ADS)

    DeYonker, Nathan J.; Halfen, DeWayne T.; Allen, Wesley D.; Ziurys, Lucy M.

    2014-11-01

    Six electronic states (X 4Σ-, A 4Π, B 4Δ, 2Φ, 2Δ, 2Σ+) of the vanadium monochloride cation (VCl+) are described using large basis set coupled cluster theory. For the two lowest quartet states (X 4Σ- and A 4Π), a focal point analysis (FPA) approach was used that conjoined a correlation-consistent family of basis sets up to aug-cc-pwCV5Z-DK with high-order coupled cluster theory through pentuple (CCSDTQP) excitations. FPA adiabatic excitation energies (T0) and spectroscopic constants (re, r0, Be, B0, D̄e, He, ωe, ν0, αe, ωexe) were extrapolated to the valence complete basis set Douglas-Kroll (DK) aug-cc-pV∞Z-DK CCSDT level of theory, and additional treatments accounted for higher-order valence electron correlation, core correlation, and spin-orbit coupling. Due to the delicate interplay between dynamical and static electronic correlation, single reference coupled cluster theory is able to provide the correct ground electronic state (X 4Σ-), while multireference configuration interaction theory cannot. Perturbations from the first- and second-order spin-orbit coupling of low-lying states with quartet spin multiplicity reveal an immensely complex rotational spectrum relative to the isovalent species VO, VS, and TiCl. Computational data on the doublet manifold suggest that the lowest-lying doublet state (2Γ) has a Te of ~11 200 cm-1. Overall, this study shows that laboratory and theoretical rotational spectroscopists must work more closely in tandem to better understand the bonding and structure of molecules containing transition metals.

  10. Setting-related influences on physical inactivity of older adults in residential care settings: a review.

    PubMed

    Douma, Johanna G; Volkers, Karin M; Engels, Gwenda; Sonneveld, Marieke H; Goossens, Richard H M; Scherder, Erik J A

    2017-04-28

    Despite the detrimental effects of physical inactivity for older adults, aged residents of residential care settings in particular may spend much of their time inactive. This may be partly due to their poorer physical condition; however, other, setting-related factors may also influence the amount of inactivity. The aim of this review was to identify setting-related factors (including the social and physical environment) that may contribute to the amount of older adults' physical inactivity in a wide range of residential care settings (e.g., nursing homes, assisted care facilities). Five databases were systematically searched for eligible studies, using the key words 'inactivity', 'care facilities', and 'older adults', including their synonyms and MeSH terms. Additional studies were selected from references used in articles included from the search. Based on specific eligibility criteria, a total of 12 studies were included. Quality of the included studies was assessed using the Mixed Methods Appraisal Tool (MMAT). Based on studies using different methodologies (e.g., interviews and observations), and of different quality (assessed quality range: 25-100%), we report several aspects related to the physical environment and caregivers. Factors of the physical environment that may be related to physical inactivity included, among others, the environment's compatibility with the abilities of a resident, the presence of equipment, the accessibility, security, comfort, and aesthetics of the environment/corridors, and possibly the presence of some specific areas. Caregiver-related factors included staffing levels, the available time, and the amount and type of care being provided. Inactivity levels in residential care settings may be reduced by improving several features of the physical environment and with the help of caregivers. Intervention studies could be performed to gain more insight into the causal effects of improving setting-related factors on the physical inactivity of aged residents.

  11. Low back pain in 17 countries, a Rasch analysis of the ICF core set for low back pain.

    PubMed

    Røe, Cecilie; Bautz-Holter, Erik; Cieza, Alarcos

    2013-03-01

    Previous studies indicate that a worldwide measurement tool may be developed based on the International Classification of Functioning, Disability and Health (ICF) Core Sets for chronic conditions. The aim of the present study was to explore the possibility of constructing a cross-cultural measurement of functioning for patients with low back pain (LBP) on the basis of the Comprehensive ICF Core Set for LBP and to evaluate the properties of the ICF Core Set. The Comprehensive ICF Core Set for LBP was scored by health professionals for 972 patients with LBP from 17 countries. Qualifier levels of the categories, invariance across age, sex and countries, construct validity and the ordering of the categories in the components of body function, body structure, activities and participation were explored by Rasch analysis. The item-trait χ2-statistics showed that the 53 categories in the ICF Core Set for LBP did not fit the Rasch model (P<0.001). The main challenge was the invariance in the responses according to country. Analysis of the four countries with the largest sample sizes indicated that the data from Germany fit the Rasch model, and that the data from Norway, Serbia and Kuwait also fit the model for the components of body functions and activities and participation. The components of body functions and of activities and participation had negative mean locations, -2.19 (SD 1.19) and -2.98 (SD 1.07), respectively. The negative location indicates that the ICF Core Set reflects patients with a lower level of function than the present patient sample. The present results indicate that it may be possible to construct a clinical measure of function on the basis of the Comprehensive ICF Core Set for LBP by calculating country-specific scores before pooling the data.
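The Rasch model underlying this kind of analysis, in its simplest dichotomous form, gives the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty, both measured in logits. The sketch below is illustrative only (the study used polytomous qualifier levels, which require threshold parameters); it shows why a mean category location of -2.98 implies that categories sit "below" an average patient: a person of average ability (theta = 0) endorses such a category with very high probability.

```python
import math

def rasch_probability(theta, b):
    # Dichotomous Rasch model: P(endorse) = 1 / (1 + exp(-(theta - b)))
    # theta: person ability (logits), b: item/category difficulty (logits)
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A category located at the reported mean of -2.98 logits is endorsed
# almost surely by a person of average ability (theta = 0), i.e. the
# instrument targets patients with lower function than this sample.
print(round(rasch_probability(0.0, -2.98), 3))
print(rasch_probability(0.0, 0.0))  # perfectly targeted item: 0.5
```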

  12. Quantum Stark broadening of Ar XV lines. Strong collision and quadrupolar potential contributions

    NASA Astrophysics Data System (ADS)

    Elabidi, H.; Sahal-Bréchot, S.; Dimitrijević, M. S.

    2014-10-01

    We present in this paper electron impact broadening parameters for six Ar XV lines, calculated using our quantum mechanical formalism and the semiclassical perturbation formalism. Additionally, our calculations of the corresponding atomic structure data (energy levels and oscillator strengths) and collision strengths are given as well. The lines considered here are divided into two sets: a first set of four lines involving the ground level, 1s22s2 1S0-1s22snp 1P1o where 2⩽n⩽5, and a second set of two lines involving excited levels, 1s22s2p 1P1o-1s22s3s 1S0 and 1s22s2p 3P0o-1s22s3s 3S1. An extensive comparison between the quantum and the semiclassical results was performed in order to analyze the reason for differences between them of up to a factor of two. It is shown that the difference between the two results may be due to the evaluation of strong collision contributions in the semiclassical formalism. Except for a few semiclassical results, the present results are the first to be published. After the recent discovery of the far-UV lines of Ar VII in the spectra of very hot central stars of planetary nebulae and white dwarfs, the present results (and perhaps further ones) can also be used for the corresponding future spectral analysis.

  13. Managing in-hospital quality improvement: An importance-performance analysis to set priorities for ST-elevation myocardial infarction care.

    PubMed

    Aeyels, Daan; Seys, Deborah; Sinnaeve, Peter R; Claeys, Marc J; Gevaert, Sofie; Schoors, Danny; Sermeus, Walter; Panella, Massimiliano; Bruyneel, Luk; Vanhaecht, Kris

    2018-02-01

    A focus on specific priorities increases the success rate of quality improvement efforts for broad and complex care processes. Importance-performance analysis presents a possible approach to set priorities around which to design and implement effective quality improvement initiatives. Persistent variation in hospital performance makes ST-elevation myocardial infarction care relevant to consider for importance-performance analysis. The purpose of this study was to identify quality improvement priorities in ST-elevation myocardial infarction care. Importance and performance levels of ST-elevation myocardial infarction key interventions were combined in an importance-performance analysis. Importance levels were defined by content validity indices for 23 ST-elevation myocardial infarction key interventions from a multidisciplinary RAND Delphi survey. Performance levels were determined by a structured review of 300 patient records in 15 acute hospitals. The significance of between-hospital variation was determined by a Kruskal-Wallis test. A performance heat map allowed for hospital-specific priority setting. Seven key interventions were each rated as an overall improvement priority. Priority key interventions related to risk assessment, timely reperfusion by percutaneous coronary intervention, and secondary prevention. Between-hospital performance varied significantly for the majority of key interventions. The type and number of priorities varied strongly across hospitals. Guideline adherence in ST-elevation myocardial infarction care is low, and improvement priorities vary between hospitals. Importance-performance analysis helps clinicians and management demarcate the nature, number and order of improvement priorities. By offering a tailored improvement focus, this methodology makes improvement efforts more specific and achievable.
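The quadrant logic of importance-performance analysis can be sketched as follows. The interventions and scores below are invented for illustration (the study's actual key interventions and content validity indices are not reproduced here); each item is classified by comparing its importance and performance against the respective sample means, and items that are highly important but underperforming become the improvement priorities.

```python
# Illustrative (importance, performance) pairs on a 0-1 scale.
interventions = {
    "risk assessment": (0.90, 0.40),
    "timely reperfusion (PCI)": (0.95, 0.55),
    "secondary prevention": (0.85, 0.50),
    "patient education": (0.60, 0.80),
}

mean_imp = sum(i for i, _ in interventions.values()) / len(interventions)
mean_perf = sum(p for _, p in interventions.values()) / len(interventions)

def quadrant(importance, performance):
    # Classic IPA quadrant labels, split at the sample means.
    if importance >= mean_imp and performance < mean_perf:
        return "concentrate here"       # high importance, low performance
    if importance >= mean_imp:
        return "keep up the good work"  # high importance, high performance
    if performance < mean_perf:
        return "low priority"           # low importance, low performance
    return "possible overkill"          # low importance, high performance

priorities = {name: quadrant(i, p) for name, (i, p) in interventions.items()}
print(priorities)
```

Rebuilding the table per hospital (rather than pooled) reproduces the paper's observation that the type and number of priorities vary across sites.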

  14. An integrated brain-behavior model for working memory.

    PubMed

    Moser, D A; Doucet, G E; Ing, A; Dima, D; Schumann, G; Bilder, R M; Frangou, S

    2017-12-05

    Working memory (WM) is a central construct in cognitive neuroscience because it comprises the mechanisms of active information maintenance and cognitive control that underpin most complex cognitive behavior. Individual variation in WM has been associated with multiple behavioral and health features, including demographic characteristics, cognitive and physical traits, and lifestyle choices. In this context, we used sparse canonical correlation analyses (sCCAs) to determine the covariation between brain imaging metrics of WM-network activation and connectivity and nonimaging measures relating to sensorimotor processing, affective and nonaffective cognition, mental health and personality, physical health, and lifestyle choices in 823 healthy participants from the Human Connectome Project. We conducted sCCAs at two levels: a global level, testing the overall association between the entire imaging and behavioral-health data sets; and a modular level, testing associations between subsets of the two data sets. The behavioral-health and neuroimaging data sets showed significant interdependency. Variables with a positive correlation to the neuroimaging variate represented higher physical endurance and fluid intelligence as well as better function in multiple higher-order cognitive domains. Negatively correlated variables represented indicators of suboptimal cardiovascular and metabolic control and lifestyle choices such as alcohol and nicotine use. These results underscore the importance of accounting for behavioral-health factors in neuroimaging studies of WM and provide a neuroscience-informed framework for personalized and public health interventions to promote and maintain the integrity of the WM network. Molecular Psychiatry advance online publication, 5 December 2017; doi:10.1038/mp.2017.247.

  15. DHM simulation in virtual environments: a case-study on control room design.

    PubMed

    Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G

    2012-01-01

    This paper presents the workflow developed for the application of serious games to the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room within a participatory design process. Our main concerns were 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed according to ergonomics standards. Using the Unity3D platform to build the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between end users, managers, and the human factors team.

  16. Semantic memory: a feature-based analysis and new norms for Italian.

    PubMed

    Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola

    2013-06-01

    Semantic norms for properties produced by native speakers are valuable tools for researchers interested in the structure of semantic memory and in category-specific semantic deficits in individuals following brain damage. The aims of this study were threefold. First, we sought to extend existing semantic norms by adopting an empirical approach to category (Exp. 1) and concept (Exp. 2) selection, in order to obtain a more representative set of semantic memory features. Second, we extensively outlined a new set of semantic production norms collected from Italian native speakers for 120 artifactual and natural basic-level concepts, using numerous measures and statistics following a feature-listing task (Exp. 3b). Finally, we aimed to create a new publicly accessible database, since only a few existing databases are publicly available online.

  17. On the Performance Evaluation of a MIMO-WCDMA Transmission Architecture for Building Management Systems.

    PubMed

    Tsampasis, Eleftherios; Gkonis, Panagiotis K; Trakadas, Panagiotis; Zahariadis, Theodore

    2018-01-08

    The goal of this study was to investigate the performance of a realistic wireless sensor node deployment intended to support modern building management systems (BMSs). A three-floor building configuration is considered, where each node is equipped with a multi-antenna system while a central base station (BS) collects and processes all received information. The BS is also equipped with multiple antennas; hence, a multiple input-multiple output (MIMO) system is formulated. Due to the multiple reflections during transmission inside the building, a wideband code division multiple access (WCDMA) physical layer protocol has been considered, which has already been adopted for third-generation (3G) mobile networks. Results are presented for various MIMO configurations, where the mean transmission power per node is considered as an output metric for a specific signal-to-noise ratio (SNR) requirement and number of resolvable multipath components. In the first set of presented results, the effects of multiple access interference on overall transmission power are highlighted. As the number of mobile nodes per floor or the requested transmission rate increases, MIMO systems of a higher order should be deployed in order to maintain transmission power at adequate levels. In the second set of results, a comparison is performed between transmission in diversity combining mode and in spatial multiplexing mode, which clearly indicates that the former is the most appropriate solution for indoor communications.

  18. Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita; Jordan, Harry F.

    1989-01-01

    It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
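The Markowitz criterion used to rank pivot candidates can be sketched minimally as follows, assuming the sparsity pattern is stored as a set of (row, column) index pairs (the paper's table structures and tree search are not reproduced). The Markowitz number (r_i - 1)(c_j - 1) of a diagonal candidate, where r_i and c_j count nonzeros in its row and column, bounds the fill-in that pivot can create, so candidates are ranked in increasing order before compatibility checks.

```python
def markowitz_candidates(nonzeros, diag):
    # Count nonzeros per row and per column of the sparsity pattern.
    row_counts, col_counts = {}, {}
    for i, j in nonzeros:
        row_counts[i] = row_counts.get(i, 0) + 1
        col_counts[j] = col_counts.get(j, 0) + 1
    # Rank diagonal pivot candidates by Markowitz number (r_i-1)*(c_j-1),
    # an upper bound on the fill-in created by eliminating that pivot.
    return sorted(
        ((row_counts[d] - 1) * (col_counts[d] - 1), d) for d in diag
    )

# 4x4 example pattern: full diagonal plus a few off-diagonal entries.
nz = {(0, 0), (1, 1), (2, 2), (3, 3), (0, 1), (1, 0), (2, 3), (0, 3)}
print(markowitz_candidates(nz, diag=[0, 1, 2, 3]))
```

Here pivots 2 and 3 have Markowitz number zero (they can create no fill), so a compatible low-fill set would start from them, which is the spirit of the elimination-set selection by minimum Markowitz sum described above.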

  19. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    NASA Astrophysics Data System (ADS)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations to represent kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CrunchTope, TOUGHREACT, MIN3P, and The Geochemist's Workbench. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
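The kinetic fractionation that such RTM formulations must reproduce is commonly summarized by a Rayleigh distillation model. The sketch below uses an assumed fractionation factor (alpha = 0.975, i.e. an enrichment factor of about -25 permil, a typical magnitude for SRB but not a value taken from the benchmark): as sulfate is consumed, the residual sulfate pool becomes progressively enriched in 34S.

```python
import math

def delta34S_residual(delta0, f, alpha):
    # Rayleigh model for the residual sulfate pool.
    # delta0: initial d34S (permil); f: fraction of sulfate remaining
    # (0 < f <= 1); alpha: kinetic fractionation factor (alpha < 1 means
    # SRB preferentially reduce the light 32S-sulfate isotopologue).
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

delta0 = 0.0   # assumed initial d34S of dissolved sulfate (permil)
alpha = 0.975  # assumed fractionation factor (epsilon ~ -25 permil)
for f in (1.0, 0.5, 0.1):
    print(f, round(delta34S_residual(delta0, f, alpha), 2))
```

Mentally checking the endpoints: at f = 1 no fractionation has accumulated (d34S = delta0), while at f = 0.1 the residual sulfate is enriched by roughly 59 permil, the kind of isotopic shift used as a proxy for the onset of sulfate reduction.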

  20. Students' learning as the focus for shared involvement between universities and clinical practice: a didactic model for postgraduate degree projects.

    PubMed

    Öhlén, J; Berg, L; Björk Brämberg, E; Engström, Å; German Millberg, L; Höglund, I; Jacobsson, C; Lepp, M; Lidén, E; Lindström, I; Petzäll, K; Söderberg, S; Wijk, H

    2012-10-01

    In an academic programme, completion of a postgraduate degree project could be a significant means of promoting student learning in evidence- and experience-based practice. In specialist nursing education, which through the European Bologna process would be raised to the master's level, there is no tradition of including a postgraduate degree project. The aim was to develop a didactic model for specialist nursing students' postgraduate degree projects within the second cycle of higher education (master's level), with a specific focus on nurturing shared involvement between universities and healthcare settings. The study used a participatory action research and theory-generating design founded on practical try-outs. The 3-year project included five Swedish universities and related healthcare settings. A series of activities was performed and a number of data sources secured. Constant comparative analysis was applied. A didactic model is proposed for postgraduate degree projects in specialist nursing education aimed at nurturing shared involvement between universities and healthcare settings. The focus of the model is student learning, in order to prepare students for participation as specialist nurses in clinical knowledge development. The model was developed for specialist nursing education, but it is general and could be applicable to various education programmes.
