Sample records for level set approach

  1. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.
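
    To make the mechanism concrete, the following minimal NumPy sketch evolves a 2D level set with a constant normal speed plus a diffusive regularization term that nudges |∇φ| back toward 1, in the spirit of distance-regularized level set evolution. It illustrates the general idea only; the paper's Galerkin finite element formulation, the RDX material model, and the specific regularization energy are not reproduced, and all parameter values below are arbitrary.

    ```python
    import numpy as np

    def evolve(phi, speed, h=1.0, dt=0.1, mu=0.2, steps=200):
        """Advance a level set with normal speed `speed` plus a diffusive
        regularization that keeps |grad(phi)| close to 1, so the signed-distance
        character is maintained without a separate reinitialization step
        (illustrative only; not the paper's finite element scheme)."""
        eps = 1e-8
        for _ in range(steps):
            gy, gx = np.gradient(phi, h)                 # central differences
            mag = np.sqrt(gx**2 + gy**2) + eps
            # regularization flux: (1 - 1/|grad phi|) * grad phi
            fx, fy = (1.0 - 1.0 / mag) * gx, (1.0 - 1.0 / mag) * gy
            reg = np.gradient(fx, h, axis=1) + np.gradient(fy, h, axis=0)
            phi = phi + dt * (-speed * mag + mu * reg)   # phi_t + F|grad phi| = 0
        return phi

    # toy example: a circle of radius 10 expanding with unit normal speed
    n = 128
    y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    phi0 = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 10.0
    phi = evolve(phi0, speed=1.0)
    ```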

  2. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    NASA Astrophysics Data System (ADS)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. Our approach introduces a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm, which reduces the influence of noise on each of these algorithms, facilitates level set manipulation, and leads to more robust segmentation. The sensitivity and specificity results confirm the effectiveness of the proposed method for caries detection.
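
    As a rough illustration of the FCM-then-level-set pipeline, the sketch below clusters pixel intensities with a tiny handwritten fuzzy C-means and turns the dominant-cluster membership into a signed-distance initialization for a subsequent level set refinement (the refinement itself is omitted). The cluster count, fuzzifier and membership threshold are placeholder values, not those of the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def fcm_1d(values, n_clusters=2, m=2.0, n_iter=50, seed=0):
        """Tiny fuzzy C-means on scalar intensities (illustrative only)."""
        rng = np.random.default_rng(seed)
        centers = rng.choice(values, n_clusters, replace=False)
        for _ in range(n_iter):
            dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
            u = dist ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
            um = u ** m
            centers = (um * values[:, None]).sum(0) / um.sum(0)
        return u, centers

    def initial_level_set(image, bright_target=True):
        """Build a signed-distance initialization from FCM memberships; a level
        set evolution would then refine this contour (omitted here)."""
        u, centers = fcm_1d(image.ravel().astype(float))
        k = int(np.argmax(centers)) if bright_target else int(np.argmin(centers))
        mask = (u[:, k] > 0.5).reshape(image.shape)
        inside = ndimage.distance_transform_edt(mask)
        outside = ndimage.distance_transform_edt(~mask)
        return inside - outside        # positive inside the candidate region
    ```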

  3. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  4. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay and an infinite support, while the level-set method, which is a front tracking technique, generates a sharp function with a finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence the fire front gets a random character, too. Hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. Actually, when the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump-length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  5. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  6. Level set methods for detonation shock dynamics using high-order finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Grogan, F. C.; Kolev, T. V.

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.

  7. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, benefiting from available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received comparatively little attention from this approach, and it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that uses an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. Such improvements make level set manipulation easier and lead to more robust segmentation in the presence of intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.

  8. Judgmental Standard Setting Using a Cognitive Components Model.

    ERIC Educational Resources Information Center

    McGinty, Dixie; Neel, John H.

    A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…

  9. Shared decision making within goal setting in rehabilitation settings: A systematic review.

    PubMed

    Rose, Alice; Rosewilliam, Sheeba; Soundy, Andrew

    2017-01-01

    The aim was to map out and synthesise literature that considers the extent of shared decision-making (SDM) within goal-setting in rehabilitation settings and to explore participants' views of this approach within goal-setting. Four databases were systematically searched from January 2005 to September 2015. All articles addressing SDM within goal-setting involving adult rehabilitation patients were included. The literature was critically appraised, followed by a thematic synthesis. The search identified 3129 studies, of which 15 articles met the inclusion criteria. Themes that emerged related to methods of SDM within goal-setting, participants' views on SDM, perceived benefits of SDM, barriers and facilitators to using SDM, and suggestions to improve the involvement of patients, resulting in a better process of goal-setting. The literature showed various levels of patient involvement within goal-setting; however, few teams adopted an entirely patient-centred approach. Since the review identified clear value in considering SDM within goal-setting for rehabilitation, further research is required, and practice should consider educating both clinicians and patients about this approach. To enhance the use of SDM within goal-setting in rehabilitation, it is likely that clinicians and patients will require further education on this approach. For clinicians, this could commence during their training at undergraduate level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay, and it is not zero in an infinite domain, while the level-set method, which is a front tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front gets a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. Actually, when the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process emerges to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model emerges to be suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread due to hot-air pre-heating and ember landing, and also the fire overcoming a fire-break zone, which is a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the formula for the rate of spread due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it needs to be subjected to a future validation.
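
    A minimal grid sketch of the central idea, for readers who want to see the mechanics: the sharp burned-area indicator given by the level set is averaged over the probability density of the random particle displacement (here an isotropic Gaussian standing in for turbulent transport), and points ignite once the averaged indicator exceeds a threshold, which is what lets the front jump a firebreak. Fire spotting, the spread-rate model and the full reaction-diffusion coupling are not reproduced, and the kernel choice and threshold are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def effective_indicator(phi, sigma):
        """Average the sharp burned-area indicator (phi <= 0) over the PDF of the
        random particle displacement, modeled here as an isotropic Gaussian of
        standard deviation `sigma` (an assumption standing in for turbulence)."""
        sharp = (phi <= 0.0).astype(float)
        return gaussian_filter(sharp, sigma=sigma)

    def ignite_beyond_front(phi, sigma, threshold=0.5):
        """Mark unburned points as burning once the averaged indicator exceeds a
        threshold, allowing the front to overcome obstacles such as firebreaks
        that a purely sharp level set front could not cross."""
        phi_e = effective_indicator(phi, sigma)
        return (phi_e >= threshold) & (phi > 0.0)
    ```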

  11. Propellant Readiness Level: A Methodological Approach to Propellant Characterization

    NASA Technical Reports Server (NTRS)

    Bossard, John A.; Rhys, Noah O.

    2010-01-01

    A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.

  12. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  13. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  14. An Analysis of Approaches to Goal Setting in Middle Grades Personalized Learning Environments

    ERIC Educational Resources Information Center

    DeMink-Carthew, Jessica; Olofson, Mark W.; LeGeros, Life; Netcoh, Steven; Hennessey, Susan

    2017-01-01

    This study investigated the goal-setting approaches of 11 middle grades teachers during the first year of their implementation of a statewide, personalized learning initiative. As an increasing number of middle level schools explore personalized learning, there is an urgent need for empirical research in this area. Goal setting is a critical…

  15. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  16. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. We thereby combine the advantages of learning-based approaches operating on a point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  17. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    PubMed

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to group genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
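
    The set-level transformation itself is simple, as the sketch below shows: the expression of each set's member genes is averaged to build set-level features, which is the step into which the paper's regulatory-interaction-based sets would be plugged. Variable names and the scikit-learn classifier in the commented usage are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def to_set_level(X, gene_names, gene_sets):
        """Convert a (samples x genes) expression matrix into (samples x sets)
        features by averaging each set's member genes; the sets could come from
        regulatory interactions as in the paper, or from GO/KEGG for comparison."""
        idx = {g: i for i, g in enumerate(gene_names)}
        cols, kept = [], []
        for name, members in gene_sets.items():
            ids = [idx[g] for g in members if g in idx]
            if ids:
                cols.append(X[:, ids].mean(axis=1))
                kept.append(name)
        return np.column_stack(cols), kept

    # hypothetical usage: X, gene_names, y and regulon_sets come from the user's data
    # S, set_names = to_set_level(X, gene_names, regulon_sets)
    # print(cross_val_score(LogisticRegression(max_iter=1000), S, y, cv=5).mean())
    ```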

  18. Level set formulation of two-dimensional Lagrangian vortex detection methods

    NASA Astrophysics Data System (ADS)

    Hadjighasem, Alireza; Haller, George

    2016-10-01

    We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.

  19. Adversarial risk analysis with incomplete information: a level-k approach.

    PubMed

    Rothschild, Casey; McLay, Laura; Guikema, Seth

    2012-07-01

    This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
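
    The sketch below shows plain level-k best-response iteration on a toy simultaneous defend-attack matrix game: a level-0 player randomizes uniformly and a level-k player best-responds to the opponent's level-(k-1) strategy. The payoff matrices are made up, and the article's Bayesian treatment of incomplete information and asynchronous, partially revealed moves is not captured here.

    ```python
    import numpy as np

    def level_k_policies(defender_payoff, attacker_payoff, k_max=3):
        """Iterate level-k best responses for a simple defend-attack matrix game:
        a level-0 player randomizes uniformly, and a level-k player best-responds
        to the opponent's level-(k-1) strategy (point beliefs only; a simplified
        stand-in for the article's Bayesian formulation)."""
        n_d, n_a = defender_payoff.shape
        dfd = np.full(n_d, 1.0 / n_d)          # level-0 defender: uniform
        att = np.full(n_a, 1.0 / n_a)          # level-0 attacker: uniform
        policies = {0: (dfd, att)}
        for k in range(1, k_max + 1):
            prev_dfd, prev_att = policies[k - 1]
            best_d = np.argmax(defender_payoff @ prev_att)   # vs level-(k-1) attacker
            best_a = np.argmax(prev_dfd @ attacker_payoff)   # vs level-(k-1) defender
            policies[k] = (np.eye(n_d)[best_d], np.eye(n_a)[best_a])
        return policies

    # toy 2x2 example: rows = defender options, columns = attacker options
    D = np.array([[0.0, -4.0], [-1.0, -2.0]])
    A = np.array([[0.0,  3.0], [ 1.0,  2.0]])
    print(level_k_policies(D, A, k_max=2))
    ```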

  20. CONVERGENT TRANSCRIPTOMICS AND PROTEOMICS OF ENVIRONMENTAL ENRICHMENT AND COCAINE IDENTIFIES NOVEL THERAPEUTIC STRATEGIES FOR ADDICTION

    PubMed Central

    ZHANG, YAFANG; CROFTON, ELIZABETH J.; FAN, XIUZHEN; LI, DINGGE; KONG, FANPING; SINHA, MALA; LUXON, BRUCE A.; SPRATT, HEIDI M.; LICHTI, CHERYL F.; GREEN, THOMAS A.

    2016-01-01

    Transcriptomic and proteomic approaches have separately proven effective at identifying novel mechanisms affecting addiction-related behavior; however, it is difficult to prioritize the many promising leads from each approach. A convergent secondary analysis of proteomic and transcriptomic results can glean additional information to help prioritize promising leads. The current study is a secondary analysis of the convergence of recently published separate transcriptomic and proteomic analyses of nucleus accumbens (NAc) tissue from rats subjected to environmental enrichment vs. isolation and cocaine self-administration vs. saline. Multiple bioinformatics approaches (e.g. Gene Ontology (GO) analysis, Ingenuity Pathway Analysis (IPA), and Gene Set Enrichment Analysis (GSEA)) were used to interrogate these rich data sets. Although there was little correspondence between mRNA vs. protein at the individual target level, good correspondence was found at the level of gene/protein sets, particularly for the environmental enrichment manipulation. These data identify gene sets where there is a positive relationship between changes in mRNA and protein (e.g. glycolysis, ATP synthesis, translation elongation factor activity, etc.) and gene sets where there is an inverse relationship (e.g. ribosomes, Rho GTPase signaling, protein ubiquitination, etc.). Overall environmental enrichment produced better correspondence than cocaine self-administration. The individual targets contributing to mRNA and protein effects were largely not overlapping. As a whole, these results confirm that robust transcriptomic and proteomic data sets can provide similar results at the gene/protein set level even when there is little correspondence at the individual target level and little overlap in the targets contributing to the effects. PMID:27717806

  1. Long-Duration Environmentally-Adaptive Autonomous Rigorous Naval Systems

    DTIC Science & Technology

    2015-09-30

    equations). The accuracy of the DO level-set equations for solving the governing stochastic level-set reachability fronts was first verified in part by...reachable set contours computed by DO and MC. We see that it is less than the spatial resolution used, indicating our DO solutions are accurate. We solved ...the interior of the sensors’ reachable sets, all the physically impossible trajectories are immediately ruled out. However, this approach is myopic

  2. Preparing mental health professionals for new directions in mental health practice: Evaluating the sensory approaches e-learning training package.

    PubMed

    Meredith, Pamela; Yeates, Harriet; Greaves, Amanda; Taylor, Michelle; Slattery, Maddy; Charters, Michelle; Hill, Melissa

    2018-02-01

    The application of sensory modulation approaches in mental health settings is growing in recognition internationally. However, a number of barriers have been identified as limiting the implementation of the approach, including workplace culture and a lack of accessible and effective sensory approaches training. The aim of this project was to investigate the efficacy of providing this training through a custom-designed e-learning package. Participants in the present study were predominantly nurses and occupational therapists working in mental health settings in Queensland, Australia. Data were collected from 121 participants using an online survey. Significant improvements were found between pre- and post-training in participants' real and perceived levels of knowledge, their perceived levels of confidence, and their attitudes towards using sensory modulation approaches in mental health settings. The findings of the study suggest that the custom-designed sensory approaches e-learning package is an effective, accessible, acceptable, and usable method to train health professionals in sensory modulation approaches. As this study is the first to analyse the efficacy of an e-learning sensory approaches package, the results are considered preliminary, and further investigation is required. © 2017 Australian College of Mental Health Nurses Inc.

  3. Integrating Compact Constraint and Distance Regularization with Level Set for Hepatocellular Carcinoma (HCC) Segmentation on Computed Tomography (CT) Images

    NASA Astrophysics Data System (ADS)

    Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping

    2017-01-01

    This paper presents a variational level set approach to segment lesions with compact shapes on medical images. In this study, we address the problem of segmenting hepatocellular carcinomas, which usually have various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, is applied in this method to describe the compactness of shapes. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than the other two approaches.

  4. Extending fields in a level set method by solving a biharmonic equation

    NASA Astrophysics Data System (ADS)

    Moroney, Timothy J.; Lusmore, Dylan R.; McCue, Scott W.; McElwain, D. L. Sean

    2017-08-01

    We present an approach for computing extensions of velocities or other fields in level set methods by solving a biharmonic equation. The approach differs from other commonly used approaches to velocity extension because it deals with the interface fully implicitly through the level set function. No explicit properties of the interface, such as its location or the velocity on the interface, are required in computing the extension. These features lead to a particularly simple implementation using either a sparse direct solver or a matrix-free conjugate gradient solver. Furthermore, we propose a fast Poisson preconditioner that can be used to accelerate the convergence of the latter. We demonstrate the biharmonic extension on a number of test problems that serve to illustrate its effectiveness at producing smooth and accurate extensions near interfaces. A further feature of the method is the natural way in which it deals with symmetry and periodicity, ensuring through its construction that the extension field also respects these symmetries.
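
    A much-simplified 1D stand-in for the idea is sketched below: values trusted only in a narrow band around the interface (detected through the level set function alone) are extended to the whole grid by minimizing the squared discrete Laplacian, whose normal equations form a discrete biharmonic system. The band width, grid, and boundary treatment are placeholder choices, and the paper's fully implicit formulation, preconditioning, and symmetry handling are not reproduced.

    ```python
    import numpy as np

    def biharmonic_extend_1d(phi, f_near, band, h):
        """Extend a field known only near the interface (|phi| < band) to the
        whole 1D grid by minimizing the squared discrete Laplacian with the
        near-interface nodes held fixed; the normal equations are a discrete
        biharmonic system. Illustrative only."""
        n = phi.size
        L = np.zeros((n, n))                      # 1D Laplacian, one-sided at ends
        for i in range(1, n - 1):
            L[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
        L[0, :2], L[-1, -2:] = [-1.0, 1.0], [1.0, -1.0]
        L /= h * h
        known = np.abs(phi) < band
        free = ~known
        A = L.T @ L                               # SPD once some nodes are clamped
        u = np.where(known, f_near, 0.0)
        rhs = -A[np.ix_(free, known)] @ u[known]
        u[free] = np.linalg.solve(A[np.ix_(free, free)], rhs)
        return u

    # toy: interface at x = 5, extend the near-interface speed (2.0) everywhere
    x = np.linspace(0.0, 10.0, 101)
    phi = x - 5.0
    speed = np.where(np.abs(phi) < 0.15, 2.0, 0.0)
    print(biharmonic_extend_1d(phi, speed, band=0.15, h=x[1] - x[0]).round(3))
    ```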

  5. Graphical Methods for Reducing, Visualizing and Analyzing Large Data Sets Using Hierarchical Terminologies

    PubMed Central

    Jing, Xia; Cimino, James J.

    2011-01-01

    Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds) as well as contributions of child level nodes to parent level nodes. Results: We found that our methods can reduce large data sets to manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach to reduce size and complexity of large data sets and to simplify visualization. This approach can be applied to any data sets that are coded with hierarchical terminologies. PMID:22195119
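
    The reduction step can be pictured with a toy hierarchy: usage counts roll up from child to parent terms, and a node is kept only if it passes an absolute-count threshold and contributes a minimum share of its parent's total. The thresholds and the tiny terminology below are invented for illustration; the paper explores several filters and their combinations.

    ```python
    from collections import defaultdict

    def depth(node, parent):
        d = 0
        while node in parent:
            node, d = parent[node], d + 1
        return d

    def roll_up(counts, parent):
        """Propagate usage counts up a hierarchical terminology: each node's
        total is its own count plus the totals of its descendants (assumes
        `parent` encodes a tree without cycles)."""
        totals = defaultdict(int, counts)
        for node in sorted(parent, key=lambda n: -depth(n, parent)):
            totals[parent[node]] += totals[node]
        return totals

    def reduce_graph(totals, parent, min_count=10, min_share=0.05):
        """Keep nodes that are used often enough and contribute a minimum share
        of their parent's total (illustrative thresholds)."""
        kept = set()
        for node, total in totals.items():
            p_total = totals[parent[node]] if node in parent else 0
            share = total / p_total if p_total else 1.0
            if total >= min_count and share >= min_share:
                kept.add(node)
        return kept

    # hypothetical toy terminology and query counts
    parent = {"chest pain": "symptoms", "dyspnea": "symptoms", "symptoms": "findings"}
    counts = {"chest pain": 40, "dyspnea": 3, "symptoms": 5}
    print(sorted(reduce_graph(roll_up(counts, parent), parent)))
    ```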

  6. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE PAGES

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    2017-03-02

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.
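
    For readers unfamiliar with reinitialization, the sketch below relaxes a level set field toward a signed distance function by integrating φ_τ = sign(φ₀)(1 − |∇φ|) with a Godunov upwind gradient, on a plain Cartesian grid with periodic boundaries for brevity; the paper's tetrahedral AMR setting and its new upwind gradient are not reproduced.

    ```python
    import numpy as np

    def reinitialize(phi0, h=1.0, dt=0.3, n_iter=50):
        """Iteratively relax phi toward a signed distance function by solving
        d(phi)/dtau = sign(phi0) * (1 - |grad phi|) with a Godunov upwind
        gradient; np.roll gives periodic boundaries for brevity."""
        phi = phi0.copy()
        s = phi0 / np.sqrt(phi0**2 + h**2)          # smoothed sign function
        for _ in range(n_iter):
            # one-sided differences
            dxm = (phi - np.roll(phi, 1, axis=1)) / h
            dxp = (np.roll(phi, -1, axis=1) - phi) / h
            dym = (phi - np.roll(phi, 1, axis=0)) / h
            dyp = (np.roll(phi, -1, axis=0) - phi) / h
            # Godunov approximation of |grad phi|^2
            a = np.where(s > 0,
                         np.maximum(np.maximum(dxm, 0)**2, np.minimum(dxp, 0)**2),
                         np.maximum(np.minimum(dxm, 0)**2, np.maximum(dxp, 0)**2))
            b = np.where(s > 0,
                         np.maximum(np.maximum(dym, 0)**2, np.minimum(dyp, 0)**2),
                         np.maximum(np.minimum(dym, 0)**2, np.maximum(dyp, 0)**2))
            phi = phi + dt * s * (1.0 - np.sqrt(a + b))
        return phi
    ```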

  7. Determining and broadening the definition of impact from implementing a rational priority setting approach in a healthcare organization.

    PubMed

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2014-08-01

    Techniques to manage scarce healthcare resources continue to evolve in response to changing, growing and competing demands. Yet there is no standard definition in the priority setting literature of what might constitute the desired impact or success of resource management activities. In this 2006-09 study, using action research methodology, we determined the impact of implementing a formal priority setting model, Program Budgeting and Marginal Analysis (PBMA), in a Canadian health authority. Qualitative data were collected through post year-1 (n = 12) and year-2 (n = 9) participant interviews, meeting observation and document review. Interviews were analyzed using a constant comparison technique to identify major themes. Impact can be defined as effects at three levels: system, group, and individual. System-level impact can be seen in the actual selection of priorities and resource re-allocation. In this case, participants prioritized a list of $760,000 worth of investment proposals and $38,000 of disinvestment proposals; however, there was no clear evidence as to whether financial resources were reallocated as a result. Group and individual impacts, less frequently reported in the literature, included changes in priority setting knowledge, attitudes and practice. PBMA impacts at these three levels were found to be interrelated. This work argues in favor of attempts to expand the definition of priority setting success by including both desired system-level outcomes like resource re-allocation and individual or group level impacts like changes to priority setting knowledge, attitudes and practice. These latter impacts are worth pursuing as they appear to be intrinsic to successful system-wide priority setting. A broader definition of PBMA impact may also suggest conceptualizing PBMA as both a priority setting approach and as a tool to develop individual and group priority setting knowledge and practice. These results should be of interest to researchers and decision makers using or considering a formal priority setting approach to manage scarce healthcare resources. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In an Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  10. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical kappa-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  11. Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azarm, M.A.; Hall, R.E.

    1982-01-01

    A plausible approach is proposed for translating a single level criterion to a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for various core melt sequences, systems and component unavailabilities. These guidelines can be used as a means for making decisions regarding the necessity for replacing a component or improving part of a safety system. This approach is applied to estimate a set of numerical guidelines for various sequences of core melts that are analyzed in Reactor Safety Study for the Peach Bottom Nuclear Power Plant.

  12. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.

  13. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    NASA Astrophysics Data System (ADS)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of that joint probability to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of the logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of the inverse-FORM type, which is simpler and allows a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte-Carlo simulation and design contours methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex for the Monte-Carlo simulation method.
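
    To fix ideas on the Monte-Carlo route, the sketch below draws correlated (still water level, wave height) events from a Gaussian copula with Gumbel margins, adds a crude wave set-up term proportional to the wave height, and reads an empirical return level off the simulated values treated as annual maxima. Every distribution, parameter value and the set-up factor is a placeholder, not a value from the Cherbourg study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_total_levels(n_events, rho=0.6,
                              swl_params=(4.0, 0.15),   # Gumbel loc/scale, still water level (m)
                              hs_params=(2.5, 0.40),    # Gumbel loc/scale, wave height (m)
                              setup_factor=0.2):
        """Draw correlated (still water level, Hs) events from a Gaussian copula
        with Gumbel margins, then add a crude set-up term setup_factor * Hs.
        All values here are illustrative assumptions."""
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_events)
        u = stats.norm.cdf(z)                                   # copula uniforms
        swl = stats.gumbel_r.ppf(u[:, 0], *swl_params)
        hs = stats.gumbel_r.ppf(u[:, 1], *hs_params)
        return swl + setup_factor * hs

    def return_level(annual_maxima, return_period_years):
        """Empirical level exceeded with probability 1/T per year."""
        return np.quantile(annual_maxima, 1.0 - 1.0 / return_period_years)

    maxima = simulate_total_levels(100_000)   # each draw read as one annual maximum
    print(return_level(maxima, 100.0))
    ```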

  14. Answer Sets in a Fuzzy Equilibrium Logic

    NASA Astrophysics Data System (ADS)

    Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine

    Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.

  15. [Settings-based prevention of overweight in childhood and adolescents : Theoretical foundation, determinants and intervention planning].

    PubMed

    Quilling, Eike; Dadaczynski, Kevin; Müller, Merle

    2016-11-01

    Childhood and adolescent overweight can still be seen as a global public health problem. Based on our socioeconomic understanding, overweight is the result of a complex interplay of a diverse array of factors acting on different levels. Hence, in addition to individual-level determinants, overweight prevention should also address environment-related factors as part of a holistic and integrated setting approach. This paper aims to discuss the setting approach with regard to overweight prevention in childhood and adolescence. In addition to a summary of environmental factors and their empirical influence on the determinants of overweight, theoretical approaches and planning models of settings-based overweight prevention are discussed. While settings can be characterized as specific social-spatial subsystems (e.g. kindergartens, schools), living environments relate to complex subject-oriented environments that may include various subsystems. Direct social contexts, educational contexts and community contexts as relevant systems for young people contain different evidence-based influences that need to be taken into account in settings-based overweight prevention. To support theory-driven intervention, numerous planning models exist, which are presented here. Given the strengthening of environments for health within the prevention law, the underlying settings approach also needs further development with regard to overweight prevention. This includes the improvement of the theoretical foundation by aligning intervention practice with planning models, which also has a positive influence on the ability to measure its success.

  16. GeneTopics - interpretation of gene sets via literature-driven topic models

    PubMed Central

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly tied to relevant articles in the literature. Extending a general topic model method, the approach introduced here establishes a workflow for the interpretation of gene sets generated from diverse experimental scenarios that can complement the classical approach of comparison to reference gene sets. PMID:24564875
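
    A compact way to reproduce the workflow's core on one's own corpus is sketched below with scikit-learn: fit LDA to the gene-specific documents, then score each gene's relevancy to each topic as the mean topic weight of its documents and keep the pairs above a threshold. The topic count, threshold and input format are assumptions; the paper's automatic selection of the number of topics and its relevancy filtering are not implemented.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def gene_topics(docs, doc_genes, n_topics=5, min_relevancy=0.3):
        """Fit a topic model to a gene-specific literature corpus and score each
        gene's relevancy to each topic as the mean topic weight of the documents
        retrieved for that gene. `docs` is a list of abstracts and `doc_genes[i]`
        the genes associated with docs[i]; all thresholds are illustrative."""
        X = CountVectorizer(stop_words="english", min_df=2).fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        doc_topic = lda.fit_transform(X)                     # rows sum to 1
        genes = sorted({g for gs in doc_genes for g in gs})
        assignment = {}
        for gene in genes:
            rows = [i for i, gs in enumerate(doc_genes) if gene in gs]
            relevancy = doc_topic[rows].mean(axis=0)
            assignment[gene] = np.where(relevancy >= min_relevancy)[0].tolist()
        return assignment
    ```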

  17. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables, along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.
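
    The nesting can be illustrated with a toy two-level optimization: the top level chooses a baseline wing area and a morphing (area-growth) limit to minimize gross weight, while a sub-problem per mission segment picks the in-flight area, within the limits set above, that minimizes that segment's fuel. The surrogate weight and fuel models and all constants below are purely hypothetical, not the report's sizing models.

    ```python
    import numpy as np
    from scipy.optimize import minimize, minimize_scalar

    # toy surrogate models (purely hypothetical, for structure only)
    def empty_weight(base_area, area_growth):
        # penalize both absolute size and the morphing range the structure allows
        return 5000.0 + 40.0 * base_area + 25.0 * base_area * area_growth

    def segment_fuel(area, segment):
        best_area, scale = segment      # each segment prefers a different area
        return scale * (1.0 + 0.002 * (area - best_area) ** 2)

    SEGMENTS = [(30.0, 800.0), (55.0, 1500.0), (40.0, 600.0)]   # (best area m^2, fuel scale kg)

    def mission_fuel(base_area, area_growth):
        """Sub-level problems: per mission segment, choose the in-flight wing area
        within the morphing limits set by the top level to minimize fuel burn."""
        lo, hi = base_area, base_area * (1.0 + area_growth)
        return sum(minimize_scalar(segment_fuel, bounds=(lo, hi), method="bounded",
                                   args=(seg,)).fun for seg in SEGMENTS)

    def gross_weight(x):
        """Top-level objective: empty weight plus the fuel returned by the
        sub-level optimizations, as a function of baseline area and morphing limit."""
        base_area, area_growth = x
        return empty_weight(base_area, area_growth) + mission_fuel(base_area, area_growth)

    res = minimize(gross_weight, x0=[35.0, 0.3],
                   bounds=[(20.0, 60.0), (0.05, 1.0)], method="L-BFGS-B")
    print(res.x, res.fun)
    ```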

  18. Achievement goals, social goals, and motivational regulations in physical education settings.

    PubMed

    Cecchini Estrada, José A; González González-Mesa, Carmen; Méndez-Giménez, Antonio; Fernández-Río, Javier

    2011-02-01

    This study examined the relationship between achievement and social goals, and explored how both goals affect students' level of informed self-determination in Physical Education. Participants were 395 high school students. Three scales were used to assess achievement, social goals, and motivation. Several hierarchical regression analyses revealed that mastery-approach goals were the greatest contributors to the individuals' levels of self-determination. Achievement and social goals were found to be separate predictors of students' levels of self-determination, and this highlights the importance of separating mastery and performance goals into avoidance and approach profiles. Girls reported significantly higher values than boys on responsibility, relationship, and mastery-avoidance goals, whereas boys scored higher on performance-approach goals. Researchers could use achievement and social goals to study students' motivation and achievement in Physical Education settings.

  19. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    PubMed

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR images data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than CRG and 3D level set.

  20. Setting research priorities by applying the combined approach matrix.

    PubMed

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it could be applied in different settings, giving examples and describing challenges encountered in the process of setting research priorities and providing recommendations for further work in this field. The construct and design of the CAM is explained along with different steps needed, including planning and organization of a priority-setting exercise and how it could be applied in different settings. The application of the CAM are described by using three examples. The first concerns setting research priorities for a global programme, the second describes application at the country level and the third setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  1. Acceptable health and priority weighting: Discussing a reference-level approach using sufficientarian reasoning.

    PubMed

    Wouters, S; van Exel, N J A; Rohde, K I M; Vromen, J J; Brouwer, W B F

    2017-05-01

    Health care systems are challenged in allocating scarce health care resources, which are typically insufficient to fulfil all health care wants and needs. One criterion for priority setting may be the 'acceptable health' approach, which suggests that society may want to assign higher priority to health benefits in people with "unacceptable" than in people with "acceptable" health. A level of acceptable health then serves as a reference point for priority setting. Empirical research has indicated that people may be able and willing to define health states as "unacceptable" or "acceptable", but little attention has been given to the normative implications of evaluating health benefits in relation to a reference level of acceptable health. The current paper aims to address this gap by relating insights from the distributive justice literature, i.e. the sufficientarian literature, to the acceptable health approach, as we argue that these approaches are related. We specifically focus on the implications of an 'acceptability' approach for priority weighting of health benefits, derived from sufficientarian reasoning and debates, and assess the moral implications of such weighting. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone. The analysis of cortical bone requires accurate segmentation methods. Level set methods are usually among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphology operations, respectively. A qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in more stable estimation of the parameters of cortical bone. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
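    As a hedged illustration of the porosity step only (a rough sketch under my own assumptions; the paper's exact morphological pipeline and the sphere-fitting thickness estimate are not reproduced here), intracortical pores can be approximated as cavities that a morphological closing of the cortical mask fills in:

```python
import numpy as np
from scipy.ndimage import binary_closing

def cortical_porosity(cortex_mask, radius=3):
    """Estimate porosity as pore volume / (pore + bone volume).

    cortex_mask: 3D boolean array, True where cortical bone was segmented.
    radius: radius (in voxels) of the ball structuring element; an assumption.
    """
    r = radius
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (xx**2 + yy**2 + zz**2) <= r**2
    closed = binary_closing(cortex_mask, structure=ball)
    pores = closed & ~cortex_mask          # cavities filled in by the closing
    return pores.sum() / closed.sum()
```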

  3. Efficient level set methods for constructing wavefronts in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Tien

    2007-10-01

    Wavefront construction in geometrical optics has long faced the twin difficulties of dealing with multi-valued forms and resolution of wavefront surfaces. A recent change in viewpoint, however, has demonstrated that working in phase space on bicharacteristic strips using eulerian methods can bypass both difficulties. The level set method for interface dynamics makes a suitable choice for the eulerian method. Unfortunately, in three-dimensional space, the setting of interest for most practical applications, the advantages of this method are largely offset by a new problem: the high dimension of phase space. In this work, we present new types of level set algorithms that remove this obstacle and demonstrate their abilities to accurately construct wavefronts under high resolution. These results propel the level set method forward significantly as a competitive approach in geometrical optics under realistic conditions.

  4. Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Sahraei, S.

    2016-12-01

    Multi-objective optimization (MO) aids in supporting the decision-making process in water resources engineering and design problems. One of the main goals of solving an MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions, where the archiving overhead can dominate the overall run time. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach, each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
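    A minimal sketch of the two archiving strategies discussed above (purely illustrative; the simplified epsilon-box bookkeeping and the rounding rule here are generic textbook versions of the ideas, not the authors' implementation):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def epsilon_archive(solutions, eps):
    """Keep at most one solution per epsilon grid cell (simplified bookkeeping)."""
    boxes = {}
    for f in solutions:
        f = np.asarray(f, dtype=float)
        box = tuple(np.floor(f / eps).astype(int))
        if box not in boxes or dominates(f, boxes[box]):
            boxes[box] = f
    return list(boxes.values())

def rounded_archive(solutions, eps):
    """Round objectives to the desired precision, then keep non-dominated ones."""
    archive = []
    for f in solutions:
        fr = np.round(np.asarray(f, dtype=float) / eps) * eps
        if any(dominates(a, fr) or np.array_equal(a, fr) for a in archive):
            continue
        archive = [a for a in archive if not dominates(fr, a)]
        archive.append(fr)
    return archive

fronts = np.random.rand(200, 2)
print(len(epsilon_archive(fronts, 0.1)), len(rounded_archive(fronts, 0.1)))
```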

  5. An arbitrary-order Runge–Kutta discontinuous Galerkin approach to reinitialization for banded conservative level sets

    DOE PAGES

    Jibben, Zechariah Joel; Herrmann, Marcus

    2017-08-24

    Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.
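    For orientation, conservative reinitialization of the hyperbolic-tangent level set profile is commonly written in the Olsson–Kreiss form below; that this is precisely the form discretized in the paper is my assumption.

```latex
\frac{\partial \psi}{\partial \tau}
  + \nabla \cdot \bigl( \psi (1 - \psi)\, \hat{\mathbf{n}} \bigr)
  = \nabla \cdot \bigl( \varepsilon \, (\nabla \psi \cdot \hat{\mathbf{n}})\, \hat{\mathbf{n}} \bigr),
\qquad
\hat{\mathbf{n}} = \frac{\nabla \psi}{\lvert \nabla \psi \rvert},
```

    where ψ is the conservative level set profile, τ a pseudo-time, and ε controls the profile thickness held fixed under grid refinement.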

  6. Numerical Simulation of Dynamic Contact Angles and Contact Lines in Multiphase Flows using Level Set Method

    NASA Astrophysics Data System (ADS)

    Pendota, Premchand

    Many physical phenomena and industrial applications involve multiphase fluid flows, and hence it is of high importance to be able to simulate various aspects of these flows accurately. The Dynamic Contact Angles (DCA) and the contact lines at the wall boundaries are two such important aspects. In the past few decades, many mathematical models have been developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in multiphase CFD codes, LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of the macroscopic interface shape are dealt with in the context of the level set approach.

  7. An efficient mass-preserving interface-correction level set/ghost fluid method for droplet suspensions under depletion forces

    NASA Astrophysics Data System (ADS)

    Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca

    2018-01-01

    Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method; global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation, which is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.

  8. Level-Set Topology Optimization with Aeroelastic Constraints

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  9. Bartnik’s splitting conjecture and Lorentzian Busemann function

    NASA Astrophysics Data System (ADS)

    Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof

    2018-05-01

    In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using additional assumptions such as the 'S-ray condition' and the 'level set condition'. It is known that the 'S-ray condition' yields the 'level set condition'. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the 'level set condition'. In addition, we have shown several properties of the cosmological space-time in the presence of the 'level set condition'. Finally, we have provided a proof of the conjecture under a different assumption on the cosmological space-time. Along the way, we first prove some results without the timelike convergence condition, which help us establish our proofs.

  10. Hybrid method for moving interface problems with application to the Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, T.Y.; Li, Zhilin; Osher, S.

    In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients or/and singular sources along the interfaces. The moving interfaces then are updated using the newly developed fast level set formulation which involves computation only inside some small tubes containing the interfaces. This method combines the advantage of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. This new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosity. 40 refs., 10 figs., 3 tabs.

  11. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  12. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    NASA Astrophysics Data System (ADS)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  13. Development of a coupled level set and immersed boundary method for predicting dam break flows

    NASA Astrophysics Data System (ADS)

    Yu, C. H.; Sheu, Tony W. H.

    2017-12-01

    Dam-break flow over an immersed stationary object is investigated using a coupled level set (LS)/immersed boundary (IB) method developed in Cartesian grids. This approach adopts an improved interface preserving level set method which includes three solution steps and the differential-based interpolation immersed boundary method to treat fluid-fluid and solid-fluid interfaces, respectively. In the first step of this level set method, the level set function ϕ is advected by a pure advection equation. The intermediate step is performed to obtain a new level set value through a new smoothed Heaviside function. In the final solution step, a mass correction term is added to the re-initialization equation to ensure the new level set is a distance function and to conserve the mass bounded by the interface. For accurately calculating the level set value, the four-point upwinding combined compact difference (UCCD) scheme with three-point boundary combined compact difference scheme is applied to approximate the first-order derivative term shown in the level set equation. For the immersed boundary method, application of the artificial momentum forcing term at points in cells consisting of both fluid and solid allows an imposition of velocity condition to account for the presence of solid object. The incompressible Navier-Stokes solutions are calculated using the projection method. Numerical results show that the coupled LS/IB method can not only predict interface accurately but also preserve the mass conservation excellently for the dam-break flow.

  14. Topology optimization in acoustics and elasto-acoustics via a level-set method

    NASA Astrophysics Data System (ADS)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  15. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization plus value-added surface area and volume measurements for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experiment results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. The Thick Level-Set model for dynamic fragmentation

    DOE PAGES

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    2017-01-04

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  17. Automatic Rooftop Extraction in Stereo Imagery Using Distance and Building Shape Regularized Level Set Evolution

    NASA Astrophysics Data System (ADS)

    Tian, J.; Krauß, T.; d'Angelo, P.

    2017-05-01

    Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-height pixels as well as high pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented using the edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
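    A rough sketch of the nDSM step described above (illustrative only; the array names, the height threshold, and the use of a plain threshold in place of the paper's step-edge DTM generation and tree/shadow filtering are my assumptions):

```python
import numpy as np
from scipy.ndimage import label

def rooftop_candidates(dsm, dtm, min_height=2.5, min_pixels=50):
    """Derive candidate rooftop regions from stereo-derived height data.

    dsm: digital surface model (terrain plus objects), 2D array in metres.
    dtm: digital terrain model (bare earth), 2D array in metres.
    """
    ndsm = dsm - dtm                  # normalized DSM: height above ground
    mask = ndsm > min_height          # keep elevated pixels only
    labels, _ = label(mask)           # connected components of elevated pixels
    sizes = np.bincount(labels.ravel())
    valid = np.nonzero(sizes >= min_pixels)[0]
    valid = valid[valid != 0]         # label 0 is background
    return np.isin(labels, valid)     # boolean mask of candidate rooftops
```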

  18. Interfaces and hydrophobic interactions in receptor-ligand systems: A level-set variational implicit solvent approach.

    PubMed

    Cheng, Li-Tien; Wang, Zhongming; Setny, Piotr; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew

    2009-10-14

    A model nanometer-sized hydrophobic receptor-ligand system in aqueous solution is studied by the recently developed level-set variational implicit solvent model (VISM). This approach is compared to all-atom computer simulations. The simulations reveal complex hydration effects within the (concave) receptor pocket, sensitive to the distance of the (convex) approaching ligand. The ligand induces and controls an intermittent switching between dry and wet states of the hosting pocket, which determines the range and magnitude of the pocket-ligand attraction. In the level-set VISM, a geometric free-energy functional of all possible solute-solvent interfaces coupled to the local dispersion potential is minimized numerically. This approach captures the distinct metastable states that correspond to topologically different solute-solvent interfaces, and thereby reproduces the bimodal hydration behavior observed in the all-atom simulation. Geometrical singularities formed during the interface relaxation are found to contribute significantly to the energy barrier between different metastable states. While the hydration phenomena can thus be explained by capillary effects, the explicit inclusion of dispersion and curvature corrections seems to be essential for a quantitative description of hydrophobically confined systems on nanoscales. This study may shed more light onto the tight connection between geometric and energetic aspects of biomolecular hydration and may represent a valuable step toward the proper interpretation of experimental receptor-ligand binding rates.

  19. Numerical Simulation of Hydrodynamics of a Heavy Liquid Drop Covered by Vapor Film in a Water Pool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, W.M.; Yang, Z.L.; Giri, A.

    2002-07-01

    A numerical study on the hydrodynamics of a droplet covered by a vapor film in a water pool is carried out. Two level set functions are used to implicitly capture the interfaces among three immiscible fluids (melt-drop, vapor and coolant). This approach leaves only one set of conservation equations for the three phases. A high-order Navier-Stokes solver, the Cubic-Interpolated Pseudo-Particle (CIP) algorithm, is employed in combination with the level set approach, which allows large density ratios (up to 1000), surface tension and jumps in viscosity. By this calculation, the hydrodynamic behavior of a melt droplet falling into a volatile coolant is simulated, which is of great significance for revealing the mechanism of steam explosion during a hypothetical severe reactor accident.

  20. Training a cell-level classifier for detecting basal-cell carcinoma by combining human visual attention maps with low-level handcrafted features

    PubMed Central

    Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo

    2017-01-01

    Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, of which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis, while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
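    The patient-level separation of training and test patches described above can be enforced with a grouped split. A minimal, hypothetical sketch follows; the feature matrix, labels and patient identifiers are placeholders, and the RBF kernel choice is my assumption rather than the paper's setting.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.svm import SVC

# Placeholder data: one row of handcrafted nuclei features per image patch.
X = np.random.rand(274, 32)               # 274 patches, 32 features each
y = np.random.randint(0, 2, 274)          # 1 = cancer present in the patch
patients = np.random.randint(0, 7, 274)   # patient id per patch (7 evaluation cases)

# Split so that no patient contributes patches to both training and test sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.35, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patients))

clf = SVC(kernel="rbf", probability=True).fit(X[train_idx], y[train_idx])
print("patch-level accuracy:", clf.score(X[test_idx], y[test_idx]))
```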

  1. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.

    PubMed

    Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel

    2011-05-23

    Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (provided there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
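    For reference, the dichotomized-outcome model being fitted across these packages is, in its simplest one-random-effect form (the notation below is mine, not the paper's):

```latex
\operatorname{logit}\,\Pr(y_{ij} = 1 \mid u_j)
  = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
\qquad
u_j \sim \mathcal{N}(0, \sigma_u^{2}),
```

    where y_{ij} is the outcome of patient i in center j, x_{ij} holds the covariates (age, motor score, pupil reactivity, trial), and u_j is the random center effect; the two-random-effect variant adds an analogous independent term for trial.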

  2. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    PubMed Central

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2016-01-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach. PMID:27497546

  3. Students' Adoption of Course-Specific Approaches to Learning in Two Parallel Courses

    ERIC Educational Resources Information Center

    Öhrstedt, Maria; Lindfors, Petra

    2016-01-01

    Research on students' adoption of course-specific approaches to learning in parallel courses is limited and inconsistent. This study investigated second-semester psychology students' levels of deep, surface and strategic approaches in two courses running in parallel within a real-life university setting. The results showed significant differences…

  4. A Top-Down Approach to Designing the Computerized Adaptive Multistage Test

    ERIC Educational Resources Information Center

    Luo, Xiao; Kim, Doyoung

    2018-01-01

    The top-down approach to designing a multistage test is relatively understudied in the literature and underused in research and practice. This study introduced a route-based top-down design approach that directly sets design parameters at the test level and utilizes the advanced automated test assembly algorithm seeking global optimality. The…

  5. Leadership through Partnership: A Collaborative, Strengths-Based Approach to Strategic Planning

    ERIC Educational Resources Information Center

    Randolph, Loretta H.

    2006-01-01

    Organization sustainability depends on the creation of innovative partnerships that engage organization members at all levels in setting strategy and achieving goals. Appreciative Inquiry (AI) is an approach to exploring what gives life to human systems when they function at their best. The use of an AI approach to strategic planning, invites…

  6. Modular Approach for Ethics

    ERIC Educational Resources Information Center

    Wyne, Mudasser F.

    2010-01-01

    It is hard to define a single set of ethics that will cover an entire computer users community. In this paper, the issue is addressed in reference to code of ethics implemented by various professionals, institutes and organizations. The paper presents a higher level model using hierarchical approach. The code developed using this approach could be…

  7. Nested Tracking Graphs

    DOE PAGES

    Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...

    2017-06-01

    Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.

  8. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  9. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.

    PubMed

    Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo

    2017-01-01

    We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  10. Communication: Density functional theory embedding with the orthogonality constrained basis set expansion procedure

    NASA Astrophysics Data System (ADS)

    Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon

    2017-06-01

    Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.

  11. Priority setting in healthcare: towards guidelines for the program budgeting and marginal analysis framework.

    PubMed

    Peacock, Stuart J; Mitton, Craig; Ruta, Danny; Donaldson, Cam; Bate, Angela; Hedden, Lindsay

    2010-10-01

    Economists' approaches to priority setting focus on the principles of opportunity cost, marginal analysis and choice under scarcity. These approaches are based on the premise that it is possible to design a rational priority setting system that will produce legitimate changes in resource allocation. However, beyond issuing guidance at the national level, economic approaches to priority setting have had only a moderate impact in practice. In particular, local health service organizations - such as health authorities, health maintenance organizations, hospitals and healthcare trusts - have had difficulty implementing evidence from economic appraisals. Yet, in the context of making decisions between competing claims on scarce health service resources, economic tools and thinking have much to offer. The purpose of this article is to describe and discuss ten evidence-based guidelines for the successful design and implementation of a program budgeting and marginal analysis (PBMA) priority setting exercise. PBMA is a framework that explicitly recognizes the need to balance pragmatic and ethical considerations with economic rationality when making resource allocation decisions. While the ten guidelines are drawn from the PBMA framework, they may be generalized across a range of economic approaches to priority setting.

  12. Bioinformatics/biostatistics: microarray analysis.

    PubMed

    Eichler, Gabriel S

    2012-01-01

    The quantity and complexity of the molecular-level data generated in both research and clinical settings require the use of sophisticated, powerful computational interpretation techniques. It is for this reason that bioinformatic analysis of complex molecular profiling data has become a fundamental technology in the development of personalized medicine. This chapter provides a high-level overview of the field of bioinformatics and outlines several, classic bioinformatic approaches. The highlighted approaches can be aptly applied to nearly any sort of high-dimensional genomic, proteomic, or metabolomic experiments. Reviewed technologies in this chapter include traditional clustering analysis, the Gene Expression Dynamics Inspector (GEDI), GoMiner (GoMiner), Gene Set Enrichment Analysis (GSEA), and the Learner of Functional Enrichment (LeFE).

  13. Biomedical image segmentation using geometric deformable models and metaheuristics.

    PubMed

    Mesejo, Pablo; Valsecchi, Andrea; Marrakchi-Kacem, Linda; Cagnoni, Stefano; Damas, Sergio

    2015-07-01

    This paper describes a hybrid level set approach for medical image segmentation. This new geometric deformable model combines region- and edge-based information with the prior shape knowledge introduced using deformable registration. Our proposal consists of two phases: training and test. The former implies the learning of the level set parameters by means of a Genetic Algorithm, while the latter is the proper segmentation, where another metaheuristic, in this case Scatter Search, derives the shape prior. In an experimental comparison, this approach has shown a better performance than a number of state-of-the-art methods when segmenting anatomical structures from different biomedical image modalities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Data Policy Construction Set - Building Blocks from Childhood Constructions

    NASA Astrophysics Data System (ADS)

    Fleischer, Dirk; Paul-Stueve, Thilo; Jobmann, Alexandra; Farrenkopf, Stefan

    2016-04-01

    A complete construction set of building blocks usually comes with instructions, and these instructions include building stages. The products of these building stages are usually built from very general parts that become highly specialized parts for unique features of the whole construction model. This sounds very much like the construction or organization of an interdisciplinary research project, institution or association. The creation of an overarching data policy for a project group or institution is exactly this combination of individual interests with the common goal of a collaborative data policy, and can be compared with the building stages of a construction set of building blocks and its instructions. With this in mind, we created the data policy construction set of textual building blocks. The construction set is subdivided into several building stages or parts, each containing multiple building blocks as text blocks. By combining building blocks from all subdivisions, a cascading data policy document can be created: the top level acts as a construction set provider for all lower levels, such as projects, themes and work packages, or universities, faculties and institutes, down to the working level of working groups. Each working group picks from the remaining building blocks in the provided construction set the blocks suitable for its working procedures, creating a very specific policy from the construction set provided by the top-level community. If a working group finds that building blocks, or worse, entire building parts are missing, it can add the missing pieces to the construction set for direct and future use. This cascading approach enables project- or institution-wide application of the encoded rules, from the textual level down to access rules for the data storage infrastructure. The structured approach is flexible enough to accommodate the fact that interdisciplinary research projects always bring together a very diverse set of working habits, methods and requirements, all of which need to be considered when creating the general document on data sharing and research data management. The approach follows the recommendation of the RDA practical policy working group to implement practical policies derived from the textual level. It therefore aims to move data policy creation and implementation towards the consortium or institutional formation stage, with all the benefits of an existing data policy construction set already available during proposal creation and proposal review. Picking up the metaphor of real building blocks in the context of data policies also provides the insight that existing building blocks and building parts can be reused as they are, or redesigned with small changes or a full overhaul.

  15. A comprehensive dwelling unit choice model accommodating psychological constructs within a search strategy for consideration set formation.

    DOT National Transportation Integrated Search

    2015-12-01

    This study adopts a dwelling unit level of analysis and considers a probabilistic choice set generation approach for residential choice modeling. In doing so, we accommodate the fact that housing choices involve both characteristics of the dwelling u...

  16. Segmentation Using Multispectral Adaptive Contours

    DTIC Science & Technology

    2004-02-29

    Boundary-based active contour models are reviewed in this report, including the geometric active contours proposed by Caselles et al. [2] and by Malladi and Sethian [13]. Recoverable reference fragments: [13] R. Malladi, J. Sethian, "Image Processing via Level Set Curvature Flow," Proceedings of the National Academy of Sciences, vol. 92, p. 7046, 1995; [14] R. Malladi, J. Sethian, C. Vemuri, "Shape Modeling with Front Propagation: A Level Set Approach," IEEE Transactions on…

  17. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    PubMed Central

    Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.

    2013-01-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592
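    The container-of-terms design described above can be illustrated with a minimal sketch. This is a generic Python illustration of the idea only, not the actual ITK v4 C++ API; the class and function names here are invented for the example.

```python
import numpy as np

class LevelSetEvolution:
    """Evolve phi by d(phi)/dt = weighted sum of pluggable PDE terms."""

    def __init__(self):
        self.terms = []                       # container of (weight, term) pairs

    def add_term(self, weight, term_fn):
        self.terms.append((weight, term_fn))  # terms can be added or removed freely

    def step(self, phi, dt):
        update = np.zeros_like(phi)
        for weight, term_fn in self.terms:
            update += weight * term_fn(phi)
        return phi + dt * update

# Example terms: a constant propagation (balloon) force and a smoothing term.
def propagation(phi):
    gx, gy = np.gradient(phi)
    return -np.sqrt(gx**2 + gy**2)

def smoothing(phi):
    gxx = np.gradient(np.gradient(phi, axis=0), axis=0)
    gyy = np.gradient(np.gradient(phi, axis=1), axis=1)
    return gxx + gyy                          # Laplacian as a cheap curvature proxy

evolver = LevelSetEvolution()
evolver.add_term(1.0, propagation)
evolver.add_term(0.2, smoothing)

phi = np.random.rand(64, 64) - 0.5
for _ in range(10):
    phi = evolver.step(phi, dt=0.1)
```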

  19. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  20. Multi person detection and tracking based on hierarchical level-set method

    NASA Astrophysics Data System (ADS)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix used as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments are performed on challenging video sequences and show the effectiveness of the proposed method.
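    The covariance region descriptor mentioned above is commonly built as the covariance of per-pixel feature vectors over a region. A minimal sketch under my own choice of features (position, colour, and gradient magnitudes); the paper's exact feature set may differ.

```python
import numpy as np

def region_covariance(image, mask):
    """Covariance descriptor of the pixels selected by a boolean mask.

    image: H x W x 3 float array (RGB); mask: H x W boolean array.
    Returns a d x d covariance matrix over per-pixel feature vectors.
    """
    ys, xs = np.nonzero(mask)
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    feats = np.stack([
        xs, ys,                                                  # position
        image[ys, xs, 0], image[ys, xs, 1], image[ys, xs, 2],    # colour
        np.abs(gx[ys, xs]), np.abs(gy[ys, xs]),                  # gradient magnitudes
    ], axis=0).astype(float)
    return np.cov(feats)                                         # here 7 x 7

img = np.random.rand(120, 160, 3)
msk = np.zeros((120, 160), dtype=bool)
msk[30:90, 40:110] = True
print(region_covariance(img, msk).shape)   # (7, 7)
```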

  1. Quantum-chemical investigation of the structures and electronic spectra of the nucleic acid bases at the coupled cluster CC2 level.

    PubMed

    Fleig, Timo; Knecht, Stefan; Hättig, Christof

    2007-06-28

    We study the ground-state structures and singlet- and triplet-excited states of the nucleic acid bases by applying the coupled cluster model CC2 in combination with a resolution-of-the-identity approximation for electron interaction integrals. Both basis set effects and the influence of dynamic electron correlation on the molecular structures are elucidated; the latter by comparing CC2 with Hartree-Fock and Møller-Plesset perturbation theory to second order. Furthermore, we investigate basis set and electron correlation effects on the vertical excitation energies and compare our highest-level results with experiment and other theoretical approaches. It is shown that small basis sets are insufficient for obtaining accurate results for excited states of these molecules and that the CC2 approach to dynamic electron correlation is a reliable and efficient tool for electronic structure calculations on medium-sized molecules.

  2. Automated extraction of decision rules for leptin dynamics--a rough sets approach.

    PubMed

    Brtka, Vladimir; Stokić, Edith; Srdić, Biljana

    2008-08-01

    A significant area in the field of medical informatics is concerned with the learning of medical models from low-level data. The goals of inducing models from data are twofold: analysis of the structure of the models so as to gain new insight into the unknown phenomena, and development of classifiers or outcome predictors for unseen cases. In this paper, we employ an approach based on the indiscernibility relation and rough sets theory to study certain questions concerning the design of a model based on if-then rules, from low-level data comprising 36 parameters, one of them leptin. To generate a model that is easy to read, interpret, and inspect, we have used the ROSETTA software system. The main goal of this work is to gain new insight into the phenomena of leptin levels as they interplay with other risk factors in obesity.

  3. Gene Selection and Cancer Classification: A Rough Sets Based Approach

    NASA Astrophysics Data System (ADS)

    Sun, Lijun; Miao, Duoqian; Zhang, Hongyun

    Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough sets theory, correspond to minimal sets of essential genes for discerning samples and are an efficient tool for gene selection. Due to the computational complexity of existing reduct algorithms, feature ranking is usually used as a first step to narrow down the gene space, and the top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the expression level difference between classes and each gene's contribution to classification, and present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes, and the classification accuracy is impressive.
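    As a hedged illustration of the gene-ranking idea only (a generic between-class separation score of my own choosing, similar to a signal-to-noise ratio; the paper's exact criterion additionally weighs each gene's contribution to classification and is not reproduced here):

```python
import numpy as np

def rank_genes(expr, labels, top_k=20):
    """Rank genes by between-class expression difference.

    expr: samples x genes matrix of expression levels.
    labels: binary class label per sample (0 / 1).
    """
    a, b = expr[labels == 0], expr[labels == 1]
    score = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (
        a.std(axis=0) + b.std(axis=0) + 1e-12
    )
    order = np.argsort(score)[::-1]
    return order[:top_k], score[order[:top_k]]
```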

  4. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high level and low level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather, reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
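
    For readers unfamiliar with MYCIN's inexact reasoning, the sketch below shows the standard certainty-factor combination rule that such a system applies when several rules support (or contradict) the same class. The function name and the example values are illustrative; the paper's exact CLIPS implementation is not reproduced here.

    ```python
    def combine_cf(cf1: float, cf2: float) -> float:
        """Combine two certainty factors with MYCIN's rule.

        Sketch of the inexact-reasoning combination the abstract refers to;
        certainty factors are assumed to lie in [-1, 1].
        """
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 - cf1 * cf2
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 + cf1 * cf2
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # e.g. two rules voting for "news" with CF 0.6 and 0.5 combine to 0.8
    print(combine_cf(0.6, 0.5))  # 0.8
    ```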

  5. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high level and low level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather, reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  6. What do District Health Planners in Tanzania think about improving priority setting using 'Accountability for Reasonableness'?

    PubMed Central

    Mshana, Simon; Shemilu, Haji; Ndawi, Benedict; Momburi, Roman; Olsen, Oystein Evjen; Byskov, Jens; Martin, Douglas K

    2007-01-01

    Background Priority setting in every health system is complex and difficult. In less wealthy countries the dominant approach to priority setting has been Burden of Disease (BOD) and cost-effectiveness analysis (CEA), which is helpful, but insufficient because it focuses on a narrow range of values (need and efficiency) and not the full range of relevant values, including legitimacy and fairness. 'Accountability for reasonableness' is a conceptual framework for legitimate and fair priority setting and is empirically based and ethically justified. It connects priority setting to broader, more fundamental, democratic deliberative processes that have an impact on social justice and equity. Can 'accountability for reasonableness' be helpful for improving priority setting in less wealthy countries? Methods In 2005, Tanzanian scholars from the Primary Health Care Institute (PHCI) conducted 6 capacity building workshops with senior health staff, district planners and managers, and representatives of the Tanzanian Ministry of Health to discuss improving priority setting in Tanzania using 'accountability for reasonableness'. The purpose of this paper is to describe this initiative and the participants' views about the approach. Results The approach to improving priority setting using 'accountability for reasonableness' was viewed by district decision makers with enthusiastic favour because it was the first framework that directly addressed their priority setting concerns. High-level Ministry of Health participants were also very supportive of the approach. Conclusion Both Tanzanian district and governmental health planners viewed the 'accountability for reasonableness' approach with enthusiastic favour because it was the first framework that directly addressed their concerns. PMID:17997824

  7. The Heats of Formation of GaCl3 and its Fragments

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Arnold, James (Technical Monitor)

    1998-01-01

    The heats of formation of GaCl3 and its fragments are computed. The geometries and frequencies are obtained at the B3LYP level. The CCSD(T) approach is used to solve the correlation problem. The effect of Ga 3d correlation is studied, and found to affect the bond energies by up to 1 kcal/mol. Both basis set extrapolation and bond functions are considered as ways to approach the basis set limit. Spin-orbit and scalar relativistic effects are also considered.

  8. Measuring Afterschool Program Quality Using Setting-Level Observational Approaches

    ERIC Educational Resources Information Center

    Oh, Yoonkyung; Osgood, D. Wayne; Smith, Emilie P.

    2015-01-01

    The importance of afterschool hours for youth development is widely acknowledged, and afterschool settings have recently received increasing attention as an important venue for youth interventions, bringing a growing need for reliable and valid measures of afterschool quality. This study examined the extent to which the two observational tools,…

  9. Process Evaluation Results from an Environmentally Focused Worksite Weight Management Study

    ERIC Educational Resources Information Center

    DeJoy, David M.; Wilson, Mark G.; Padilla, Heather M.; Goetzel, Ron Z.; Parker, Kristin B.; Della, Lindsay J.; Roemer, Enid C.

    2012-01-01

    There is currently much interest in exploring environmental approaches to combat weight gain and obesity. This study presents process evaluation results from a workplace-based study that tested two levels of environmentally focused weight management interventions in a manufacturing setting. The moderate treatment featured a set of relatively…

  10. Quality Measurement in Early Childhood Settings

    ERIC Educational Resources Information Center

    Zaslow, Martha, Ed.; Martinez-Beck, Ivelisse, Ed.; Tout, Kathryn, Ed.; Halle, Tamara, Ed.

    2011-01-01

    What constitutes quality in early childhood settings, and how can it best be measured with today's widely used tools and promising new approaches? Find authoritative answers in this book, a must-have for high-level administrators and policymakers as more and more states adopt early childhood Quality Rating and Improvement Systems. The most…

  11. Using the gene ontology to scan multilevel gene sets for associations in genome wide association studies.

    PubMed

    Schaid, Daniel J; Sinnwell, Jason P; Jenkins, Gregory D; McDonnell, Shannon K; Ingle, James N; Kubo, Michiaki; Goss, Paul E; Costantino, Joseph P; Wickerham, D Lawrence; Weinshilboum, Richard M

    2012-01-01

    Gene-set analyses have been widely used in gene expression studies, and some of the developed methods have been extended to genome wide association studies (GWAS). Yet, complications due to linkage disequilibrium (LD) among single nucleotide polymorphisms (SNPs), and variable numbers of SNPs per gene and genes per gene-set, have plagued current approaches, often leading to ad hoc "fixes." To overcome some of the current limitations, we developed a general approach to scan GWAS SNP data for both gene-level and gene-set analyses, building on score statistics for generalized linear models, and taking advantage of the directed acyclic graph structure of the gene ontology when creating gene-sets. However, other types of gene-set structures can be used, such as the popular Kyoto Encyclopedia of Genes and Genomes (KEGG). Our approach combines SNPs into genes, and genes into gene-sets, but assures that positive and negative effects of genes on a trait do not cancel. To control for multiple testing of many gene-sets, we use an efficient computational strategy that accounts for LD and provides accurate step-down adjusted P-values for each gene-set. Application of our methods to two different GWAS provide guidance on the potential strengths and weaknesses of our proposed gene-set analyses. © 2011 Wiley Periodicals, Inc.

  12. Simultaneous confidence sets for several effective doses.

    PubMed

    Tompsett, Daniel M; Biedermann, Stefanie; Liu, Wei

    2018-04-03

    Construction of simultaneous confidence sets for several effective doses currently relies on inverting the Scheffé type simultaneous confidence band, which is known to be conservative. We develop novel methodology to make the simultaneous coverage closer to its nominal level, for both two-sided and one-sided simultaneous confidence sets. Our approach is shown to be considerably less conservative than the current method, and is illustrated with an example on modeling the effect of smoking status and serum triglyceride level on the probability of the recurrence of a myocardial infarction. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Measuring fecundity with standardised estimates of expected pregnancies.

    PubMed

    Mikolajczyk, Rafael T; Stanford, Joseph B

    2006-11-01

    Approaches to measuring fecundity include the assessment of time to pregnancy and day-specific probabilities of conception (daily fecundities) indexed to a day of ovulation. In this paper, we develop an additional approach of calculating expected pregnancies based on daily fecundities indexed to the last day of the menstrual cycle. Expected pregnancies can thus be calculated while controlling for frequency and timing of coitus. Comparing observed pregnancies with expected pregnancies allows for a standardised comparison of fecundity between studies or groups within studies, and can be used to assess the effects of categorical covariates on the woman or couple level, and also on the cycle level. This can be accomplished in a minimal data set that does not necessarily require hormonal measurement or the explicit identification of ovulation. We demonstrate this approach by examining the effects of age and parity on fecundity in a data set from women monitoring their fertility cycles with the Creighton Model FertilityCare System.
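
    A minimal sketch of the idea of expected pregnancies computed from day-specific probabilities and recorded coitus, assuming independence across days. The probability values, array layout and function name are illustrative assumptions, not the authors' estimator.

    ```python
    import numpy as np

    def expected_pregnancies(day_probs, coitus_records):
        """Expected number of pregnancies over the observed cycles.

        `day_probs[d]` is an assumed day-specific probability of conception
        (indexed from the last day of the cycle) and each entry of
        `coitus_records` is a 0/1 array of the same length recording coitus
        on each day of one cycle. Days are treated as independent,
        a simplifying assumption.
        """
        expected = 0.0
        for coitus in coitus_records:
            p_no_conception = np.prod(1.0 - day_probs * np.asarray(coitus))
            expected += 1.0 - p_no_conception
        return expected

    rng = np.random.default_rng(0)
    p = np.concatenate([np.zeros(22), [0.05, 0.15, 0.25, 0.15, 0.05, 0.0]])  # made-up probabilities
    cycles = [rng.integers(0, 2, p.size) for _ in range(100)]                # made-up coitus records
    print(expected_pregnancies(p, cycles))
    ```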

  14. Pedagogical Approaches for Technology-Integrated Science Teaching

    ERIC Educational Resources Information Center

    Hennessy, Sara; Wishart, Jocelyn; Whitelock, Denise; Deaney, Rosemary; Brawn, Richard; la Velle, Linda; McFarlane, Angela; Ruthven, Kenneth; Winterbottom, Mark

    2007-01-01

    The two separate projects described have examined how teachers exploit computer-based technologies in supporting learning of science at secondary level. This paper examines how pedagogical approaches associated with these technological tools are adapted to both the cognitive and structuring resources available in the classroom setting. Four…

  15. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    PubMed Central

    2011-01-01

    Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), R package MCMCglmm and SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using basically two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (provided there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice for a particular implementation may largely depend on the desired flexibility, and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
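
    To make the model being compared concrete, the sketch below simulates data from a logistic random-intercept ("random center effect") model in plain NumPy; it does not reproduce any of the package implementations compared in the study, and all parameter values are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sketch of the fitted model: logit(p_ij) = b0 + b1*x_ij + u_j,
    # with one random intercept u_j per center. All numbers are made up.
    n_centers, patients_per_center = 50, 40
    sigma_u = 0.8                                   # SD of the center random effect
    u = rng.normal(0.0, sigma_u, n_centers)         # one intercept per center
    beta0, beta_age = -0.5, 0.03                    # fixed effects

    center = np.repeat(np.arange(n_centers), patients_per_center)
    age = rng.normal(35, 12, center.size)
    eta = beta0 + beta_age * age + u[center]        # linear predictor
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, p)                          # dichotomized outcome
    ```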

  16. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
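
    As a rough illustration of the region-based statistical term, the sketch below fits per-region intensity densities and forms a pointwise log-likelihood ratio. Scikit-learn's GaussianMixture is used here as a stand-in for the paper's mixture-of-mixtures model, and the number of components is an arbitrary assumption.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def region_density_models(intensities_fg, intensities_bg, n_components=3):
        """Fit foreground/background intensity density models.

        Sketch only: a single Gaussian mixture per region stands in for the
        paper's mixture-of-mixtures model.
        """
        fg = GaussianMixture(n_components).fit(intensities_fg.reshape(-1, 1))
        bg = GaussianMixture(n_components).fit(intensities_bg.reshape(-1, 1))
        return fg, bg

    def region_energy(intensities, fg, bg):
        # Pointwise log-likelihood ratio: positive where foreground is more
        # plausible; usable as the region term driving a level-set evolution.
        x = np.asarray(intensities, dtype=float).reshape(-1, 1)
        return fg.score_samples(x) - bg.score_samples(x)
    ```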

  17. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
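
    The sketch below is a loose analogue of the uniform positive/negative level-set shifts described above, implemented as a signed-distance "opening": structures thinner than roughly 2h do not survive shifting the interface inward by h and growing it back. It assumes both phases are present in the mask and does not reproduce the authors' conservative sharp-interface implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def signed_distance(mask):
        """Approximate signed distance to the interface of a binary phase mask
        (negative inside the phase, positive outside). Assumes both phases
        are present in the mask."""
        inside = ndimage.distance_transform_edt(mask)
        outside = ndimage.distance_transform_edt(~mask)
        return outside - inside

    def separate_scales(mask, h):
        """Split a phase mask into resolved and non-resolvable parts.

        Shift the level set inward by h (erosion), then grow it back by h
        (dilation): structures thinner than ~2h do not survive the round
        trip and are flagged as non-resolvable."""
        core = signed_distance(mask) <= -h
        resolved = ndimage.distance_transform_edt(~core) <= h
        unresolved = mask & ~resolved
        return resolved, unresolved
    ```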

  18. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level, and does not directly indicate which SNP(s) in an identified set actually drive the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, and adapt the KNIFE procedure to genetic association studies and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.

  19. Level-set simulations of soluble surfactant driven flows

    NASA Astrophysics Data System (ADS)

    Cleret de Langavant, Charles; Guittet, Arthur; Theillard, Maxime; Temprano-Coleto, Fernando; Gibou, Frédéric

    2017-11-01

    We present an approach to simulate the diffusion, advection and adsorption-desorption of a material quantity defined on an interface in two and three spatial dimensions. We use a level-set approach to capture the interface motion and a Quad/Octree data structure to efficiently solve the equations describing the underlying physics. Coupling with a Navier-Stokes solver enables the study of the effect of soluble surfactants that locally modify the parameters of surface tension on different types of flows. The method is tested on several benchmarks and applied to three typical examples of flows in the presence of surfactant: a bubble in a shear flow, the well-known phenomenon of tears of wine, and the Landau-Levich coating problem.

  20. A study on the application of topic models to motif finding algorithms.

    PubMed

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we try to apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, which is a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as text documents, and the k-mers contained in them as words, from which a correlated topic model (CTM) is built and its perplexity iteratively reduced. We also used the perplexity measurement of CTMs to improve our previous algorithm based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods using several coefficients both at nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at nucleotide and site level, and in overall performance at site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfactory enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
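
    The sketch below illustrates the "sequences as documents, k-mers as words" idea. Plain LDA from scikit-learn is used as a stand-in because it exposes a perplexity score and scikit-learn has no correlated topic model; the sequences, k and topic count are made-up assumptions.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy sequences standing in for real promoter regions.
    sequences = ["ACGTACGTTGCA", "TTGCAACGTACG", "GGGTACGTTTTA"]
    k = 6

    # Count every character k-mer in each sequence (sequence = document, k-mer = word).
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
    doc_term = vectorizer.fit_transform(sequences)

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)
    print(lda.perplexity(doc_term))  # lower perplexity = better fit to the k-mer "corpus"
    ```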

  1. The Capability Approach: A Critical Review of Its Application in Health Economics.

    PubMed

    Karimi, Milad; Brazier, John; Basarir, Hasan

    The capability approach is an approach to assessing well-being developed by Amartya Sen. Interest in this approach has resulted in several attempts to develop questionnaires to measure and value capability at an individual level in health economics. This commentary critically reviews the ability of these questionnaires to measure and value capability. It is argued that the method used in the questionnaires to measure capability will result in a capability set that is an inaccurate description of the individual's true capability set. The measured capability set will either represent only one combination and ignore the value of choice in the capability set, or represent one combination that is not actually achievable by the individual. In addition, existing methods of valuing capability may be inadequate because they do not consider that capability is a set. It may be practically more feasible to measure and value capability approximately rather than directly. Suggestions are made on how to measure and value an approximation to capability, but further research is required to implement the suggestions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  2. Rule-governed Approaches to Physics--Newton's Third Law.

    ERIC Educational Resources Information Center

    Maloney, David P.

    1984-01-01

    Describes an approach to assessing the use of rules in solving problems related to Newton's third law of motion. Discusses the problems used, method of questioning, scoring of problem sets, and a general overview of the use of the technique in aiding the teacher in dealing with student's conceptual levels. (JM)

  3. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
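
    The core update in such schemes is the level-set equation phi_t + v|grad phi| = 0, advanced here with a simple explicit step. The construction of the velocity from the shape derivative (GB or LM) is not reproduced; central differences and explicit Euler time stepping are simplifying assumptions.

    ```python
    import numpy as np

    def evolve_level_set(phi, velocity, dt):
        """One explicit update of phi_t + v |grad phi| = 0.

        Minimal sketch: `velocity` plays the role of the shape-derivative-based
        speed on the facies boundary; its construction from the adjoint or
        Levenberg-Marquardt step is not shown here.
        """
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        return phi - dt * velocity * grad_norm
    ```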

  4. A projection-free method for representing plane-wave DFT results in an atom-centered basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunnington, Benjamin D.; Schmidt, J. R., E-mail: schmidt@chem.wisc.edu

    2015-09-14

    Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic level insight into bulk and surface structures. Yet, the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches.

  5. The categorical structure of semantic memory for famous people: a new approach using release from proactive interference.

    PubMed

    Darling, Stephen; Valentine, Tim

    2005-05-01

    Memory for familiar people is essential to understand their identity and guide social interaction. Nevertheless, we know surprisingly little about the structure of such memory. Previous research has assumed that semantic memory for people has a categorical structure, but recently it was proposed that memory for people consists only of associations and lacks any categorical structure. Four experiments are reported that use a novel approach by adapting the 'release from proactive interference' (RPI) methodology for use with lists of famous names. Proactive interference occurs when items presented on successive trials are drawn from the same category. Recall can improve following a change to a different category. Sets of names were selected relating to aspects previously demonstrated, on the basis of reaction time data, to form a category (occupation) and a property (nationality) of celebrities (Johnston & Bruce, 1990). RPI was observed for a change at both levels of representation but was only present without explicitly cueing the change of set when the stimuli differed at the category level. At the property level, RPI was only evident when change of set was explicitly cued. RPI was absent at the set change in a novel, ad hoc distinction suggesting that the effect reflected the underlying memory structure.

  6. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    PubMed Central

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method exploits the different features available in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional level set Chan-Vese (C-V) model with a new initial curve obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
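
    For reference, the sketch below gives the standard definitions of the spectral indices combined in the difference image (NDVI, NBR) and a simple change-vector magnitude; how the paper weights and fuses them into its final difference image is not reproduced here.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red + 1e-12)

    def nbr(nir, swir):
        """Normalized Burn Ratio."""
        return (nir - swir) / (nir + swir + 1e-12)

    def change_vector_magnitude(pre_bands, post_bands):
        """Change Vector Analysis: per-pixel magnitude of the spectral change
        vector between pre- and post-fire band stacks of equal shape."""
        diff = np.stack(post_bands) - np.stack(pre_bands)
        return np.sqrt((diff ** 2).sum(axis=0))
    ```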

  7. An integrated set of UNIX based system tools at control room level

    NASA Astrophysics Data System (ADS)

    Potepan, F.; Scafuri, C.; Bortolotto, C.; Surace, G.

    1994-12-01

    The design effort of providing a simple point-and-click approach to equipment access has led to the definition and realization of a modular set of software tools to be used at the ELETTRA control room level. Point-to-point equipment access requires neither programming nor specific knowledge of the control system architecture. The development and integration of communication, graphic, editing and global database modules are described in depth, followed by a report of their use in the first commissioning period.

  8. Hybrid Wavelet De-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series

    NASA Astrophysics Data System (ADS)

    WANG, D.; Wang, Y.; Zeng, X.

    2017-12-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influences at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when the extreme events are included within a time series.
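
    A minimal sketch of the wavelet de-noising (WD) step, assuming the PyWavelets library, a db4 wavelet and a universal soft threshold; none of these specific choices, nor the RSPA forecasting step, are taken from the paper.

    ```python
    import numpy as np
    import pywt  # PyWavelets; using this library is an assumption, not named in the abstract

    def wavelet_denoise(series, wavelet="db4", level=3):
        """Wavelet de-noising: decompose, soft-threshold the detail coefficients,
        reconstruct. The universal threshold and db4 wavelet are illustrative
        assumptions."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level estimate
        thr = sigma * np.sqrt(2 * np.log(len(series)))            # universal threshold
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[: len(series)]
    ```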

  9. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.

  10. A Statistical Approach to Establishing Subsystem Environmental Test Specifications

    NASA Technical Reports Server (NTRS)

    Keegan, W. B.

    1974-01-01

    Results are presented of a research task to evaluate structural responses at various subsystem mounting locations during spacecraft level test exposures to the environments of mechanical shock, acoustic noise, and random vibration. This statistical evaluation is presented in the form of recommended subsystem test specifications for these three environments as normalized to a reference set of spacecraft test levels and are thus suitable for extrapolation to a set of different spacecraft test levels. The recommendations are dependent upon a subsystem's mounting location in a spacecraft, and information is presented on how to determine this mounting zone for a given subsystem.

  11. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
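
    The fitting step can be illustrated by least-squares fitting of 12-6 Lennard-Jones parameters to reference interaction energies, as sketched below with synthetic placeholder data; in the paper the reference energies come from the DCP-corrected DFT calculations described above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lj_energy(r, epsilon, sigma):
        """12-6 Lennard-Jones pair interaction energy."""
        sr6 = (sigma / r) ** 6
        return 4.0 * epsilon * (sr6 ** 2 - sr6)

    # Synthetic placeholder "reference" energies along a dissociation curve;
    # these are not values from the paper.
    r_ref = np.linspace(3.4, 8.0, 20)
    e_ref = lj_energy(r_ref, 0.29, 3.7) + 0.002 * np.random.default_rng(1).normal(size=r_ref.size)

    popt, _ = curve_fit(lj_energy, r_ref, e_ref, p0=[0.3, 3.5])
    print("fitted epsilon, sigma:", popt)
    ```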

  12. Modeling relationships between landscape-level attributes and snorkel counts of chinook salmon and steelhead parr in Idaho

    Treesearch

    William L. Thompson; Danny C. Lee

    2000-01-01

    Knowledge of environmental factors impacting anadromous salmonids in their freshwater habitats, particularly at large spatial scales, may be important for restoring them to previously recorded levels in the northwestern United States. Consequently, we used existing data sets and an information-theoretic approach to model landscape-level attributes and snorkel count...

  13. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches are used to assess the model fit in multilevel structural equation model. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  14. Environmental Approaches to Prevention in College Settings

    PubMed Central

    Saltz, Robert F.

    2011-01-01

    Because of concerns regarding drinking among college students and its harmful consequences, numerous prevention efforts have been targeted to this population. These include individual-level and community-level interventions, as well as other measures (e.g., online approaches). Community-level interventions whose effects have been evaluated in college populations include programs that were developed for the community at large as well as programs aimed specifically at college students, such as A Matter of Degree, the Southwest DUI Enforcement Project, Neighborhoods Engaging With Students, the Study to Prevent Alcohol-Related Consequences, and Safer California Universities. Evaluations of these programs have found evidence of their effectiveness in reducing college drinking and related consequences. The most effective approaches to reducing alcohol consumption among college students likely will blend individual-, group-, campus-, and community-level prevention components. PMID:22330219

  15. Stabilised finite-element methods for solving the level set equation with mass conservation

    NASA Astrophysics Data System (ADS)

    Kabirou Touré, Mamadou; Fahsi, Adil; Soulaïmani, Azzeddine

    2016-01-01

    Finite-element methods are studied for solving moving interface flow problems using the level set approach and a stabilised variational formulation proposed in Touré and Soulaïmani (2012; Touré and Soulaïmani To appear in 2016), coupled with a level set correction method. The level set correction is intended to enhance the mass conservation satisfaction property. The stabilised variational formulation (Touré and Soulaïmani 2012; Touré and Soulaïmani, To appear in 2016) constrains the level set function to remain close to the signed distance function, while the mass conservation is a correction step which enforces the mass balance. The eXtended finite-element method (XFEM) is used to take into account the discontinuities of the properties within an element. XFEM is applied to solve the Navier-Stokes equations for two-phase flows. The numerical methods are numerically evaluated on several test cases such as time-reversed vortex flow, a rigid-body rotation of Zalesak's disc, sloshing flow in a tank, a dam-break over a bed, and a rising bubble subjected to buoyancy. The numerical results show the importance of satisfying global mass conservation to accurately capture the interface position.

  16. From politics to policy: a new payment approach in Medicare Advantage.

    PubMed

    Berenson, Robert A

    2008-01-01

    While the Medicare Advantage program's future remains contentious politically, the Medicare Payment Advisory Commission's (MedPAC's) recommended policy of financial neutrality at the local level between private plans and traditional Medicare ignores local market dynamics in important ways. An analysis correlating plan bids against traditional Medicare's local spending levels likely would provide an alternative method of setting benchmarks, by producing a blend of local and national rates. A result would be that the rural and lower-cost urban "floor counties" would have benchmarks below currently inflated levels but above what financial neutrality at the local level--MedPAC's approach--would produce.

  17. Application of supervised and unsupervised tools to direct effects-based monitoring efforts in the Great Lakes areas of concern: Maumee River, Ohio

    EPA Science Inventory

    Effects-based approaches that employ molecular and tissue level tools to detect and characterize biological responses to contaminants can be a useful complement to chemical monitoring approaches. When the source/type of contamination is known, a predetermined, or supervised, set...

  18. Approaches to Internet Searching: An Analysis of Student in Grades 2 to 12.

    ERIC Educational Resources Information Center

    Lien, Cynthia

    2000-01-01

    Examines Internet search approaches by 123 students, and analyzes search methodologies relative to search successes. Presents three findings: (1) student experience with the Internet is closely correlated with ability to explore alternative search methods; (2) student level; and (3) a collaborative work among students in a classroom setting may…

  19. A Model for the Strategic Use of Metacognitive Reading Comprehension Strategies

    ERIC Educational Resources Information Center

    Gómez González, Juan David

    2017-01-01

    This paper describes an approach to developing intermediate level reading proficiency through a strategic and iterative use of a discreet set of tasks that combine some of the more common metacognitive theories and strategies that have been published in the past thirty years. The case for incorporating this composite approach into reading…

  20. A Flexible Approach for the Statistical Visualization of Ensemble Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, K.; Wilson, A.; Bremer, P.

    2009-09-29

    Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.

  1. Identifying arbitrary parameter zonation using multiple level set functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  2. Identifying arbitrary parameter zonation using multiple level set functions

    DOE PAGES

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    2018-03-14

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  3. Identifying arbitrary parameter zonation using multiple level set functions

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan

    2018-07-01

    In this paper, we extended the analytical level set method [1,2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.
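
    One common way to encode a multi-material zonation with several level-set functions is by the sign pattern of the functions, as sketched below; whether the paper uses this particular encoding, and the example permeability assignment, are assumptions.

    ```python
    import numpy as np

    def zonation_from_level_sets(level_sets):
        """Map multiple level-set functions to a material-zone index.

        Sketch of one common encoding: with m level-set functions, each grid
        cell's zone index is the integer whose bits are the signs of the
        functions, giving up to 2**m materials.
        """
        signs = [(phi > 0).astype(int) for phi in level_sets]
        zone = np.zeros_like(signs[0])
        for bit, s in enumerate(signs):
            zone |= s << bit
        return zone

    def permeability_field(zone, zone_perms):
        """Assign each cell the permeability of its zone (zone_perms: dict zone -> value)."""
        k = np.zeros(zone.shape, dtype=float)
        for z, perm in zone_perms.items():
            k[zone == z] = perm
        return k
    ```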

  4. Addressing HIV in the School Setting: Application of a School Change Model

    ERIC Educational Resources Information Center

    Walsh, Audra St. John; Chenneville, Tiffany

    2013-01-01

    This paper describes best practices for responding to youth with human immunodeficiency virus (HIV) in the school setting through the application of a school change model designed by the World Health Organization. This model applies a whole school approach and includes four levels that span the continuum from universal prevention to direct…

  5. The use of complete sets of orthogonal operators in spectroscopic studies

    NASA Astrophysics Data System (ADS)

    Raassen, A. J. J.; Uylings, P. H. M.

    1996-01-01

    Complete sets of orthogonal operators are used to calculate eigenvalues and eigenvector compositions in complex spectra. The latter are used to transform the LS-transition matrix into realistic intermediate coupling transition probabilities. Calculated transition probabilities for some close lying levels in Ni V and Fe III illustrate the power of the complete orthogonal operator approach.

  6. Preschool Teachers' Exposure to Classroom Noise

    ERIC Educational Resources Information Center

    Grebennikov, Leonid

    2006-01-01

    This research examined exposure to classroom noise of 25 full-time teaching staff in 14 preschool settings located across Western Sydney. The results indicated that one teacher exceeded the maximum permissible level of daily noise exposure for employees under the health and safety legislation. Three staff approached this level and 92% of teachers…

  7. Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction

    NASA Astrophysics Data System (ADS)

    Terlecki, Pawel; Walczak, Krzysztof

    This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and local approaches have been tested and discussed for dense and synthetically generated sparse datasets.
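
    The item-aggregation step can be sketched as a greedy coloring of the item conflict graph, for example with NetworkX as below; the coloring strategy and the toy items/conflicts are assumptions, and the local (per-transaction) variant and maintenance strategy described in the paper are not reproduced.

    ```python
    import networkx as nx

    def aggregate_items(items, conflicts):
        """Group items by greedy coloring of the conflict graph.

        Items that share a color never conflict, so each color class can be
        merged into one aggregated attribute of the relational table.
        """
        g = nx.Graph()
        g.add_nodes_from(items)
        g.add_edges_from(conflicts)  # an edge marks two conflicting items
        coloring = nx.coloring.greedy_color(g, strategy="largest_first")
        groups = {}
        for item, color in coloring.items():
            groups.setdefault(color, []).append(item)
        return list(groups.values())  # each group becomes one aggregated attribute

    # Example with made-up items and conflicts
    print(aggregate_items(["a", "b", "c", "d"], [("a", "b"), ("b", "c")]))
    ```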

  8. A software technology evaluation program

    NASA Technical Reports Server (NTRS)

    Novaes-Card, David N.

    1985-01-01

    A set of quantitative approaches is presented for evaluating software development methods and tools. The basic idea is to generate a set of goals which are refined into quantifiable questions which specify metrics to be collected on the software development and maintenance process and product. These metrics can be used to characterize, evaluate, predict, and motivate. They can be used in an active as well as passive way by learning from analyzing the data and improving the methods and tools based upon what is learned from that analysis. Several examples were given representing each of the different approaches to evaluation. The cost of the approaches varied inversely with the level of confidence in the interpretation of the results.

  9. A level set-based topology optimization method for simultaneous design of elastic structure and coupled acoustic cavity using a two-phase material model

    NASA Astrophysics Data System (ADS)

    Noguchi, Yuki; Yamamoto, Takashi; Yamada, Takayuki; Izui, Kazuhiro; Nishiwaki, Shinji

    2017-09-01

    This paper proposes a level set-based topology optimization method for the simultaneous design of acoustic and structural material distributions. In this study, we develop a two-phase material model, a mixture of an elastic material and an acoustic medium, to represent an elastic structure and an acoustic cavity by controlling a volume fraction parameter. In the proposed model, boundary conditions at the two-phase material boundaries are satisfied naturally, avoiding the need to express these boundaries explicitly. We formulate a topology optimization problem to minimize the sound pressure level using this two-phase material model and a level set-based method that obtains topologies free from grayscales. The topological derivative of the objective functional is approximately derived using a variational approach and the adjoint variable method and is utilized to update the level set function via a time-evolutionary reaction-diffusion equation. Several numerical examples present optimal acoustic and structural topologies that minimize the sound pressure generated from a vibrating elastic structure.
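
    As a rough illustration of the update step described above, the Python fragment below advances a level set function with a reaction-diffusion equation driven by a precomputed topological derivative field. The periodic finite-difference Laplacian, the clipping, and the parameter names are assumptions for the sketch, not the authors' scheme.

        import numpy as np

        def reaction_diffusion_step(phi, topological_derivative, tau, dt):
            # phi_t = -(topological derivative) + tau * Laplacian(phi):
            # the reaction term drives the design toward lower objective values,
            # while the diffusion term (weighted by tau) regularizes the boundary.
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
            phi_new = phi + dt * (-topological_derivative + tau * lap)
            return np.clip(phi_new, -1.0, 1.0)  # keep phi bounded, as in many such schemes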

  10. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolve.

  11. Nutrient intake values (NIVs): a recommended terminology and framework for the derivation of values.

    PubMed

    King, Janet C; Vorster, Hester H; Tome, Daniel G

    2007-03-01

    Although most countries and regions around the world set recommended nutrient intake values for their populations, there is no standardized terminology or framework for establishing these standards. Different terms used for various components of a set of dietary standards are described in this paper and a common set of terminology is proposed. The recommended terminology suggests that the set of values be called nutrient intake values (NIVs) and that the set be composed of three different values. The average nutrient requirement (ANR) reflects the median requirement for a nutrient in a specific population. The individual nutrient level (INLx) is the recommended level of nutrient intake for all healthy people in the population, which is set at a certain level x above the mean requirement. For example, a value set at 2 standard deviations above the mean requirement would cover the needs of 98% of the population and would be INL98. The third component of the NIVs is an upper nutrient level (UNL), which is the highest level of daily nutrient intake that is likely to pose no risk of adverse health effects for almost all individuals in a specified life-stage group. The proposed framework for deriving a set of NIVs is based on a statistical approach for determining the midpoint of a distribution of requirements for a set of nutrients in a population (the ANR), the standard deviation of the requirements, and an individual nutrient level that assures health at some point above the mean, e.g., 2 standard deviations. Ideally, a second set of distributions of risk of excessive intakes is used as the basis for a UNL.
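
    A small numerical illustration of the framework above, assuming a normal distribution of requirements (an assumption for the example; the text only specifies a requirement distribution with a mean and standard deviation): the individual nutrient level is then the quantile of that distribution at the chosen coverage.

        from scipy.stats import norm

        def individual_nutrient_level(anr, sd, coverage=0.98):
            # INL_x: intake covering the needs of a fraction `coverage` of the
            # population; coverage=0.98 corresponds to roughly ANR + 2*SD (INL98).
            return anr + norm.ppf(coverage) * sd

        # Hypothetical nutrient with ANR = 100 units/day and SD = 15 units/day:
        print(round(individual_nutrient_level(100, 15), 1))  # about 130.8 units/day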

  12. Action recognition using mined hierarchical compound features.

    PubMed

    Gilbert, Andrew; Illingworth, John; Bowden, Richard

    2011-05-01

    The field of Action Recognition has seen a large increase in activity in recent years. Much of the progress has been through incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This is to ensure that the problem is tractable; however, this can sacrifice recognition accuracy as it cannot be assumed that the optimum features in terms of class discrimination are obtained from this approach. In contrast, we propose to initially use an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining. This allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. As the compound features are constructed and selected based upon their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets, the popular KTH data set to provide a comparison with other state-of-the-art approaches, the Multi-KTH data set to illustrate performance at simultaneous multiaction classification, despite no explicit localization information provided during training. Finally, the recent Hollywood and Hollywood2 data sets provide challenging complex actions taken from commercial movie sequences. For all four data sets, the proposed hierarchical approach outperforms all other methods reported thus far in the literature and can achieve real-time operation.

  13. Genetic programming approach to evaluate complexity of texture images

    NASA Astrophysics Data System (ADS)

    Ciocca, Gianluigi; Corchs, Silvia; Gasparini, Francesca

    2016-11-01

    We adopt genetic programming (GP) to define a measure that can predict complexity perception of texture images. We perform psychophysical experiments on three different datasets to collect data on the perceived complexity. The subjective data are used for training, validation, and test of the proposed measure. These data are also used to evaluate several possible candidate measures of texture complexity related to both low level and high level image features. We select four of them (namely roughness, number of regions, chroma variance, and memorability) to be combined in a GP framework. This approach allows a nonlinear combination of the measures and could give hints on how the related image features interact in complexity perception. The proposed complexity measure M exhibits Pearson correlation coefficients of 0.890 on the training set, 0.728 on the validation set, and 0.724 on the test set. M outperforms each of all the single measures considered. From the statistical analysis of different GP candidate solutions, we found that the roughness measure evaluated on the gray level image is the most dominant one, followed by the memorability, the number of regions, and finally the chroma variance.

  14. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    NASA Astrophysics Data System (ADS)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the 4 dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.
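
    A toy Python sketch of the region-based (Chan and Vese style) driving force mentioned above is given below; it uses mean intensities inside and outside the zero level set rather than image gradients. The 2D explicit relaxation and the omission of the curvature and delta-function terms are simplifications for illustration, not the authors' 4D implementation.

        import numpy as np

        def chan_vese_speed(image, phi, lam1=1.0, lam2=1.0):
            # Region-based term: compare each voxel with the mean intensity inside
            # (phi > 0) and outside (phi <= 0) the contour; no image gradient needed.
            inside = phi > 0
            c_in, c_out = image[inside].mean(), image[~inside].mean()
            return -lam1 * (image - c_in) ** 2 + lam2 * (image - c_out) ** 2

        def evolve(phi, image, dt=0.1, n_iter=200):
            # Explicit relaxation of the implicit function; a 4D space-time contour
            # would be evolved the same way through a 5D phi array.
            for _ in range(n_iter):
                phi = phi + dt * chan_vese_speed(image, phi)
            return phi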

  15. Muscular dystrophy summer camp: a case study of a non-traditional level I fieldwork using a collaborative supervision model.

    PubMed

    Provident, Ingrid M; Colmer, Maria A

    2013-01-01

    A shortage of traditional medical fieldwork placements has been reported in the United States. Alternative settings are being sought to meet the Accreditation Standards for Level I fieldwork. This study was designed to examine and report the outcomes of an alternative pediatric camp setting, using a group model of supervision to fulfill the requirements for Level I fieldwork. Participants were thirty-seven students from two Pennsylvania OT schools. Two cohorts of students were studied over a two-year period using multiple methods of retrospective review and data collection. Students supervised in a group model experienced positive outcomes, including opportunities to deliver client-centered care and to understand the role of caregiving for children with disabilities. The use of a collaborative model of fieldwork education at a camp setting has resulted in a viable approach for the successful attainment of Level I fieldwork objectives for multiple students under a single supervisor.

  16. Bayesian Model Development for Analysis of Open Source Information to Support the Assessment of Nuclear Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastelum, Zoe N.; Whitney, Paul D.; White, Amanda M.

    2013-07-15

    Pacific Northwest National Laboratory has spent several years researching, developing, and validating large Bayesian network models to support integration of open source data sets for nuclear proliferation research. Our current work focuses on generating a set of interrelated models for multi-source assessment of nuclear programs, as opposed to a single comprehensive model. By using this approach, we can break down the models to cover logical sub-problems that can utilize different expertise and data sources. This approach allows researchers to utilize the models individually or in combination to detect and characterize a nuclear program and identify data gaps. The models operate at various levels of granularity, covering a combination of state-level assessments with more detailed models of site or facility characteristics. This paper will describe the current open source-driven, nuclear nonproliferation models under development, the pros and cons of the analytical approach, and areas for additional research.

  17. Spatial clustering of average risks and risk trends in Bayesian disease mapping.

    PubMed

    Anderson, Craig; Lee, Duncan; Dean, Nema

    2017-01-01

    Spatiotemporal disease mapping focuses on estimating the spatial pattern in disease risk across a set of nonoverlapping areal units over a fixed period of time. The key aim of such research is to identify areas that have a high average level of disease risk or where disease risk is increasing over time, thus allowing public health interventions to be focused on these areas. Such aims are well suited to the statistical approach of clustering, and while much research has been done in this area in a purely spatial setting, only a handful of approaches have focused on spatiotemporal clustering of disease risk. Therefore, this paper outlines a new modeling approach for clustering spatiotemporal disease risk data, by clustering areas based on both their mean risk levels and the behavior of their temporal trends. The efficacy of the methodology is established by a simulation study, and is illustrated by a study of respiratory disease risk in Glasgow, Scotland. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A Practical Guide for the Formulation of Propositions in the Bayesian Approach to DNA Evidence Interpretation in an Adversarial Environment.

    PubMed

    Gittelson, Simone; Kalafut, Tim; Myers, Steven; Taylor, Duncan; Hicks, Tacha; Taroni, Franco; Evett, Ian W; Bright, Jo-Anne; Buckleton, John

    2016-01-01

    The interpretation of complex DNA profiles is facilitated by a Bayesian approach. This approach requires the development of a pair of propositions: one aligned to the prosecution case and one to the defense case. This note explores the issue of proposition setting in an adversarial environment by a series of examples. A set of guidelines generalize how to formulate propositions when there is a single person of interest and when there are multiple individuals of interest. Additional explanations cover how to handle multiple defense propositions, relatives, and the transition from subsource level to activity level propositions. The propositions depend on case information and the allegations of each of the parties. The prosecution proposition is usually known. The authors suggest that a sensible proposition is selected for the defense that is consistent with their stance, if available, and consistent with a realistic defense if their position is not known. © 2015 American Academy of Forensic Sciences.

  19. Cost-effective priorities for global mammal conservation.

    PubMed

    Carwardine, Josie; Wilson, Kerrie A; Ceballos, Gerardo; Ehrlich, Paul R; Naidoo, Robin; Iwamura, Takuya; Hajkowicz, Stefan A; Possingham, Hugh P

    2008-08-12

    Global biodiversity priority setting underpins the strategic allocation of conservation funds. In identifying the first comprehensive set of global priority areas for mammals, Ceballos et al. [Ceballos G, Ehrlich PR, Soberón J, Salazar I, Fay JP (2005) Science 309:603-607] found much potential for conflict between conservation and agricultural human activity. This is not surprising because, like other global priority-setting approaches, they set priorities without socioeconomic objectives. Here we present a priority-setting framework that seeks to minimize the conflicts and opportunity costs of meeting conservation goals. We use it to derive a new set of priority areas for investment in mammal conservation based on (i) agricultural opportunity cost and biodiversity importance, (ii) current levels of international funding, and (iii) degree of threat. Our approach achieves the same biodiversity outcomes as Ceballos et al.'s while reducing the opportunity costs and conflicts with agricultural human activity by up to 50%. We uncover shortfalls in the allocation of conservation funds in many threatened priority areas, highlighting a global conservation challenge.

  20. Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2018-01-01

    Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. How have systematic priority setting approaches influenced policy making? A synthesis of the current literature.

    PubMed

    Kapiriri, Lydia; Razavi, Donya

    2017-09-01

    There is a growing body of literature on systematic approaches to healthcare priority setting from various countries and different levels of decision making. This paper synthesizes the current literature in order to assess the extent to which program budgeting and marginal analysis (PBMA), burden of disease & cost-effectiveness analysis (BOD/CEA), multi-criteria decision analysis (MCDA), and accountability for reasonableness (A4R), are reported to have been institutionalized and influenced policy making and practice. We searched for English language publications on health care priority setting approaches (2000-2017). Our sources of literature included PubMed and Ovid databases (including Embase, Global Health, Medline, PsycINFO, EconLit). Of the four approaches PBMA and A4R were commonly applied in high income countries while BOD/CEA was exclusively applied in low income countries. PBMA and BOD/CEA were most commonly reported to have influenced policy making. The explanations for limited adoption of an approach were related to its complexity, poor policy maker understanding and resource requirements. While systematic approaches have the potential to improve healthcare priority setting; most have not been adopted in routine policy making. The identified barriers call for sustained knowledge exchange between researchers and policy-makers and development of practical guidelines to ensure that these frameworks are more accessible, applicable and sustainable in informing policy making. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Implementing a Public Health Approach to Addressing Mental Health Needs in a University Setting: Lessons and Challenges

    ERIC Educational Resources Information Center

    Parcover, Jason; Mays, Sally; McCarthy, Amy

    2015-01-01

    The mental health needs of college students are placing increasing demands on counseling center resources, and traditional outreach efforts may be outdated or incomplete. The public health model provides an approach for reaching more students, decreasing stigma, and addressing mental health concerns before they reach crisis levels. Implementing a…

  3. Investigating the Effects of an Experimental Approach to Comprehension Instruction within a Literacy Clinic

    ERIC Educational Resources Information Center

    Ortlieb, Evan; McDowell, F. D.

    2015-01-01

    Reading comprehension levels of elementary students have not significantly improved in the 21st century and as a result, the need for systematic and intensive reading interventions is as high as ever. Literacy clinics are an ideal setting for struggling readers to experience success through the implementation of a cyclical approach to individual…

  4. Rational Density Functional Selection Using Game Theory.

    PubMed

    McAnanama-Brereton, Suzanne; Waller, Mark P

    2018-01-22

    Theoretical chemistry has a paradox of choice due to the availability of a myriad of density functionals and basis sets. Traditionally, a particular density functional is chosen on the basis of the level of user expertise (i.e., subjective experiences). Herein we circumvent the user-centric selection procedure by describing a novel approach for objectively selecting a particular functional for a given application. We achieve this by employing game theory to identify optimal functional/basis set combinations. A three-player (accuracy, complexity, and similarity) game is devised, through which Nash equilibrium solutions can be obtained. This approach has the advantage that results can be systematically improved by enlarging the underlying knowledge base, and the deterministic selection procedure mathematically justifies the density functional and basis set selections.

  5. CrIS/ATMS Retrievals Using the Latest AIRS/AMSU Retrieval Methodology

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis C.; Blaisdell, John; Iredell, Lena

    2015-01-01

    This research is being done under the NPP Science Team proposal "Analysis of CrIS/ATMS Using an AIRS Version 6-like Retrieval Algorithm." Objective: Generate a long-term CrIS/ATMS level-3 data set that is consistent with that of AIRS/AMSU. Approach: Adapt the currently operational AIRS Science Team Version-6 retrieval algorithm, or an improved version of it, for use with CrIS/ATMS data. Metric: Generate monthly mean level-3 CrIS/ATMS climate data sets and evaluate the results by comparing monthly mean AIRS/AMSU and CrIS/ATMS products and, more significantly, their inter-annual differences and, eventually, anomaly time series. The goal is consistency between the AIRS/AMSU and CrIS/ATMS climate data sets.

  6. Designs for Evaluating the Community-Level Impact of Comprehensive Prevention Programs: Examples from the CDC Centers of Excellence in Youth Violence Prevention.

    PubMed

    Farrell, Albert D; Henry, David; Bradshaw, Catherine; Reischl, Thomas

    2016-04-01

    This article discusses the opportunities and challenges of developing research designs to evaluate the impact of community-level prevention efforts. To illustrate examples of evaluation designs, we describe six projects funded by the Centers for Disease Control and Prevention to evaluate multifaceted approaches to reduce youth violence in high-risk communities. Each of these projects was designed to evaluate the community-level impact of multiple intervention strategies to address individual and contextual factors that place youth at risk for violent behavior. Communities differed across projects in their setting, size, and how their boundaries were defined. Each project is using multiple approaches to compare outcomes in one or more intervention communities to those in comparison communities. Five of the projects are using comparative interrupted time-series designs to compare outcomes in an intervention community to matched comparison communities. A sixth project is using a multiple baseline design in which the order and timing of intervention activities is randomized across three communities. All six projects are also using regression point displacement designs to compare outcomes within intervention communities to those within broader sets of similar communities. Projects are using a variety of approaches to assess outcomes including archival records, surveys, and direct observations. We discuss the strengths and weaknesses of the designs of these projects and illustrate the challenges of designing high-quality evaluations of comprehensive prevention approaches implemented at the community level.

  7. Relative and Geocentric Sea Level Rise Along the U.S. West Coast

    NASA Astrophysics Data System (ADS)

    Burgette, R. J.; Watson, C. S.

    2015-12-01

    The rate of sea level change relative to the land along the West Coast of the U.S. varies over a range of +5 to -2 mm/yr, as observed across the set of long-running tide gauges. We analyze tide gauge data in a network approach that accounts for temporal and spatial correlations in the time series of water levels observed at the stations. This analysis yields a set of rate estimates and realistic uncertainties that are minimally affected by varying durations of observations. The analysis has the greatest impact for tide gauges with short records, as the adjusted rate uncertainties for tide gauges with 2 to 3 decades of data approach those estimated from unadjusted century-scale time series. We explore the sources of the wide range of observed relative sea level rates through comparison with: 1) estimated vertical deformation rates derived from repeated leveling and GPS, 2) relative sea level change predicted from models of glacial isostatic adjustment, and 3) geocentric sea level rates estimated from satellite altimetry and century-scale reconstructions. Tectonic deformation is the dominant signal in the relative sea level rates along the Cascadia portion of the coast, and is consistent with along-strike variation in locking behavior on the plate interface. Rates of vertical motion are lower along the transform portion of the plate boundary and include anthropogenic effects, but there are significant tectonic signals, particularly in the western Transverse Ranges of California where the crust is shortening across reverse faults. Preliminary analysis of different strategies for estimating the magnitude of geocentric sea level rise suggests significant discrepancies between approaches. We will examine the implications of these discrepancies for understanding the process of regional geocentric sea level rise in the northeastern Pacific Ocean, and associated projected impacts.

  8. Computation of an ESD-induced E-field Environment and Definition of a Current Injector Test Set-up at Equipment Level

    NASA Astrophysics Data System (ADS)

    Marque, J. P.; Issac, F.; Parmantier, J. P.; Bertuol, S.

    1998-11-01

    The ESD-induced electromagnetic field on a S/C is computed using the 3D PIC code GEODE. Typical E-field waveforms are deduced and a new susceptibility test at equipment level is proposed as an alternative to the plane wave approach.

  9. Affective Management Strategies for Behavior Disordered Students--Elementary and Secondary Levels.

    ERIC Educational Resources Information Center

    Burkholder, Lynn D.; And Others

    The Positive Education Program (PEP) in Cuyahoga, Ohio, incorporates a token economy and group process approach into the daily school routine for emotionally disturbed and behaviorally handicapped students. At elementary and secondary levels, minimum rules and expectations as well as privileges awarded for behaviors are clearly set forth. The…

  10. Experiments on automatic classification of tissue malignancy in the field of digital pathology

    NASA Astrophysics Data System (ADS)

    Pereira, J.; Barata, R.; Furtado, Pedro

    2017-06-01

    Automated analysis of histological images helps diagnose and further classify breast cancer. Fully automated approaches can be used to pinpoint images for further analysis by the medical doctor. Tissue images, however, are especially challenging for both manual and automated approaches because of mixed patterns and textures, where malignant regions are sometimes difficult to detect unless they are in very advanced stages. Some of the major challenges are related to irregular and very diffuse patterns, as well as the difficulty of defining winning features and classifier models. Although the diffuse nature of the tissue also makes it hard to segment images correctly into regions, it is still crucial to compute low-level features over individual regions rather than over the whole image, and to select the features with the best outcomes. In this paper we report on our experiments building a region classifier with a simple subspace division and a feature selection model that improves results over image-wide and/or limited feature sets. Experimental results show modest accuracy for a set of classifiers applied over the whole image, while the combination of image division, per-region extraction of low-level features, feature selection, and a neural network classifier achieved the best levels of accuracy for the dataset and settings used in the experiments. Future work involves deep learning techniques, adding structural semantics, and embedding the approach as a tumor-finding helper in a practical medical imaging application.

  11. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm, is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases. Because this descent is local, the inversion result depends on the initial guess field, and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is taken as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adopted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach can effectively identify arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones.
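
    The nesting described above (a GA searching over initial guess fields, with each fitness evaluation being a full level-set inversion) can be sketched in a few lines of Python. The selection/mutation scheme, the parameter encoding, and the placeholder run_level_set_inversion routine are hypothetical; in the study the fitness evaluations would be distributed in parallel and combined with simulated annealing.

        import numpy as np

        def genetic_search(run_level_set_inversion, n_params, pop_size=20, n_gen=30, seed=0):
            # Fitness of an individual = minimal total head residual reached by the
            # level-set inversion started from the initial guess field it encodes.
            rng = np.random.default_rng(seed)
            pop = rng.uniform(0.0, 1.0, size=(pop_size, n_params))
            for _ in range(n_gen):
                scores = np.array([run_level_set_inversion(p) for p in pop])
                parents = pop[np.argsort(scores)[: pop_size // 2]]          # selection
                children = parents + rng.normal(0.0, 0.05, parents.shape)   # mutation
                pop = np.vstack([parents, children])
            scores = np.array([run_level_set_inversion(p) for p in pop])
            return pop[np.argmin(scores)]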

  12. Transdisciplinary Research on Cancer-Healing Systems Between Biomedicine and the Maya of Guatemala: A Tool for Reciprocal Reflexivity in a Multi-Epistemological Setting.

    PubMed

    Berger-González, Mónica; Stauffacher, Michael; Zinsstag, Jakob; Edwards, Peter; Krütli, Pius

    2016-01-01

    Transdisciplinarity (TD) is a participatory research approach in which actors from science and society work closely together. It offers means for promoting knowledge integration and finding solutions to complex societal problems, and can be applied within a multiplicity of epistemic systems. We conducted a TD process from 2011 to 2014 between indigenous Mayan medical specialists from Guatemala and Western biomedical physicians and scientists to study cancer. Given the immense cultural gap between the partners, it was necessary to develop new methods to overcome biases induced by ethnocentric behaviors and power differentials. This article describes this intercultural cooperation and presents a method of reciprocal reflexivity (Bidirectional Emic-Etic tool) developed to overcome them. As a result of application, researchers observed successful knowledge integration at the epistemic level, the social-organizational level, and the communicative level throughout the study. This approach may prove beneficial to others engaged in facilitating participatory health research in complex intercultural settings. © The Author(s) 2015.

  13. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges of spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient, recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region growing.
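
    For illustration, the Python sketch below performs one hierarchical-stepwise-optimization style merge step: among all spatially adjacent region pairs, it merges the pair whose merge increases a simple squared-error criterion the least. The criterion and the data structures are assumptions standing in for HSEG's dissimilarity measure, not the actual HSEG/RHSEG code.

        import numpy as np

        def hswo_merge_step(region_means, region_sizes, adjacent_pairs):
            # Find the spatially adjacent pair (i, j) whose merge raises the total
            # within-region squared error the least, then merge j into i.
            best, best_cost = None, np.inf
            for i, j in adjacent_pairs:
                ni, nj = region_sizes[i], region_sizes[j]
                cost = ni * nj / (ni + nj) * (region_means[i] - region_means[j]) ** 2
                if cost < best_cost:
                    best, best_cost = (i, j), cost
            i, j = best
            region_means[i] = (region_sizes[i] * region_means[i] +
                               region_sizes[j] * region_means[j]) / (region_sizes[i] + region_sizes[j])
            region_sizes[i] += region_sizes[j]
            return best, best_cost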

  14. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight into the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  15. The Development of Approaches to Learning and Perceptions of the Teaching-Learning Environment during Bachelor Level Studies and Their Relation to Study Success

    ERIC Educational Resources Information Center

    Asikainen, Henna; Parpala, Anna; Lindblom-Ylänne, Sari; Vanthournout, Gert; Coertjens, Liesje

    2014-01-01

    The aim of the present study is to explore changes both in approaches to learning as well as in students' experiences of the teaching-learning environment and how these changes are related to each other during their Bachelor studies by using a longitudinal data set. The aim is further to explore how students' approaches to learning and their…

  16. Overcoming Barriers in Unhealthy Settings

    PubMed Central

    Lemke, Michael K.; Meissen, Gregory J.; Apostolopoulos, Yorghos

    2016-01-01

    We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes. PMID:28462332

  17. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    PubMed

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.

    PubMed

    Ciofolo, Cybèle; Barillot, Christian

    2009-06-01

    We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.

  19. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including the 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  20. Developing an evidence-based approach to Public Health Nutrition: translating evidence into policy.

    PubMed

    Margetts, B; Warm, D; Yngve, A; Sjöström, M

    2001-12-01

    The aim of this paper is to highlight the importance of an evidence-based approach to the development, implementation and evaluation of policies aimed at improving nutrition-related health in the population. Public Health Nutrition was established to realise a population-level approach to the prevention of the major nutrition-related health problems world-wide. The scope is broad and integrates activity from local, national, regional and international levels. The aim is to inform and develop coherent and effective policies that address the key rate-limiting steps critical to improving nutrition-related public health. This paper sets out the rationale for an evidence-based approach to Public Health Nutrition developed under the umbrella of the European Network for Public Health Nutrition.

  1. Mixed method evaluation of a community-based physical activity program using the RE-AIM framework: practical application in a real-world setting.

    PubMed

    Koorts, Harriet; Gillison, Fiona

    2015-11-06

    Communities are a pivotal setting in which to promote increases in child and adolescent physical activity behaviours. Interventions implemented in these settings require effective evaluation to facilitate translation of findings to wider settings. The aims of this paper are to i) present findings from a RE-AIM evaluation of a community-based physical activity program, and ii) review the methodological challenges faced when applying RE-AIM in practice. A single mixed-methods case study was conducted based on a concurrent triangulation design. Five sources of data were collected via interviews, questionnaires, archival records, documentation and field notes. Evidence was triangulated within RE-AIM to assess individual and organisational-level program outcomes. Inconsistent availability of data and a lack of robust reporting challenged assessment of all five dimensions. Reach, Implementation and setting-level Adoption were less successful, Effectiveness and Maintenance at an individual and organisational level were moderately successful. Only community-level Adoption was highly successful, reflecting the key program goal to provide community-wide participation in sport and physical activity. This research highlighted important methodological constraints associated with the use of RE-AIM in practice settings. Future evaluators wishing to use RE-AIM may benefit from a mixed-method triangulation approach to offset challenges with data availability and reliability.

  2. Resource Letter MPCVW-1: Modeling Political Conflict, Violence, and Wars: A Survey

    NASA Astrophysics Data System (ADS)

    Morgenstern, Ana P.; Velásquez, Nicolás; Manrique, Pedro; Hong, Qi; Johnson, Nicholas; Johnson, Neil

    2013-11-01

    This Resource Letter provides a guide into the literature on modeling and explaining political conflict, violence, and wars. Although this literature is dominated by social scientists, multidisciplinary work is currently being developed in the wake of myriad methodological approaches that have sought to analyze and predict political violence. The works covered herein present an overview of this abundance of methodological approaches. Since there is a variety of possible data sets and theoretical approaches, the level of detail and scope of models can vary quite considerably. The review does not provide a summary of the available data sets, but instead highlights recent works on quantitative or multi-method approaches to modeling different forms of political violence. Journal articles and books are organized in the following topics: social movements, diffusion of social movements, political violence, insurgencies and terrorism, and civil wars.

  3. Simultaneous isoform discovery and quantification from RNA-seq.

    PubMed

    Hiller, David; Wong, Wing Hung

    2013-05-01

    RNA sequencing is a recent technology which has seen an explosion of methods addressing all levels of analysis, from read mapping to transcript assembly to differential expression modeling. In particular the discovery of isoforms at the transcript assembly stage is a complex problem and current approaches suffer from various limitations. For instance, many approaches use graphs to construct a minimal set of isoforms which covers the observed reads, then perform a separate algorithm to quantify the isoforms, which can result in a loss of power. Current methods also use ad-hoc solutions to deal with the vast number of possible isoforms which can be constructed from a given set of reads. Finally, while the need of taking into account features such as read pairing and sampling rate of reads has been acknowledged, most existing methods do not seamlessly integrate these features as part of the model. We present Montebello, an integrated statistical approach which performs simultaneous isoform discovery and quantification by using a Monte Carlo simulation to find the most likely isoform composition leading to a set of observed reads. We compare Montebello to Cufflinks, a popular isoform discovery approach, on a simulated data set and on 46.3 million brain reads from an Illumina tissue panel. On this data set Montebello appears to offer a modest improvement over Cufflinks when considering discovery and parsimony metrics. In addition Montebello mitigates specific difficulties inherent in the Cufflinks approach. Finally, Montebello can be fine-tuned depending on the type of solution desired.

  4. Handwritten word preprocessing for database adaptation

    NASA Astrophysics Data System (ADS)

    Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic

    2013-01-01

    Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level,...). Since this is not often the case in real-world scenarios, classification performance can be affected when novel data is presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization are considered. The advantage of such approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database. We adapt either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Accuracy of data set adaptation is increased by 2% to 3% in absolute value over no adaptation.

  5. Space Missions Trade Space Generation and Assessment Using JPL Rapid Mission Architecture (RMA) Team Approach

    NASA Technical Reports Server (NTRS)

    Moeller, Robert C.; Borden, Chester; Spilker, Thomas; Smythe, William; Lock, Robert

    2011-01-01

    The JPL Rapid Mission Architecture (RMA) capability is a novel collaborative team-based approach to generate new mission architectures, explore broad trade space options, and conduct architecture-level analyses. RMA studies address feasibility and identify best candidates to proceed to further detailed design studies. Development of RMA first began at JPL in 2007 and has evolved to address the need for rapid, effective early mission architectural development and trade space exploration as a precursor to traditional point design evaluations. The RMA approach integrates a small team of architecture-level experts (typically 6-10 people) to generate and explore a wide-ranging trade space of mission architectures driven by the mission science (or technology) objectives. Group brainstorming and trade space analyses are conducted at a higher level of assessment across multiple mission architectures and systems to enable rapid assessment of a set of diverse, innovative concepts. This paper describes the overall JPL RMA team, process, and high-level approach. Some illustrative results from previous JPL RMA studies are discussed.

  6. Synergy for health equity: integrating health promotion and social determinants of health approaches in and beyond the Americas.

    PubMed

    Jackson, Suzanne F; Birn, Anne-Emanuelle; Fawcett, Stephen B; Poland, Blake; Schultz, Jerry A

    2013-12-01

    Health promotion and social determinants of health approaches, when integrated, can better contribute to understanding and addressing health inequities. Yet, they have typically been pursued as two solitudes. This paper presents the key elements, principles, actions, and potential synergies of these complementary frameworks for addressing health equity. The value-added of integrating these two approaches is illustrated by three examples drawn from the authors' experiences in the Americas: at the community level, through a community-based coalition for reducing chronic disease disparities among minorities in an urban center in the United States; at the national level, through healthy-settings interventions in Canada; and at the Regional level, through health cooperation based on social justice values in Latin America. Challenges to integrating health promotion and social determinants of health approaches in the Americas are also discussed.

  7. Level-Set Variational Implicit-Solvent Modeling of Biomolecules with the Coulomb-Field Approximation

    PubMed Central

    2011-01-01

    Central to the variational implicit-solvent model (VISM) [Dzubiella, Swanson, and McCammon Phys. Rev. Lett. 2006, 96, 087802 and J. Chem. Phys. 2006, 124, 084905] of molecular solvation is a mean-field free-energy functional of all possible solute–solvent interfaces or dielectric boundaries. Such a functional can be minimized numerically by a level-set method to determine stable equilibrium conformations and solvation free energies. Applications to nonpolar systems have shown that the level-set VISM is efficient and leads to qualitatively and often quantitatively correct results. In particular, it is capable of capturing capillary evaporation in hydrophobic confinement and corresponding multiple equilibrium states as found in molecular dynamics (MD) simulations. In this work, we introduce into the VISM the Coulomb-field approximation of the electrostatic free energy. Such an approximation is a volume integral over an arbitrarily shaped solvent region, requiring no solutions to any partial differential equations. With this approximation, we obtain the effective boundary force and use it as the "normal velocity" in the level-set relaxation. We test the new approach by calculating solvation free energies and potentials of mean force for small and large molecules, including the two-domain protein BphC. Our results reveal the importance of coupling polar and nonpolar interactions in the underlying molecular systems. In particular, dehydration near the domain interface of BphC subunits is found to be highly sensitive to local electrostatic potentials as seen in previous MD simulations. This is a first step toward capturing the complex protein dehydration process by an implicit-solvent approach. PMID:22346739

  8. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
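
    The classification stage described above can be illustrated with a short scikit-learn sketch: morphological features from the stochastic level-set segmentation (here, a made-up mean boundary curvature and boundary length per cell) feed an RBF-kernel SVM. The feature values and the preprocessing pipeline are hypothetical; only the general feed-features-to-SVM pattern follows the abstract.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Toy feature table: [mean boundary curvature, boundary length] per cell,
        # with labels 0 = normal viable and 1 = apoptotic viable.
        X = np.array([[0.12, 180.0], [0.35, 95.0], [0.10, 200.0],
                      [0.40, 88.0], [0.14, 170.0], [0.37, 92.0]])
        y = np.array([0, 1, 0, 1, 0, 1])

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X, y)
        print(clf.predict([[0.33, 100.0]]))  # -> [1] (apoptotic) for this toy data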

  9. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    NASA Astrophysics Data System (ADS)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general-purpose, high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent wave-breaking phenomena, keeping the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plunging events. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing comparison of the hydrodynamic forces and the attitudes assumed at different velocities. A very good agreement between numerical and experimental results demonstrates the reliability of the single-phase level set approach for the prediction of high-Froude-number flows.

  10. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
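    The nested structure of a bi-level problem can be illustrated with a toy example: the upper-level objective is evaluated only after the lower-level problem has been solved for the current upper-level variables. The naive nested solve below (using SciPy, with made-up objectives) is only a sketch of the problem structure, not the trust-region approach developed in the report.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def lower_level(x):
    """Follower: y*(x) = argmin_y (y - x)^2 + y^2  (analytic optimum y = x/2)."""
    res = minimize_scalar(lambda y: (y - x)**2 + y**2)
    return res.x

def upper_objective(x):
    """Leader: evaluate F(x, y*(x)) after solving the lower-level problem."""
    y = lower_level(x[0])
    return (x[0] - 3.0)**2 + y**2

res = minimize(upper_objective, x0=[0.0], method="Nelder-Mead")
print("x* ~", res.x[0], " y* ~", lower_level(res.x[0]))   # expect x* = 2.4, y* = 1.2
```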

  11. A three-way approach for protein function classification

    PubMed Central

    2017-01-01

    The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features must not be neglected or overlooked in designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which provides provisions for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein functions classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae species dataset obtained from the Uniprot database with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy. PMID:28234929
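    The three-way decision itself reduces to a pair of probability thresholds: accept a function annotation when the evidence is strong, reject it when the evidence is weak, and defer otherwise. The sketch below uses hand-picked thresholds purely for illustration; in the paper the (alpha, beta) pair is derived from GTRS or ITRS criteria rather than fixed a priori.

```python
def three_way_decision(p, alpha=0.75, beta=0.30):
    """Three-way decision on a class-membership probability p.
    alpha and beta are illustrative values; GTRS/ITRS would derive them from a
    game-theoretic or information-theoretic trade-off rather than fixing them by hand."""
    if p >= alpha:
        return "accept"      # assign the function now
    if p <= beta:
        return "reject"      # rule the function out now
    return "defer"           # wait for more biological evidence

probs = [0.92, 0.55, 0.10, 0.78, 0.40]
print([three_way_decision(p) for p in probs])
# ['accept', 'defer', 'reject', 'accept', 'defer']
```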

  12. A three-way approach for protein function classification.

    PubMed

    Ur Rehman, Hafeez; Azam, Nouman; Yao, JingTao; Benso, Alfredo

    2017-01-01

    The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features must not be neglected or overlooked in designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which provides provisions for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein functions classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae species dataset obtained from the Uniprot database with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy.

  13. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    PubMed Central

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies, based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and there is a likelihood that we did not capture all relevant studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved by incorporating systematic priority setting processes and frameworks adapted to the context in which they are used, and by considering both process and outcome measures during priority setting and resource allocation. PMID:29511741

  14. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
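    The two ingredients the abstract describes (an optimal threshold computed from the histogram of a thick shell around the level-set front, and a stopping test based on the object/background volume ratio in that shell) can be sketched as follows. The propagating-shell bookkeeping and the level-set update itself are omitted, and the Otsu criterion is used here as a generic stand-in for the optimal-threshold computation.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Between-class-variance (Otsu) threshold for the intensities in the shell."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float)
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0
        m1 = (w[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def shell_threshold_step(image, phi, shell_width):
    """One DT-style update: a threshold from the shell histogram plus a stop test
    based on the object/background volume ratio inside the shell."""
    shell = np.abs(phi) <= shell_width           # thick region around the zero level set
    vals = image[shell]
    t = otsu_threshold(vals)
    ratio = (vals > t).sum() / max((vals <= t).sum(), 1)
    return t, ratio, abs(ratio - 1.0) < 0.05     # stop when the ratio approaches one

# Tiny demo on a synthetic image: a bright disc (the "tumor") on a darker background.
n = 100
yy, xx = np.mgrid[0:n, 0:n]
image = 40.0 + 60.0 * ((xx - 50)**2 + (yy - 50)**2 < 20**2) \
        + np.random.default_rng(0).normal(0, 5, (n, n))
phi = np.sqrt((xx - 50.0)**2 + (yy - 50.0)**2) - 15.0   # front still inside the true boundary
print(shell_threshold_step(image, phi, shell_width=6.0))
```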

  15. A Simulated Peer-Assessment Approach to Improving Student Performance in Chemical Calculations

    ERIC Educational Resources Information Center

    Scott, Fraser J.

    2014-01-01

    This paper describes the utility of using simulated, rather than real, student solutions to problems within a peer-assessment setting and whether this approach can be used as a means of improving performance in chemical calculations. The study involved a small cohort of students, of two levels, who carried out a simulated peer-assessment as a…

  16. Predicting climate-induced range shifts: model differences and model reliability.

    Treesearch

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  17. Combining Event- and Variable-Centred Approaches to Institution-Facing Learning Analytics at the Unit of Study Level

    ERIC Educational Resources Information Center

    Kelly, Nick; Montenegro, Maximiliano; Gonzalez, Carlos; Clasing, Paula; Sandoval, Augusto; Jara, Magdalena; Saurina, Elvira; Alarcón, Rosa

    2017-01-01

    Purpose: The purpose of this paper is to demonstrate the utility of combining event-centred and variable-centred approaches when analysing big data for higher education institutions. It uses a large, university-wide data set to demonstrate the methodology for this analysis by using the case study method. It presents empirical findings about…

  18. Whole abdominal wall segmentation using augmented active shape models (AASM) with multi-atlas label fusion and level set

    NASA Astrophysics Data System (ADS)

    Xu, Zhoubing; Baucom, Rebeccah B.; Abramson, Richard G.; Poulose, Benjamin K.; Landman, Bennett A.

    2016-03-01

    The abdominal wall is an important structure differentiating subcutaneous and visceral compartments and intimately involved with maintaining abdominal structure. Segmentation of the whole abdominal wall on routinely acquired computed tomography (CT) scans remains challenging due to variations and complexities of the wall and surrounding tissues. In this study, we propose a slice-wise augmented active shape model (AASM) approach to robustly segment both the outer and inner surfaces of the abdominal wall. Multi-atlas label fusion (MALF) and level set (LS) techniques are integrated into the traditional ASM framework. The AASM approach globally optimizes the landmark updates in the presence of complicated underlying local anatomical contexts. The proposed approach was validated on 184 axial slices of 20 CT scans. The Hausdorff distance against the manual segmentation was significantly reduced using the proposed approach compared to that using ASM, MALF, and LS individually. Our segmentation of the whole abdominal wall enables subcutaneous and visceral fat measurement, with high correlation to the measurement derived from manual segmentation. This study presents the first generic algorithm that combines ASM, MALF, and LS, and demonstrates practical application for automatically capturing visceral and subcutaneous fat volumes.

  19. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2017-06-01

    When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Application of short-data methods on extreme surge levels

    NASA Astrophysics Data System (ADS)

    Feng, X.

    2014-12-01

    Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets spanning less than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict the extreme surge levels with warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 degree Celsius of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
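    For the 'classical' GEV part of the analysis, a stationary fit and a return-level estimate can be obtained directly from SciPy, as sketched below with synthetic annual maxima; the non-stationary fit, with the location parameter depending on sea surface temperature, would require a custom likelihood. Note that SciPy's shape-parameter sign convention is the opposite of the one usually used in climatology.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Hypothetical annual-maximum surge levels (m); a real study would use gauge records.
annual_max = genextreme.rvs(c=-0.1, loc=1.2, scale=0.3, size=18, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)            # stationary GEV fit
level_100yr = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
print(f"estimated 100-year return surge level: {level_100yr:.2f} m")
```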

  1. Fast and robust group-wise eQTL mapping using sparse graphical models.

    PubMed

    Cheng, Wei; Shi, Yu; Zhang, Xiang; Wang, Wei

    2015-01-16

    Genome-wide expression quantitative trait loci (eQTL) studies have emerged as a powerful tool to understand the genetic basis of gene expression and complex traits. The traditional eQTL methods focus on testing the associations between individual single-nucleotide polymorphisms (SNPs) and gene expression traits. A major drawback of this approach is that it cannot model the joint effect of a set of SNPs on a set of genes, which may correspond to hidden biological pathways. We introduce a new approach to identify novel group-wise associations between sets of SNPs and sets of genes. Such associations are captured by hidden variables connecting SNPs and genes. Our model is a linear-Gaussian model and uses two types of hidden variables. One captures the set associations between SNPs and genes, and the other captures confounders. We develop an efficient optimization procedure which makes this approach suitable for large scale studies. Extensive experimental evaluations on both simulated and real datasets demonstrate that the proposed methods can effectively capture both individual and group-wise signals that cannot be identified by the state-of-the-art eQTL mapping methods. Considering group-wise associations significantly improves the accuracy of eQTL mapping, and the successful multi-layer regression model opens a new approach to understand how multiple SNPs interact with each other to jointly affect the expression level of a group of genes.

  2. Setting and meeting priorities in Indigenous health research in Australia and its application in the Cooperative Research Centre for Aboriginal health.

    PubMed

    Monk, Johanna M; Rowley, Kevin G; Anderson, Ian Ps

    2009-11-20

    Priority setting is about making decisions. Key issues faced during priority setting processes include identifying who makes these decisions, who sets the criteria, and who benefits. The paper reviews the literature and history around priority setting in research, particularly in Aboriginal health research. We explore these issues through a case study of the Cooperative Research Centre for Aboriginal Health (CRCAH)'s experience in setting and meeting priorities. Historically, researchers have made decisions about what research gets done. Pressures of growing competition for research funds and an increased public interest in research have led to demands that appropriate consultation with stakeholders is conducted and that research is of benefit to the wider society. Within Australian Aboriginal communities, these demands extend to Aboriginal control of research to ensure that Aboriginal priorities are met. In response to these demands, research priorities are usually agreed in consultation with stakeholders at an institutional level and researchers are asked to develop relevant proposals at a project level. The CRCAH's experience in funding rounds was that scientific merit was given more weight than stakeholders' priorities and did not necessarily result in research that met these priorities. After reviewing these processes in 2004, the CRCAH identified a new facilitated development approach. In this revised approach, the setting of institutional priorities is integrated with the development of projects in a way that ensures the research reflects stakeholder priorities. This process puts emphasis on identifying projects that reflect priorities prior to developing the quality of the research, rather than assessing the relevance to priorities and quality concurrently. Part of the CRCAH approach is the employment of Program Managers who ensure that stakeholder priorities are met in the development of research projects. This has enabled researchers and stakeholders to come together to collaboratively develop priority-driven research. Involvement by both groups in project development has been found to be essential in making decisions that will lead to robust and useful research.

  3. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    PubMed

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm troubles require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology models at near real-time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10 - 45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.

  4. Toxicological approach to setting spacecraft maximum allowable concentrations for carbon monoxide

    NASA Technical Reports Server (NTRS)

    Wong, K. L.; Limero, T. F.; James, J. T.

    1992-01-01

    The Spacecraft Maximum Allowable Concentrations (SMACs) are exposure limits for airborne chemicals used by NASA in spacecraft. The aim of these SMACs is to protect the spacecrew against adverse health effects and performance decrements that would interfere with mission objectives. Because the 1 and 24 hr SMACs are set for contingencies, minor reversible toxic effects that do not affect mission objectives are acceptable. The 7, 30, or 180 day SMACs are aimed at nominal operations, so they are established at levels that would cause neither noncarcinogenic toxic effects nor more than one case of tumor per 1000 exposed individuals over the background. The process used to set the SMACs for carbon monoxide (CO) is described to illustrate the approach used by NASA. After the toxicological literature on CO was reviewed, the data were summarized and separated into acute, subchronic, and chronic toxicity data. CO's toxicity depends on the formation of carboxyhemoglobin (COHb) in the blood, which reduces the blood's oxygen carrying capacity. The initial task was to estimate the COHb levels that would not produce toxic effects in the brain and heart.

  5. Expanding Opportunity through Critical Restorative Justice Portraits of Resilience at the Individual and School Level

    ERIC Educational Resources Information Center

    Knight, David; Wadhwa, Anita

    2014-01-01

    In this article, we tackle the disadvantaging conditions of zero tolerance policies in school settings and advocate using an alternative approach--critical restorative justice through peacemaking circles--to nurture resilience and open opportunity at the school level. In the process, this article builds on theory and qualitative research and…

  6. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    PubMed Central

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. PMID:26673332

  7. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    PubMed

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015 by Kerman University of Medical Sciences.

  8. A pharmacology guided approach for setting limits on product-related impurities for bispecific antibody manufacturing.

    PubMed

    Rajan, Sharmila; Sonoda, Junichiro; Tully, Timothy; Williams, Ambrose J; Yang, Feng; Macchi, Frank; Hudson, Terry; Chen, Mark Z; Liu, Shannon; Valle, Nicole; Cowan, Kyra; Gelzleichter, Thomas

    2018-04-13

    bFKB1 is a humanized bispecific IgG1 antibody, created by conjoining an anti-Fibroblast Growth Factor Receptor 1 (FGFR1) half-antibody to an anti-Klothoβ (KLB) half-antibody, using the knobs-into-holes strategy. bFKB1 acts as a highly selective agonist for the FGFR1/KLB receptor complex and is intended to ameliorate obesity-associated metabolic defects by mimicking the activity of the hormone FGF21. An important aspect of the biologics product manufacturing process is to establish meaningful product specifications regarding the tolerable levels of impurities that copurify with the drug product. The aim of the current study was to determine acceptable levels of product-related impurities for bFKB1. To determine the tolerable levels of these impurities, we dosed obese mice with bFKB1 enriched with various levels of either HMW impurities or anti-FGFR1-related impurities, and measured biomarkers for KLB-independent FGFR1 signaling. Here, we show that product-related impurities of bFKB1, in particular, high molecular weight (HMW) impurities and anti-FGFR1-related impurities, when purposefully enriched, stimulate FGFR1 in a KLB-independent manner. By taking this approach, the tolerable levels of product-related impurities were successfully determined. Our study demonstrates a general pharmacology-guided approach to setting a product specification for a bispecific antibody whose homomultimer-related impurities could lead to undesired biological effects. Copyright © 2018. Published by Elsevier Inc.

  9. The evolution of PBMA: towards a macro-level priority setting framework for health regions.

    PubMed

    Mitton, Craig R; Donaldson, Cam; Waldner, Howard; Eagle, Chris

    2003-11-01

    To date, relatively little work on priority setting has been carried out at a macro-level across major portfolios within integrated health care organizations. This paper describes a macro marginal analysis (MMA) process for setting priorities and allocating resources in health authorities, based on work carried out in a major urban health region in Alberta, Canada. MMA centers around an expert working group of managers and clinicians who are charged with identifying areas for resource re-allocation on an ongoing basis. Trade-offs between services are based on locally defined criteria and are informed by multiple inputs such as evidence from the literature and local expert opinion. The approach is put forth as a significant improvement on historical resource allocation patterns.

  10. Salutogenic factors for mental health promotion in work settings and organizations.

    PubMed

    Graeser, Silke

    2011-12-01

    As companies and organizations become increasingly aware of mental health conditions in work settings, the salutogenic perspective provides a promising approach to identifying organizational factors and resources that support mental health. Based on the sense of coherence (SOC) - usually treated as an individual, personality-trait concept - an organization-based SOC scale was developed to identify potential salutogenic factors of a university as an organization and workplace. Based on results from two samples of employees (n = 362, n = 204), factors associated with the organization-based SOC were evaluated. Statistical analysis yielded significant correlations between mental health and the setting-based SOC as well as the three SOC factors identified by factor analysis: comprehensibility, manageability, and meaningfulness. Significant results of bivariate and multivariate analyses emphasize the importance, for an organization-based SOC, of aspects such as participation and comprehensibility at the organizational level, social cohesion and social climate at the social level, and recognition at the individual level. Potential approaches for the further development of workplace health promotion interventions based on salutogenic factors and resources at the individual, social and organizational levels are elaborated, and the transcultural dimensions of these factors are discussed.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendell, Mark J.; Fisk, William J.

    Background - The goal of this project, with a focus on commercial buildings in California, was to develop a new framework for evidence-based minimum ventilation rate (MVR) standards that protect occupants in buildings while also considering energy use and cost. This was motivated by research findings suggesting that current prescriptive MVRs in commercial buildings do not provide occupants with fully safe and satisfactory indoor environments. Methods - The project began with a broad review in several areas: the diverse strategies now used for standards or guidelines for MVRs or for environmental contaminant exposures, current knowledge about adverse human effects associated with VRs, and current knowledge about contaminants in commercial buildings, including their presence, their adverse human effects, and their relationships with VRs. Based on a synthesis of the reviewed information, new principles and approaches are proposed for setting evidence-based VR standards for commercial buildings, considering a range of human effects including health, performance, and acceptability of air. Results - A review and evaluation is first presented of current approaches to setting prescriptive building ventilation standards and setting acceptable limits for human contaminant exposures in outdoor air and occupational settings. Recent research on approaches to setting acceptable levels of environmental exposures in evidence-based MVR standards is also described. From a synthesis and critique of these materials, a set of principles for setting MVRs is presented, along with an example approach based on these principles. The approach combines two sequential strategies. In a first step, an acceptable threshold is set for each adverse outcome that has a demonstrated relationship to VRs, as an increase from a (low) outcome level at a high reference ventilation rate (RVR, the VR needed to attain the best achievable levels of the adverse outcome); MVRs required to meet each specific outcome threshold are estimated; and the highest of these MVRs, which would then meet all outcome thresholds, is selected as the target MVR. In a second step, implemented only if the target MVR from step 1 is judged impractically high, costs and benefits are estimated and this information is used in a risk management process. Four human outcomes with substantial quantitative evidence of relationships to VRs are identified for initial consideration in setting MVR standards. These are: building-related symptoms (sometimes called sick building syndrome symptoms), poor perceived indoor air quality, and diminished work performance, all with data relating them directly to VRs; and cancer and non-cancer chronic outcomes, related indirectly to VRs through specific VR-influenced indoor contaminants. In an application of step 1 for offices using a set of example outcome thresholds, a target MVR of 9 L/s (19 cfm) per person was needed. Because this target MVR was close to MVRs in current standards, use of a cost/benefit process seemed unnecessary. Selection of more stringent thresholds for one or more human outcomes, however, could raise the target MVR to 14 L/s (30 cfm) per person or higher, triggering the step 2 risk management process. Consideration of outdoor air pollutant effects would add further complexity to the framework.
    For balancing the objective and subjective factors involved in setting MVRs in a cost-benefit process, it is suggested that a diverse group of stakeholders make the determination after assembling as much quantitative data as possible.
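    Computationally, step 1 of the proposed framework reduces to taking the largest of the per-outcome MVRs, as sketched below. The per-outcome values are placeholders for illustration only and are not figures from the report.

```python
# Step-1 sketch: the target MVR is the largest of the per-outcome MVRs, so that
# every outcome threshold is met. The values below are hypothetical placeholders.
mvr_needed_Ls_per_person = {
    "building-related symptoms": 8.5,
    "perceived indoor air quality": 9.0,
    "work performance": 7.0,
    "chronic health effects (via contaminant limits)": 6.0,
}
target_mvr = max(mvr_needed_Ls_per_person.values())
driver = max(mvr_needed_Ls_per_person, key=mvr_needed_Ls_per_person.get)
print(f"target MVR = {target_mvr} L/s per person (driven by: {driver})")
```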

  12. NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, G.; Yang; L.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. It is an open-structure platform, and we aim to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with little modification. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As a new 3rd generation synchrotron light source with ultra-low emittance, NSLS-II imposes new requirements and challenges on controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture of the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches satisfies our requirements. A new design has been proposed by introducing service-oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, as in the traditional approach; in NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed on top of a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are their developers. With our client/server based approach, how information is retrieved is left to the developers of the APIs, and how the APIs are combined into a physics application is left to the users. For example, how channels are related to a magnet, and what the current real-time setting of a magnet is in physics units, are internals of the APIs; measuring chromaticities is an example use of the APIs. All API users work with magnet and instrument names in physics units, and low level communications in current or voltage units are minimized. In this paper, we discuss recent progress on our infrastructure development and client API.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  14. Priorities and needs for research on urban interventions targeting vector-borne diseases: rapid review of scoping and systematic reviews.

    PubMed

    Bermudez-Tamayo, Clara; Mukamana, Olive; Carabali, Mabel; Osorio, Lyda; Fournet, Florence; Dabiré, Kounbobr Roch; Turchi Marteli, Celina; Contreras, Adolfo; Ridde, Valéry

    2016-12-01

    This paper highlights the critical importance of evidence on vector-borne diseases (VBD) prevention and control interventions in urban settings when assessing current and future needs, with a view to setting policy priorities that promote inclusive and equitable urban health services. Research should produce knowledge about policies and interventions that are intended to control and prevent VBDs at the population level and to reduce inequities. Such interventions include policy, program, and resource distribution approaches that address the social determinants of health and exert influence at organizational and system levels.

  15. Computation and application of tissue-specific gene set weights.

    PubMed

    Frost, H Robert

    2018-04-06

    Gene set testing, or pathway analysis, has become a critical tool for the analysis of high-dimensional genomic data. Although the function and activity of many genes and higher-level processes are tissue-specific, gene set testing is typically performed in a tissue-agnostic fashion, which impacts statistical power and the interpretation and replication of results. To address this challenge, we have developed a bioinformatics approach to compute tissue-specific weights for individual gene sets using information on tissue-specific gene activity from the Human Protein Atlas (HPA). We used this approach to create a public repository of tissue-specific gene set weights for 37 different human tissue types from the HPA and all collections in the Molecular Signatures Database (MSigDB). To demonstrate the validity and utility of these weights, we explored three different applications: the functional characterization of human tissues, multi-tissue analysis for systemic diseases and tissue-specific gene set testing. All data used in the reported analyses is publicly available. An R implementation of the method and tissue-specific weights for MSigDB gene set collections can be downloaded at http://www.dartmouth.edu/∼hrfrost/TissueSpecificGeneSets. rob.frost@dartmouth.edu.
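    A schematic version of the weighting idea is sketched below with made-up expression calls and gene sets; the published weights are derived from HPA evidence levels, so the simple fraction used here is only an illustrative stand-in for the actual formula.

```python
# Hypothetical per-tissue expression calls (gene -> expressed?), e.g. derived from HPA.
tissue_expressed = {
    "liver":  {"ALB", "APOB", "CYP3A4", "TP53"},
    "cortex": {"GFAP", "SNAP25", "TP53"},
}
# Hypothetical gene sets standing in for MSigDB collections.
gene_sets = {
    "BILE_ACID_METABOLISM": {"ALB", "APOB", "CYP3A4", "SLC10A1"},
    "SYNAPTIC_SIGNALING":   {"SNAP25", "SYT1", "GFAP"},
}

def tissue_weight(gene_set, tissue):
    """Fraction of the set's genes called 'expressed' in the tissue (illustrative
    weighting, not the exact computation used in the paper)."""
    return len(gene_set & tissue_expressed[tissue]) / len(gene_set)

for name, gs in gene_sets.items():
    print(name, {t: round(tissue_weight(gs, t), 2) for t in tissue_expressed})
```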

  16. SUPPORT Tools for evidence-informed health Policymaking (STP) 3: Setting priorities for supporting evidence-informed policymaking

    PubMed Central

    2009-01-01

    This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. Policymakers have limited resources for developing – or supporting the development of – evidence-informed policies and programmes. These required resources include staff time, staff infrastructural needs (such as access to a librarian or journal article purchasing), and ongoing professional development. They may therefore prefer instead to contract out such work to independent units with more suitably skilled staff and appropriate infrastructure. However, policymakers may only have limited financial resources to do so. Regardless of whether the support for evidence-informed policymaking is provided in-house or contracted out, or whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include timelines to support evidence-informed policymaking being negotiated on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting process, and the absence of both a communications plan and a monitoring and evaluation plan. In this article, we suggest questions that can guide those setting priorities for finding and using research evidence to support evidence-informed policymaking. These are: 1. Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? 2. Does the approach incorporate explicit criteria for determining priorities? 3. Does the approach incorporate an explicit process for determining priorities? 4. Does the approach incorporate a communications strategy and a monitoring and evaluation plan? PMID:20018110

  17. SUPPORT Tools for evidence-informed health Policymaking (STP) 3: Setting priorities for supporting evidence-informed policymaking.

    PubMed

    Lavis, John N; Oxman, Andrew D; Lewin, Simon; Fretheim, Atle

    2009-12-16

    This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. Policymakers have limited resources for developing--or supporting the development of--evidence-informed policies and programmes. These required resources include staff time, staff infrastructural needs (such as access to a librarian or journal article purchasing), and ongoing professional development. They may therefore prefer instead to contract out such work to independent units with more suitably skilled staff and appropriate infrastructure. However, policymakers may only have limited financial resources to do so. Regardless of whether the support for evidence-informed policymaking is provided in-house or contracted out, or whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include timelines to support evidence-informed policymaking being negotiated on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting process, and the absence of both a communications plan and a monitoring and evaluation plan. In this article, we suggest questions that can guide those setting priorities for finding and using research evidence to support evidence-informed policymaking. These are: 1. Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? 2. Does the approach incorporate explicit criteria for determining priorities? 3. Does the approach incorporate an explicit process for determining priorities? 4. Does the approach incorporate a communications strategy and a monitoring and evaluation plan?

  18. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
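    The subspace scan common to MUSIC and FINES can be sketched as follows: build a sample covariance, split signal and noise subspaces by eigendecomposition, and score each candidate source by how nearly orthogonal its forward (lead-field) vector is to the noise subspace. The lead field below is random and purely illustrative; FINES would additionally replace the full noise subspace with a few vectors closest, in principal angle, to the manifold of a chosen brain region.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_grid, n_snapshots = 32, 200, 500

# Hypothetical "lead field": one forward vector per candidate source location.
A = rng.standard_normal((n_sensors, n_grid))
A /= np.linalg.norm(A, axis=0)

true_idx = [40, 120]                                     # two active sources
S = rng.standard_normal((len(true_idx), n_snapshots))    # uncorrelated source time courses
X = A[:, true_idx] @ S + 0.3 * rng.standard_normal((n_sensors, n_snapshots))

R = X @ X.T / n_snapshots                                # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
En = eigvecs[:, :-len(true_idx)]                         # estimated noise-only subspace

# MUSIC-style pseudospectrum: large where a candidate vector is nearly orthogonal
# to the noise subspace. FINES would use only a few selected columns of En here.
P = 1.0 / np.sum((En.T @ A) ** 2, axis=0)
print("true sources:", true_idx, "top peaks:", np.argsort(P)[-2:][::-1])
```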

  19. Comparison of thyroid segmentation techniques for 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.

    2017-02-01

    The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice in diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient against the ground truth reference and the amount of interaction required. Our results show that in terms of the Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the others. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
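    For reference, the overlap measure used in the comparison is straightforward to compute; a minimal version for binary masks is sketched below on toy data (not the published records).

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two binary segmentation masks (True = thyroid)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy example: a slightly shifted estimate of a rectangular ground-truth region.
ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True
seg = np.zeros((64, 64), dtype=bool); seg[22:42, 20:40] = True
print(f"Dice = {dice(seg, ref):.3f}")   # overlap of 18 of 20 rows -> 0.9
```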

  20. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.

  1. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach of dealing with cases where subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.
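    The role of the approximation model can be sketched with an off-the-shelf Kriging (Gaussian-process) surrogate: a handful of subsystem evaluations are fitted once, and the system-level optimization then queries the cheap surrogate instead of re-running the subsystem optimization. The subsystem response below is invented for illustration; this is not the BLISS implementation itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in "subsystem optimization": returns an optimal weight for a given wing span.
def subsystem_response(span):
    return 120.0 + 3.5 * span + 0.8 * np.sin(2.0 * span)

spans = np.linspace(5.0, 15.0, 9).reshape(-1, 1)             # sampled subsystem designs
weights = np.array([subsystem_response(s[0]) for s in spans])

# Kriging (Gaussian-process) approximation of the subsystem response.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=2.0),
                              normalize_y=True).fit(spans, weights)

query = np.array([[11.3]])                                   # system-level design point
mean, std = gp.predict(query, return_std=True)
print(f"predicted subsystem weight at span=11.3: {mean[0]:.2f} +/- {std[0]:.2f}")
```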

  2. Supporting nurse practitioners' practice in primary healthcare settings: a three-level qualitative model.

    PubMed

    Chouinard, Véronique; Contandriopoulos, Damien; Perroux, Mélanie; Larouche, Catherine

    2017-06-26

    While greater reliance on nurse practitioners in primary healthcare settings can improve service efficiency and accessibility, their integration is not straightforward, challenging existing role definitions of both registered nurses and physicians. Developing adequate support practices is therefore essential in primary healthcare nurse practitioners' integration. This study's main objective is to examine different structures and mechanisms put in place to support the development of primary healthcare nurse practitioner's practice in different healthcare settings, and develop a practical model for identifying and planning adequate support practices. This study is part of a larger multicentre study on primary healthcare nurse practitioners in the province of Quebec, Canada. It focuses on three healthcare settings into which one or more primary healthcare nurse practitioners have been integrated. Case studies have been selected to cover a maximum of variations in terms of location, organizational setting, and stages of primary healthcare nurse practitioner integration. Findings are based on the analysis of available documentation in each primary healthcare setting and on semi-structured interviews with key actors in each clinical team. Data were analyzed following thematic and cross-sectional analysis approaches. This article identifies three types of support practices: clinical, team, and systemic. This three-level analysis demonstrates that, on the ground, primary healthcare nurse practitioner integration is essentially a team-based, multilevel endeavour. Despite the existence of a provincial implementation plan, the three settings adopted very different implementation structures and practices, and different actors were involved at each of the three levels. The results also indicated that nursing departments played a decisive role at all three levels. Based on these findings, we suggest that support practices should be adapted to each organization's environment and experience and be modified as needed throughout the integration process. We also stress the importance of combining this approach with a strong coordination mechanism involving managers who have in-depth understanding of nursing professional roles and scopes of practice. Making primary healthcare nurse practitioner integration frameworks more flexible and clarifying and strengthening the role of senior nursing managers could be the key to successful integration.

  3. A nearest neighbour approach by genetic distance to the assignment of individual trees to geographic origin.

    PubMed

    Degen, Bernd; Blanc-Jolivet, Céline; Stierand, Katrin; Gillet, Elizabeth

    2017-03-01

    During the past decade, the use of DNA for forensic applications has been extensively implemented for plant and animal species, as well as in humans. Tracing back the geographical origin of an individual usually requires genetic assignment analysis. These approaches are based on reference samples that are grouped into populations or other aggregates and intend to identify the most likely group of origin. Often this grouping does not have a biological but rather a historical or political justification, such as "country of origin". In this paper, we present a new nearest neighbour approach to individual assignment or classification within a given but potentially imperfect grouping of reference samples. This method, which is based on the genetic distance between individuals, functions better in many cases than commonly used methods. We demonstrate the operation of our assignment method using two data sets. One set is simulated for a large number of trees distributed in a 120km by 120km landscape with individual genotypes at 150 SNPs, and the other set comprises experimental data of 1221 individuals of the African tropical tree species Entandrophragma cylindricum (Sapelli) genotyped at 61 SNPs. Judging by the level of correct self-assignment, our approach outperformed the commonly used frequency and Bayesian approaches by 15% for the simulated data set and by 5-7% for the Sapelli data set. Our new approach is less sensitive to overlapping sources of genetic differentiation, such as genetic differences among closely-related species, phylogeographic lineages and isolation by distance, and thus operates better even for suboptimal grouping of individuals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
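
    The assignment rule itself is simple enough to sketch. The toy example below assigns a query individual to the modal group label among its k nearest reference individuals under a simple allele-count distance over SNP genotypes coded 0/1/2; the distance measure, the data, and the value of k are illustrative assumptions rather than the exact statistic used in the study.

```python
# Toy sketch: assign a query individual to the group of its nearest reference
# neighbours by a simple genetic distance over SNP genotypes coded 0/1/2.
# Illustrative only; not the exact distance or data of the study.
import numpy as np
from collections import Counter

def genetic_distance(g1, g2):
    # Mean absolute allele-count difference across loci (missing coded as -1).
    ok = (g1 >= 0) & (g2 >= 0)
    return np.abs(g1[ok] - g2[ok]).mean()

def assign_origin(query, reference, groups, k=5):
    """Return the most frequent group label among the k nearest references."""
    d = np.array([genetic_distance(query, ref) for ref in reference])
    nearest = np.argsort(d)[:k]
    votes = Counter(groups[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical data: 200 reference trees, 61 SNPs, three regions of origin.
rng = np.random.default_rng(1)
reference = rng.integers(0, 3, size=(200, 61))
groups = np.repeat(["west", "central", "east"], [70, 70, 60])
query = rng.integers(0, 3, size=61)
print("assigned origin:", assign_origin(query, reference, groups))
```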

  4. Identifying Aquifer Heterogeneities using the Level Set Method

    NASA Astrophysics Data System (ADS)

    Lu, Z.; Vesselinov, V. V.; Lei, H.

    2016-12-01

    Material interfaces between hydrostratigraphic units (HSUs) with contrasting aquifer parameters (e.g., strata and facies with different hydraulic conductivity) have a great impact on flow and contaminant transport in the subsurface. However, identifying the shapes of HSUs in the subsurface is challenging and typically relies on tomographic approaches in which a series of steady-state/transient head measurements at spatially distributed observation locations are analyzed using inverse models. In this study, we developed a mathematically rigorous approach for identifying material interfaces among an arbitrary number of HSUs using the level set method. The approach was first tested on several synthetic cases, where the true spatial distribution of HSUs was assumed to be known and the head measurements were taken from a flow simulation with the true parameter fields. These synthetic inversion examples demonstrate that the level set method is capable of characterizing the spatial distribution of the heterogeneous HSUs. We then applied the methodology to a large-scale problem in which the spatial distribution of pumping wells and observation well screens is consistent with the actual aquifer contamination (chromium) site at the Los Alamos National Laboratory (LANL), thereby testing the applicability of the methodology at an actual site, and we present preliminary results using the actual LANL site data. We also investigated the impact of the number of pumping/observation wells and of the drawdown observation frequencies/intervals on the quality of the inversion results, and we examined the uncertainties associated with the estimated HSU shapes and the accuracy of the results under different hydraulic-conductivity contrasts between the HSUs.

  5. A Graphical Approach to Item Analysis. Research Report. ETS RR-04-10

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Dorans, Neil J.

    2004-01-01

    This paper describes an approach to item analysis that is based on the estimation of a set of response curves for each item. The response curves show, at a glance, the difficulty and the discriminating power of the item and the popularity of each distractor, at any level of the criterion variable (e.g., total score). The curves are estimated by…

  6. A Comparison of Inferencing and Meaning-Guessing of New Lexicon in Context versus Non-Context Vocabulary Presentation

    ERIC Educational Resources Information Center

    Zaid, Mohammed A.

    2009-01-01

    In a quasi-experimental study, the researcher used two approaches to vocabulary instruction with 34 Level III College of Languages and Translation students, enrolled in King Khalid University, KSA. The purpose of the study was to explore the effects of each approach. One strategy emphasized direct teaching of the individual meanings for a set of…

  7. An Evaluation of Using Data Set in Teaching and Learning within a Further and Higher Education Context

    ERIC Educational Resources Information Center

    Williams, Anwen; Dewi, Ioan Ap

    2007-01-01

    This study aims to establish the effectiveness of an industrial data set in the delivery of an active approach to teaching and learning across a range of programme levels from NVQ to MSc within Further Education (FE) and Higher Education (HE) Institutions in North Wales. The result described in the paper reinforces in FE and HE the advantages of…

  8. Use of Biodescriptors and Chemodescriptors in Predictive Toxicology: A Mathematical/Computational Approach

    DTIC Science & Technology

    2005-01-01

    proteomic gel analyses. The research group has explored the use of chemodescriptors calculated using high-level ab initio quantum chemical basis sets...descriptors that characterize the entire proteomics map, local descriptors that characterize a subset of the proteins present in the gel, and spectrum...techniques for analyzing the full set of proteins present in a proteomics map. Subject terms: topological indices.

  9. Nonlinear Socio-Ecological Dynamics and First Principles ofCollective Choice Behavior of ``Homo Socialis"

    NASA Astrophysics Data System (ADS)

    Sonis, M.

    Socio-ecological dynamics emerged from the field of Mathematical Social Sciences and opened up avenues for re-examination of classical problems of collective behavior in the Social and Spatial sciences. The "engine" of this collective behavior is the subjective mental evaluation of the level of utilities in the future, presenting sets of composite socio-economic-temporal-locational advantages. These dynamics present new laws of collective multi-population behavior which are the meso-level counterparts of individual utility-optimizing behavior. The central core of the socio-ecological choice dynamics includes the following first principle of the collective choice behavior of "Homo Socialis", based on the existence of "collective consciousness": the choice behavior of "Homo Socialis" is a collective meso-level choice behavior such that the relative changes in choice frequencies depend on the distribution of innovation alternatives between adopters of innovations. The mathematical basis of the Socio-Ecological Dynamics includes two complementary analytical approaches, both based on the use of computer modeling as a theoretical and simulation tool. The first is the "continuous approach": systems of ordinary and partial differential equations reflecting the continuous-time Volterra ecological formalism in the form of antagonistic and/or cooperative collective hyper-games between different sub-sets of choice alternatives. The second is the "discrete approach": systems of difference equations presenting a new branch of non-linear discrete dynamics, the Discrete Relative m-population/n-innovations Socio-Spatial Dynamics (Dendrinos and Sonis, 1990). The generalization of the Volterra formalism leads further to the meso-level variational principle of collective choice behavior, which determines the balance between the resulting cumulative social spatio-temporal interactions among the population of adopters susceptible to the choice alternatives and the cumulative equalization of the power of elites supporting different choice alternatives. This balance governs the dynamic innovation choice process and constitutes the dynamic meso-level counterpart of the micro-economic individual utility maximization principle.

  10. Fast Object Motion Estimation Based on Dynamic Stixels.

    PubMed

    Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo

    2016-07-28

    The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction.
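
    The first-level matching step lends itself to a compact sketch: treating frame-to-frame stixel association as a bipartite assignment problem solved with the Hungarian algorithm. The cost terms (differences in image column and depth) and the threshold below are hypothetical stand-ins for the matching cost function described in the paper.

```python
# Sketch of frame-to-frame stixel matching as a bipartite assignment problem.
# Cost terms (column and depth differences) are hypothetical stand-ins for the
# matching cost function described in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_stixels(prev, curr, w_col=1.0, w_depth=0.5, max_cost=20.0):
    """prev, curr: arrays of shape (n, 2) holding (column, depth) per stixel.
    Returns (i_prev, j_curr) pairs whose matching cost is below max_cost."""
    cost = (w_col * np.abs(prev[:, 0][:, None] - curr[:, 0][None, :]) +
            w_depth * np.abs(prev[:, 1][:, None] - curr[:, 1][None, :]))
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

# Hypothetical stixels: (image column, depth in metres).
prev = np.array([[100.0, 12.0], [220.0, 8.5], [400.0, 30.0]])
curr = np.array([[105.0, 11.8], [215.0, 8.3], [640.0, 5.0]])
print(match_stixels(prev, curr))   # the distant third stixel is left unmatched
```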

  11. RAMONA: a Web application for gene set analysis on multilevel omics data.

    PubMed

    Sass, Steffen; Buettner, Florian; Mueller, Nikola S; Theis, Fabian J

    2015-01-01

    Decreasing costs of modern high-throughput experiments allow for the simultaneous analysis of altered gene activity on various molecular levels. However, these multi-omics approaches lead to a large amount of data, which is hard to interpret for a non-bioinformatician. Here, we present the remotely accessible multilevel ontology analysis (RAMONA). It offers an easy-to-use interface for the simultaneous gene set analysis of combined omics datasets and is an extension of the previously introduced MONA approach. RAMONA is based on a Bayesian enrichment method for the inference of overrepresented biological processes among given gene sets. Overrepresentation is quantified by interpretable term probabilities. It is able to handle data from various molecular levels, while in parallel coping with redundancies arising from gene set overlaps and related multiple testing problems. The comprehensive output of RAMONA is easy to interpret and thus allows for functional insight into the affected biological processes. With RAMONA, we provide an efficient implementation of the Bayesian inference problem such that ontologies consisting of thousands of terms can be processed in the order of seconds. RAMONA is implemented as ASP.NET Web application and publicly available at http://icb.helmholtz-muenchen.de/ramona. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

    The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper discusses our validation of the radiometric model and presents a practical approach to future validation work. In order to validate the complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and the unknown parameters of the system were then determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible and then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first, by examining the range error of a surface with known reflectivity, and second, by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.

  13. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is more pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level implementation is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the accessible system sizes to L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
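
    The adaptive temperature-set idea can be sketched independently of the GPU machinery: monitor the exchange acceptance rate between each pair of adjacent temperatures and insert a mid-point wherever the rate falls below a target. The rates and the threshold below are placeholders; in the actual code they come from the running replica-exchange simulation.

```python
# Sketch of the adaptive temperature-set idea: insert mid-point temperatures
# into gaps whose replica-exchange acceptance rate is too low. The measured
# rates here are placeholders for values produced by the GPU simulation.
def refine_temperature_set(temps, exchange_rates, min_rate=0.2):
    """temps: sorted list of temperatures; exchange_rates[i] is the measured
    acceptance rate between temps[i] and temps[i+1]. Returns a new set with
    mid-points inserted wherever the rate is below min_rate."""
    new_temps = [temps[0]]
    for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], exchange_rates):
        if rate < min_rate:
            new_temps.append(0.5 * (t_lo + t_hi))   # mid-point insertion
        new_temps.append(t_hi)
    return new_temps

temps = [1.0, 1.5, 2.0, 2.5, 3.0]
rates = [0.35, 0.12, 0.28, 0.05]        # hypothetical measured exchange rates
print(refine_temperature_set(temps, rates))
# -> [1.0, 1.5, 1.75, 2.0, 2.5, 2.75, 3.0]
```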

  14. Computing sextic centrifugal distortion constants by DFT: A benchmark analysis on halogenated compounds

    NASA Astrophysics Data System (ADS)

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi

    2017-05-01

    This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing level of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at DFT level of theory coupled to harmonic data computed at increasing level of electronic correlation (up to CCSD level of theory augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T)) were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the route to the study of more complex systems.

  15. Inference of combinatorial Boolean rules of synergistic gene sets from cancer microarray datasets.

    PubMed

    Park, Inho; Lee, Kwang H; Lee, Doheon

    2010-06-15

    Gene set analysis has become an important tool for the functional interpretation of high-throughput gene expression datasets. Moreover, pattern analyses based on inferred gene set activities of individual samples have shown the ability to identify more robust disease signatures than individual gene-based pattern analyses. Although a number of approaches have been proposed for gene set-based pattern analysis, the combinatorial influence of deregulated gene sets on disease phenotype classification has not been studied sufficiently. We propose a new approach for inferring combinatorial Boolean rules of gene sets for a better understanding of cancer transcriptome and cancer classification. To reduce the search space of the possible Boolean rules, we identify small groups of gene sets that synergistically contribute to the classification of samples into their corresponding phenotypic groups (such as normal and cancer). We then measure the significance of the candidate Boolean rules derived from each group of gene sets; the level of significance is based on the class entropy of the samples selected in accordance with the rules. By applying the present approach to publicly available prostate cancer datasets, we identified 72 significant Boolean rules. Finally, we discuss several identified Boolean rules, such as the rule of glutathione metabolism (down) and prostaglandin synthesis regulation (down), which are consistent with known prostate cancer biology. Scripts written in Python and R are available at http://biosoft.kaist.ac.kr/~ihpark/. The refined gene sets and the full list of the identified Boolean rules are provided in the Supplementary Material. Supplementary data are available at Bioinformatics online.
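
    The rule-scoring idea can be sketched in a few lines: binarize gene-set activities, apply a candidate Boolean rule, and measure the class entropy of the samples the rule selects, with lower entropy indicating a purer and hence more significant split. The activity values, labels, and the particular rule below are hypothetical and only illustrate the entropy-based scoring described above.

```python
# Toy sketch: score a candidate Boolean rule over binarized gene-set activities
# by the class entropy of the samples it selects (lower entropy = purer split).
# Data and the specific rule are hypothetical.
import numpy as np

def class_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical binarized activities (1 = deregulated) for two gene sets in 8 samples.
glutathione_down = np.array([1, 1, 1, 0, 1, 0, 0, 0])
prostaglandin_down = np.array([1, 1, 0, 1, 1, 0, 0, 0])
labels = np.array(["cancer"] * 4 + ["normal"] * 4)

# Candidate rule: both gene sets down-regulated (logical AND).
selected = (glutathione_down == 1) & (prostaglandin_down == 1)
print("samples selected:", int(selected.sum()))
print("entropy of selected samples:", class_entropy(labels[selected]))
print("entropy of all samples:", class_entropy(labels))
```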

  16. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  17. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
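
    As a rough illustration of the idea, the sketch below chooses the significance level that minimizes the average of the Type I and Type II error probabilities for a one-sided z-test at a specified critical effect size and sample size. The test, the equal error weighting, and the grid search are simplifying assumptions, not the authors' exact procedure.

```python
# Sketch of choosing an "optimal alpha": pick the significance level that
# minimizes the (weighted) average of Type I and Type II error probabilities
# for a one-sided z-test at a given critical effect size and sample size.
import numpy as np
from scipy.stats import norm

def optimal_alpha(effect_size, n, weight_type1=0.5):
    """effect_size: critical standardized effect; n: sample size."""
    alphas = np.linspace(1e-4, 0.5, 2000)
    z_crit = norm.ppf(1.0 - alphas)                     # one-sided critical values
    beta = norm.cdf(z_crit - effect_size * np.sqrt(n))  # Type II error at the critical effect
    total = weight_type1 * alphas + (1.0 - weight_type1) * beta
    i = int(np.argmin(total))
    return alphas[i], beta[i]

alpha_opt, beta_opt = optimal_alpha(effect_size=0.5, n=20)
print(f"optimal alpha = {alpha_opt:.3f}, beta at that alpha = {beta_opt:.3f}")
```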

  18. Classification of Complete Proteomes of Different Organisms and Protein Sets Based on Their Protein Distributions in Terms of Some Key Attributes of Proteins

    PubMed Central

    Ma, Yue; Tuskan, Gerald A.

    2018-01-01

    The existence of complete genome sequences makes it important to develop different approaches for classification of large-scale data sets and to make the extraction of biological insights easier. Here, we propose an approach for classification of complete proteomes/protein sets based on protein distributions over some basic attributes. We demonstrate the usefulness of this approach by determining protein distributions in terms of two attributes: protein length (L) and protein intrinsic disorder content (ID). The protein distributions based on L and ID are surveyed for representative proteome organisms and protein sets from the three domains of life. Two-dimensional maps (designated here as fingerprints) are then constructed from the protein distribution densities in the LD space defined by ln(L) and ID. The fingerprints for different organisms and protein sets are found to be distinct from each other, and they can therefore be used for comparative studies. As a test case, phylogenetic trees have been constructed based on the protein distribution densities in the fingerprints of the proteomes of organisms, without performing any protein sequence comparison or alignment. The phylogenetic trees generated are biologically meaningful, demonstrating that the protein distributions in the LD space may serve as unique phylogenetic signals of the organisms at the proteome level. PMID:29686995
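
    The fingerprint construction reduces to a two-dimensional density estimate. The sketch below builds such a map over ln(L) and ID for randomly generated stand-in values of protein length and disorder content; the distributions, bin counts, and the suggested comparison metric are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: build a proteome "fingerprint" as a 2-D density over ln(length) and
# intrinsic disorder content (ID). The lengths and ID values below are random
# stand-ins for a real proteome.
import numpy as np

rng = np.random.default_rng(2)
lengths = rng.lognormal(mean=5.8, sigma=0.6, size=5000)   # protein lengths (aa)
disorder = rng.beta(2.0, 5.0, size=5000)                  # ID fraction in [0, 1]

# 2-D histogram over the LD space defined by ln(L) and ID, normalized to a density.
fingerprint, ln_edges, id_edges = np.histogram2d(
    np.log(lengths), disorder, bins=(30, 20), density=True)

print("fingerprint shape:", fingerprint.shape)
# Fingerprints of two organisms could then be compared, e.g. by a simple
# Euclidean distance, to build a distance matrix for tree construction.
```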

  19. Transcending Landscapes: Working Across Scales and Levels in Pastoralist Rangeland Governance

    NASA Astrophysics Data System (ADS)

    Robinson, Lance W.; Ontiri, Enoch; Alemu, Tsegaye; Moiko, Stephen S.

    2017-08-01

    Landscape approaches can be subjected to mistakenly targeting a single "best" level of governance, and paying too little attention to the role that cross-scale and cross-level interactions play in governance. In rangeland settings, resources, patterns of use of those resources, and the institutions for managing the resources exist at multiple levels and scales. While the scholarship on commons offers some guidance on how to conceptualize governance in rangeland landscapes, some elements of commons scholarship—notably the "design principles" for effective governance of commons—do not seem to apply neatly to governance in pastoralist rangeland settings. This paper examines three cases where attempts have been made to foster effective landscape governance in such settings to consider how the materiality of commons influences the nature of cross-scale and cross-level interactions, and how these interactions affect governance. In all three cases, although external actors seemed to work appropriately and effectively at community and landscape levels, landscape governance mechanisms have been facing great challenges arising from relationships beyond the landscape, both vertically to higher levels of decision-making and horizontally to communities normally residing in other landscapes. The cases demonstrate that fostering effective landscape-level governance cannot be accomplished only through action at the landscape level; it is a task that must be pursued at multiple levels and in relation to the connections across scales and levels. The paper suggests elements of a conceptual framework for understanding cross-level and cross-scale elements of landscape governance, and offers suggestions for governance design in pastoralist rangeland settings.

  20. Transcending Landscapes: Working Across Scales and Levels in Pastoralist Rangeland Governance.

    PubMed

    Robinson, Lance W; Ontiri, Enoch; Alemu, Tsegaye; Moiko, Stephen S

    2017-08-01

    Landscape approaches can be subjected to mistakenly targeting a single "best" level of governance, and paying too little attention to the role that cross-scale and cross-level interactions play in governance. In rangeland settings, resources, patterns of use of those resources, and the institutions for managing the resources exist at multiple levels and scales. While the scholarship on commons offers some guidance on how to conceptualize governance in rangeland landscapes, some elements of commons scholarship-notably the "design principles" for effective governance of commons-do not seem to apply neatly to governance in pastoralist rangeland settings. This paper examines three cases where attempts have been made to foster effective landscape governance in such settings to consider how the materiality of commons influences the nature of cross-scale and cross-level interactions, and how these interactions affect governance. In all three cases, although external actors seemed to work appropriately and effectively at community and landscape levels, landscape governance mechanisms have been facing great challenges arising from relationships beyond the landscape, both vertically to higher levels of decision-making and horizontally to communities normally residing in other landscapes. The cases demonstrate that fostering effective landscape-level governance cannot be accomplished only through action at the landscape level; it is a task that must be pursued at multiple levels and in relation to the connections across scales and levels. The paper suggests elements of a conceptual framework for understanding cross-level and cross-scale elements of landscape governance, and offers suggestions for governance design in pastoralist rangeland settings.

  1. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure, and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition, are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions that rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on two or fewer benchmark cases at or near the trajectory peak heating condition yielded results within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design, and it also offers a significant capability to perform rapid trade studies, such as the effect of different trajectories, atmospheres, or trim angle of attack on convective and radiative heating rates and loads, pressure, and shear-stress levels.

  2. A Nonparametric Approach to Automated S-Wave Picking

    NASA Astrophysics Data System (ADS)

    Rawles, C.; Thurber, C. H.

    2014-12-01

    Although a number of very effective P-wave automatic pickers have been developed over the years, automatic picking of S waves has remained more challenging. Most automatic pickers take a parametric approach, whereby some characteristic function (CF), e.g. polarization or kurtosis, is determined from the data and the pick is estimated from the CF. We have adopted a nonparametric approach, estimating the pick directly from the waveforms. For a particular waveform to be auto-picked, the method uses a combination of similarity to a set of seismograms with known S-wave arrivals and dissimilarity to a set of seismograms that do not contain S-wave arrivals. Significant effort has been made towards dealing with the problem of S-to-P conversions. We have evaluated the effectiveness of our method by testing it on multiple sets of microearthquake seismograms with well-determined S-wave arrivals for several areas around the world, including fault zones and volcanic regions. In general, we find that the results from our auto-picker are consistent with reviewed analyst picks 90% of the time at the 0.2 s level and 80% of the time at the 0.1 s level, or better. For most of the large datasets we have analyzed, our auto-picker also makes far more S-wave picks than were made previously by analysts. We are using these enlarged sets of high-quality S-wave picks to refine tomographic inversions for these areas, resulting in substantial improvement in the quality of the S-wave images. We will show examples from New Zealand, Hawaii, and California.
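
    A toy version of the stated idea can be sketched as follows: slide a window along the trace and score each candidate pick time by its average correlation with template windows containing known S arrivals minus its average correlation with windows that contain none, taking the best-scoring time as the pick. The synthetic waveforms, window length, and correlation measure are assumptions for illustration; the actual picker is considerably more elaborate.

```python
# Toy sketch of the nonparametric idea: score each candidate pick time by its
# similarity to windows with known S arrivals minus its similarity to windows
# without arrivals, and pick the time with the highest score. All data synthetic.
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length windows."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def pick_s_wave(trace, s_templates, noise_templates, win=100):
    scores = []
    for t in range(len(trace) - win):
        w = trace[t:t + win]
        sim = np.mean([zncc(w, s) for s in s_templates])
        dis = np.mean([zncc(w, n) for n in noise_templates])
        scores.append(sim - dis)
    return int(np.argmax(scores))   # sample index of the estimated S pick

# Synthetic example: an "S arrival" is a burst of higher-amplitude oscillation.
rng = np.random.default_rng(3)
def synth(onset):
    x = 0.1 * rng.standard_normal(600)
    x[onset:onset + 100] += np.sin(np.linspace(0, 20 * np.pi, 100))
    return x

s_templates = [synth(250)[250:350] for _ in range(5)]
noise_templates = [0.1 * rng.standard_normal(100) for _ in range(5)]
print("estimated S pick (sample):", pick_s_wave(synth(300), s_templates, noise_templates))
```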

  3. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    PubMed

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example is given to demonstrate the application and performance of the proposed approach.

  4. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    PubMed Central

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example is given to demonstrate the application and performance of the proposed approach. PMID:25097884

  5. Measuring Prices in Health Care Markets Using Commercial Claims Data.

    PubMed

    Neprash, Hannah T; Wallace, Jacob; Chernew, Michael E; McWilliams, J Michael

    2015-12-01

    To compare methods of price measurement in health care markets. Truven Health Analytics MarketScan commercial claims. We constructed medical price indices using three approaches: (1) a "sentinel" service approach based on a single common service in a specific clinical domain, (2) a market basket approach, and (3) a spending decomposition approach. We constructed indices at the Metropolitan Statistical Area level and estimated correlations between and within them. Price indices using a spending decomposition approach were strongly and positively correlated with indices constructed from broad market baskets of common services (r > 0.95). Prices of single common services exhibited weak to moderate correlations with each other and with the other measures. Market-level price measures that reflect broad sets of services are likely to rank markets similarly. Price indices relying on individual sentinel services may be more appropriate for examining specialty- or service-specific drivers of prices. © Health Research and Educational Trust.

  6. Tools for the functional interpretation of metabolomic experiments.

    PubMed

    Chagoyen, Monica; Pazos, Florencio

    2013-11-01

    The so-called 'omics' approaches used in modern biology aim at massively characterizing the molecular repertoires of living systems at different levels. Metabolomics is one of the latest additions to the 'omics' family and it deals with the characterization of the set of metabolites in a given biological system. As metabolomic techniques become more massive and allow larger sets of metabolites to be characterized, automatic methods for analyzing these sets in order to obtain meaningful biological information are required. Only recently have the first tools specifically designed for this task in metabolomics appeared. They are based on approaches previously used in transcriptomics and other 'omics', such as annotation enrichment analysis. These, together with generic tools for metabolic analysis and visualization not specifically designed for metabolomics, will surely be in the toolbox of researchers performing metabolomic experiments in the near future.

  7. Hesitant fuzzy linguistic multicriteria decision-making method based on generalized prioritized aggregation operator.

    PubMed

    Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2014-01-01

    Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is the multicriteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, which is based on the generalized prioritized aggregation operator of HFLNs. Firstly, the new operations and comparison method for HFLNs are provided and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an illustrative example is given to illustrate the effectiveness and feasibility of the proposed method, which are then compared to the existing approach.

  8. Discrete-time moment closure models for epidemic spreading in populations of interacting individuals.

    PubMed

    Frasca, Mattia; Sharkey, Kieran J

    2016-06-21

    Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
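
    A minimal sketch of the simpler of the two closures, the individual-level (statistical independence) model, is given below: each node carries probabilities of being susceptible, infectious, or removed, and these are updated in discrete time assuming neighbour states are independent. The contact network, transmission probability, and recovery probability are illustrative; the pair-based closure in the paper tracks joint probabilities of node pairs and is not shown here.

```python
# Minimal sketch of a discrete-time, individual-level SIR model closed at the
# level of individuals: node i carries probabilities (S_i, I_i, R_i), and
# neighbour states are treated as statistically independent. Parameters and
# the contact network are illustrative.
import numpy as np

def step(S, I, R, A, tau, gamma):
    """One discrete time step. A is the adjacency matrix, tau the per-contact
    transmission probability, gamma the recovery probability."""
    # Probability that node i escapes infection from all neighbours this step,
    # under the individual-level independence closure.
    escape = np.prod(1.0 - tau * A * I[None, :], axis=1)
    S_new = S * escape
    I_new = I * (1.0 - gamma) + S * (1.0 - escape)
    R_new = R + gamma * I
    return S_new, I_new, R_new

# Small illustrative contact network (ring of 5 nodes), one initial infection.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)
n = A.shape[0]
S, I, R = np.ones(n), np.zeros(n), np.zeros(n)
S[0], I[0] = 0.0, 1.0

for t in range(10):
    S, I, R = step(S, I, R, A, tau=0.3, gamma=0.2)
print("expected prevalence after 10 steps:", float(I.sum()))
```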

  9. Predictive Models for Semiconductor Device Design and Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1998-01-01

    The device feature size continues to be on a downward trend with a simultaneous upward trend in wafer size to 300 mm. Predictive models are needed more than ever before for this reason. At NASA Ames, a Device and Process Modeling effort has been initiated recently with a view to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry, and materials science. This talk outlines these efforts and emphasizes the interaction among the various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts, one based on a quantum mechanics approach and the second on a semiclassical hydrodynamics approach with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequilibrium Green's function (NEGF) approach, a Wigner function approach, and a density matrix approach. In this talk, results using various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching, using first-principles models coupling reactor-level and wafer-level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.

  10. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    PubMed Central

    Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas

    2016-01-01

    To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. PMID:27604782

  11. Towards integrated hygiene and food safety management systems: the Hygieneomic approach.

    PubMed

    Armstrong, G D

    1999-09-15

    Integrated hygiene and food safety management systems in food production can give rise to exceptional improvements in food safety performance, but require high level commitment and full functional involvement. A new approach, named hygieneomics, has been developed to assist management in their introduction of hygiene and food safety systems. For an effective introduction, the management systems must be designed to fit with the current generational state of an organisation. There are, broadly speaking, four generational states of an organisation in their approach to food safety. They comprise: (i) rules setting; (ii) ensuring compliance; (iii) individual commitment; (iv) interdependent action. In order to set up an effective integrated hygiene and food safety management system a number of key managerial requirements are necessary. The most important ones are: (a) management systems must integrate the activities of key functions from research and development through to supply chain and all functions need to be involved; (b) there is a critical role for the senior executive, in communicating policy and standards; (c) responsibilities must be clearly defined, and it should be clear that food safety is a line management responsibility not to be delegated to technical or quality personnel; (d) a thorough and effective multi-level audit approach is necessary; (e) key activities in the system are HACCP and risk management, but it is stressed that these are ongoing management activities, not once-off paper generating exercises; and (f) executive management board level review is necessary of audit results, measurements, status and business benefits.

  12. Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.

    2017-11-01

    Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing second-order Møller-Plesset (MP2) perturbation theory, an accurate yet relatively less expensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2-level geometry optimisation with the correlation-consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single-point energy evaluation at the MP2/aVTZ level to estimate MP2-level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters for n ≤ 8 is confirmed by performing vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. Zero-point energy (ZPE) corrections are applied to all isomers lying within 1 kcal/mol of the lowest-energy one. The resulting frequencies in the N-H stretching region (2900-3500 cm-1) and in the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied to calculate the BEs of these clusters at the coupled-cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus demonstrates what is currently possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.

  13. Joint perceptual decision-making: a case study in explanatory pluralism

    PubMed Central

    Abney, Drew H.; Dale, Rick; Yoshimi, Jeff; Kello, Chris T.; Tylén, Kristian; Fusaroli, Riccardo

    2014-01-01

    Traditionally different approaches to the study of cognition have been viewed as competing explanatory frameworks. An alternative view, explanatory pluralism, regards different approaches to the study of cognition as complementary ways of studying the same phenomenon, at specific temporal and spatial scales, using appropriate methodological tools. Explanatory pluralism has been often described abstractly, but has rarely been applied to concrete cases. We present a case study of explanatory pluralism. We discuss three separate ways of studying the same phenomenon: a perceptual decision-making task (Bahrami et al., 2010), where pairs of subjects share information to jointly individuate an oddball stimulus among a set of distractors. Each approach analyzed the same corpus but targeted different units of analysis at different levels of description: decision-making at the behavioral level, confidence sharing at the linguistic level, and acoustic energy at the physical level. We discuss the utility of explanatory pluralism for describing this complex, multiscale phenomenon, show ways in which this case study sheds new light on the concept of pluralism, and highlight good practices to critically assess and complement approaches. PMID:24795679

  14. Combined expert system/neural networks method for process fault diagnosis

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1995-01-01

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process, including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function-oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach.

  15. Combined expert system/neural networks method for process fault diagnosis

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1995-08-15

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach. 9 figs.

  16. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
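
    For concreteness, the snippet below applies the widely used two-point inverse-cubic extrapolation of the correlation energy, E(X) = E_CBS + A/X^3, to a pair of hypothetical correlation energies at cardinal numbers X = 3 and X = 4. The formula and the numbers are illustrative of the generic extrapolation idea discussed above, not necessarily the specific scheme advocated in the review.

```python
# Sketch of a common two-point extrapolation of the correlation energy,
# E(X) = E_CBS + A / X**3, solved from energies at two cardinal numbers.
# The energies below are hypothetical; this illustrates the generic formula,
# not necessarily the specific scheme advocated in the review.
def cbs_two_point(e_small, x_small, e_large, x_large):
    """Inverse-cubic two-point extrapolation of correlation energies (hartree)."""
    num = e_large * x_large**3 - e_small * x_small**3
    den = x_large**3 - x_small**3
    return num / den

# Hypothetical CCSD(T) correlation energies with aug-cc-pVTZ (X=3) and aug-cc-pVQZ (X=4).
e_tz, e_qz = -0.30912, -0.31545
e_cbs = cbs_two_point(e_tz, 3, e_qz, 4)
print(f"estimated CBS correlation energy: {e_cbs:.5f} hartree")
```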

  17. Semantic enrichment of clinical models towards semantic interoperability. The heart failure summary use case.

    PubMed

    Martínez-Costa, Catalina; Cornet, Ronald; Karlsson, Daniel; Schulz, Stefan; Kalra, Dipak

    2015-05-01

    To improve semantic interoperability of electronic health records (EHRs) by ontology-based mediation across syntactically heterogeneous representations of the same or similar clinical information. Our approach is based on a semantic layer that consists of: (1) a set of ontologies supported by (2) a set of semantic patterns. The first aspect of the semantic layer helps standardize the clinical information modeling task and the second shields modelers from the complexity of ontology modeling. We applied this approach to heterogeneous representations of an excerpt of a heart failure summary. Using a set of finite top-level patterns to derive semantic patterns, we demonstrate that those patterns, or compositions thereof, can be used to represent information from clinical models. Homogeneous querying of the same or similar information, when represented according to heterogeneous clinical models, is feasible. Our approach focuses on the meaning embedded in EHRs, regardless of their structure. This complex task requires a clear ontological commitment (ie, agreement to consistently use the shared vocabulary within some context), together with formalization rules. These requirements are supported by semantic patterns. Other potential uses of this approach, such as clinical models validation, require further investigation. We show how an ontology-based representation of a clinical summary, guided by semantic patterns, allows homogeneous querying of heterogeneous information structures. Whether there are a finite number of top-level patterns is an open question. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. A hybrid wavelet de-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series.

    PubMed

    Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-01-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. Compared to the three other generic methods, the results generated by the WD-RSPA model invariably presented smaller error measures, which means that the forecasting capability of the WD-RSPA model is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.
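
    The WD step, at least in its simplest form, is compact enough to sketch with PyWavelets: decompose the series, soft-threshold the detail coefficients, and reconstruct. The choice of wavelet, the universal-threshold rule, and the synthetic series below are assumptions for illustration, and the RSPA forecasting stage applied to the de-noised series is not shown.

```python
# Minimal sketch of the wavelet de-noising (WD) step with PyWavelets: decompose
# the series, soft-threshold the detail coefficients, and reconstruct. Wavelet,
# threshold rule, and series are illustrative; the RSPA stage is not shown.
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(series)]

# Hypothetical monthly runoff-like series: seasonal signal plus noise.
rng = np.random.default_rng(4)
t = np.arange(240)
series = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1.0, t.size)
denoised = wavelet_denoise(series)
print("rms removed by de-noising:", round(float(np.std(series - denoised)), 2))
```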

  19. Formulating Spatially Varying Performance in the Statistical Fusion Framework

    PubMed Central

    Landman, Bennett A.

    2012-01-01

    To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513

  20. Exponentially Stabilizing Robot Control Laws

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1990-01-01

    New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative (PD) control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method with respect to computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties unknown.
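
    To illustrate the proportional/derivative architecture referred to above, the sketch below implements a generic joint-space PD set-point law with gravity compensation for a hypothetical two-link arm. It shows the control structure only; it is not the specific family of exponentially stabilizing laws derived in the work.

```python
# Generic joint-space PD set-point control law with gravity compensation,
# illustrating the proportional/derivative architecture mentioned above.
# Two-link arm parameters are hypothetical; this is not the specific family
# of exponentially stabilizing laws derived in the paper.
import numpy as np

def pd_control(q, qdot, q_des, Kp, Kd, gravity):
    """tau = Kp (q_des - q) - Kd qdot + g(q)."""
    return Kp @ (q_des - q) - Kd @ qdot + gravity(q)

def gravity_two_link(q, m1=1.0, m2=1.0, l1=0.5, l2=0.5, g=9.81):
    # Gravity torque vector for a planar two-link arm with centered link masses.
    g1 = (m1 * l1 / 2 + m2 * l1) * g * np.cos(q[0]) + m2 * l2 / 2 * g * np.cos(q[0] + q[1])
    g2 = m2 * l2 / 2 * g * np.cos(q[0] + q[1])
    return np.array([g1, g2])

Kp = np.diag([50.0, 30.0])        # proportional gains
Kd = np.diag([10.0, 6.0])         # derivative gains
q = np.array([0.0, 0.0])          # current joint angles (rad)
qdot = np.array([0.0, 0.0])       # current joint velocities (rad/s)
q_des = np.array([0.8, -0.4])     # set point
print("commanded joint torques:", pd_control(q, qdot, q_des, Kp, Kd, gravity_two_link))
```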

  1. TIM Barrel Protein Structure Classification Using Alignment Approach and Best Hit Strategy

    NASA Astrophysics Data System (ADS)

    Chu, Jia-Han; Lin, Chun Yuan; Chang, Cheng-Wen; Lee, Chihan; Yang, Yuh-Shyong; Tang, Chuan Yi

    2007-11-01

    The classification of protein structures is essential for determining their function in bioinformatics. It has been estimated that around 10% of all known enzymes have TIM barrel domains according to the Structural Classification of Proteins (SCOP) database. With its high sequence variation and diverse functionalities, the TIM barrel protein has become an attractive target for protein engineering and for evolution studies. Hence, in this paper, an alignment approach with a best-hit strategy is proposed to classify TIM barrel protein structures at the superfamily and family levels of the SCOP. The approach is also used to perform classification at the class level of the Enzyme nomenclature (ENZYME) database. Two testing data sets, TIM40D and TIM95D, are both used to evaluate this approach. The resulting classification has an overall prediction accuracy of 90.3% for the superfamily level in the SCOP, 89.5% for the family level in the SCOP, and 70.1% for the class level in the ENZYME database. These results demonstrate that the alignment approach with the best-hit strategy is a simple and viable method for TIM barrel protein structure classification, even when only amino acid sequence information is available.

  2. Statistical Test of Expression Pattern (STEPath): a new strategy to integrate gene expression data with genomic information in individual and meta-analysis studies.

    PubMed

    Martini, Paolo; Risso, Davide; Sales, Gabriele; Romualdi, Chiara; Lanfranchi, Gerolamo; Cagnin, Stefano

    2011-04-11

    In the last decades, microarray technology has spread, leading to a dramatic increase of publicly available datasets. The first statistical tools developed were focused on the identification of significantly differentially expressed genes. Later, researchers moved toward the systematic integration of gene expression profiles with additional biological information, such as chromosomal location, ontological annotations or sequence features. The analysis of gene expression linked to the physical location of genes on chromosomes allows the identification of transcriptionally imbalanced regions, while Gene Set Analysis focuses on the detection of coordinated changes in transcriptional levels among sets of biologically related genes. In this field, meta-analysis offers the possibility to compare different studies addressing the same biological question, to fully exploit public gene expression datasets. We describe STEPath, a method that starts from gene expression profiles and integrates the analysis of imbalanced regions as an a priori step before performing gene set analysis. The application of STEPath in individual studies produced gene set scores weighted by chromosomal activation. As a final step, we propose a way to compare these scores across different studies (meta-analysis) on related biological issues. One complication with meta-analysis is batch effects, which occur because molecular measurements are affected by laboratory conditions, reagent lots and personnel differences. Major problems occur when batch effects are correlated with an outcome of interest and lead to incorrect conclusions. We evaluated the power of combining chromosome mapping and gene set enrichment analysis, performing the analysis on a dataset of leukaemia (an example of an individual study) and on a dataset of skeletal muscle diseases (meta-analysis approach). In leukaemia, we identified the Hox gene set, a gene set closely related to the pathology that other gene set analysis algorithms do not identify, while the meta-analysis approach on muscular disease discriminates between related pathologies and correlates similar ones from different studies. STEPath is a new method that integrates gene expression profiles, genomic co-expressed regions and information about the biological function of genes. The use of STEPath-computed gene set scores overcomes batch effects in meta-analysis approaches, allowing the direct comparison of different pathologies and different studies at the level of gene set activation.

  3. Psychosocial interventions in patients with dual diagnosis

    PubMed Central

    Subodh, B.N; Sharma, Nidhi; Shah, Raghav

    2018-01-01

    Management of patients with dual diagnosis (mental illness and substance use disorders) is a challenge. A lack of improvement in either disorder can lead to a relapse in both. The current consensus favours an integrated approach to management, wherein the same team of professionals manages both disorders in the same setting. The role of pharmacotherapy for such dual diagnosis patients is well established, but the non-pharmacological approaches to their management are still evolving. After stabilization of the acute phase of illness, non-pharmacological management takes centre stage. Evidence points to the beneficial effect of psychosocial approaches in maintaining abstinence, adherence to medication, maintenance of a healthy lifestyle, better integration into the community, occupational rehabilitation and an overall improvement in functioning. Psychosocial approaches, although beneficial, are difficult to implement. They require teamwork, involving professionals other than psychiatrists and psychologists alone. These approaches need to be comprehensive and individualized and require training at various levels, which is difficult to achieve in most Indian settings. In this article we provide a brief review of these approaches. PMID:29540920

  4. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images

    PubMed Central

    Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.

    2016-01-01

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512x512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600 tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7x10^6 tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered. PMID:27532012
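
    A hedged sketch of step (2), tile-level model selection with nested cross-validation, is given below using scikit-learn; the random feature matrix, label names, classifier, and parameter grid are placeholders, not the 461 features or the models used by the authors.

    ```python
    # Sketch of tile-level model selection with nested cross-validation, assuming the
    # feature matrix has already been extracted; classifier and grid are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 461))      # stand-in for 600 annotated tiles x 461 features
    y = rng.integers(0, 3, size=600)     # stand-in labels: tumor / stroma / necrosis

    inner = GridSearchCV(RandomForestClassifier(random_state=0),
                         param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
                         cv=3)
    outer_scores = cross_val_score(inner, X, y, cv=5)  # nested-CV estimate of tile accuracy
    print(outer_scores.mean())
    ```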

  5. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images.

    PubMed

    Hoffman, R A; Kothari, S; Phan, J H; Wang, M D

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512x512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600 tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7x10^6 tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered.

  6. A model-driven approach to information security compliance

    NASA Astrophysics Data System (ADS)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. After embedding in this model the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  7. Experiments in Reconstructing Twentieth-Century Sea Levels

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Douglas, Bruce C.

    2011-01-01

    One approach to reconstructing historical sea level from the relatively sparse tide-gauge network is to employ Empirical Orthogonal Functions (EOFs) as interpolatory spatial basis functions. The EOFs are determined from independent global data, generally sea-surface heights from either satellite altimetry or a numerical ocean model. The problem is revisited here for sea level since 1900. A new approach to handling the tide-gauge datum problem by direct solution offers possible advantages over the method of integrating sea-level differences, with the potential of eventually adjusting datums into the global terrestrial reference frame. The resulting time series of global mean sea levels appears fairly insensitive to the adopted set of EOFs. In contrast, charts of regional sea level anomalies and trends are very sensitive to the adopted set of EOFs, especially for the sparser network of gauges in the early 20th century. The reconstructions appear especially suspect before 1950 in the tropical Pacific. While this limits some applications of the sea-level reconstructions, the sensitivity does appear adequately captured by formal uncertainties. All our solutions show regional trends over the past five decades to be fairly uniform throughout the global ocean, in contrast to trends observed over the shorter altimeter era. Consistent with several previous estimates, the global sea-level rise since 1900 is 1.70 +/- 0.26 mm/yr. The global trend since 1995 exceeds 3 mm/yr which is consistent with altimeter measurements, but this large trend was possibly also reached between 1935 and 1950.
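
    The sketch below illustrates the generic EOF-interpolation idea (derive EOFs from a dense field, then fit their amplitudes to sparse gauge anomalies by least squares); the data are synthetic and the direct datum solution discussed in the abstract is not modeled.

    ```python
    # Hedged sketch of EOF-based reconstruction: EOFs from a dense altimetry-like field,
    # amplitudes fitted by least squares to sparse "tide gauge" anomalies.
    import numpy as np

    rng = np.random.default_rng(1)
    field = rng.normal(size=(240, 500))            # months x grid points (synthetic stand-in)
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    eofs = Vt[:8]                                  # leading 8 spatial EOFs

    gauge_idx = rng.choice(500, size=40, replace=False)   # 40 "tide gauge" locations
    gauge_anom = anom[-1, gauge_idx]                       # one month of gauge anomalies

    # Solve for EOF amplitudes that best fit the gauges, then map back to the full grid.
    A = eofs[:, gauge_idx].T                       # (40 gauges x 8 modes)
    amps, *_ = np.linalg.lstsq(A, gauge_anom, rcond=None)
    reconstructed = amps @ eofs                    # reconstructed anomaly field for that month
    ```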

  8. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    PubMed

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  9. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  10. Successful adaptation of three-dimensional inversion methodologies for archaeological-scale, total-field magnetic data sets

    NASA Astrophysics Data System (ADS)

    Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.

    2015-08-01

    Despite the development of advanced processing and interpretation tools for magnetic data sets in the mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very-near-surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes the interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.

  11. Robotic Access to Planetary Surfaces Capability Roadmap

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A set of robotic access to planetary surfaces capability developments and supporting infrastructure have been identified. Reference mission pulls derived from ongoing strategic planning. Capability pushes to enable broader mission considerations. Facility and flight test capability needs. Those developments have been described to the level of detail needed for high-level planning. Content and approach. Readiness and metrics. Rough schedule and cost. Connectivity to mission concepts.

  12. A Constructivist Approach to Game-Based Language Learning: Student Perceptions in a Beginner-Level EFL Context

    ERIC Educational Resources Information Center

    York, James; deHaan, Jonathan William

    2018-01-01

    This article provides information on an action research project in a low-level EFL setting in Japan. The project aims were to (1) foster spoken communication skills and (2) help students engage with their own learning. The project investigated the applicability of board games as a mediating tool for authentic communication as part of a wider TBLT…

  13. Graph coarse-graining reveals differences in the module-level structure of functional brain networks.

    PubMed

    Kujala, Rainer; Glerean, Enrico; Pan, Raj Kumar; Jääskeläinen, Iiro P; Sams, Mikko; Saramäki, Jari

    2016-11-01

    Networks have become a standard tool for analyzing functional magnetic resonance imaging (fMRI) data. In this approach, brain areas and their functional connections are mapped to the nodes and links of a network. Even though this mapping reduces the complexity of the underlying data, it remains challenging to understand the structure of the resulting networks due to the large number of nodes and links. One solution is to partition networks into modules and then investigate the modules' composition and relationship with brain functioning. While this approach works well for single networks, understanding differences between two networks by comparing their partitions is difficult and alternative approaches are thus necessary. To this end, we present a coarse-graining framework that uses a single set of data-driven modules as a frame of reference, enabling one to zoom out from the node- and link-level details. As a result, differences in the module-level connectivity can be understood in a transparent, statistically verifiable manner. We demonstrate the feasibility of the method by applying it to networks constructed from fMRI data recorded from 13 healthy subjects during rest and movie viewing. While independently partitioning the rest and movie networks is shown to yield little insight, the coarse-graining framework enables one to pinpoint differences in the module-level structure, such as the increased number of intra-module links within the visual cortex during movie viewing. In addition to quantifying differences due to external stimuli, the approach could also be applied in clinical settings, such as comparing patients with healthy controls. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
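
    The sketch below illustrates the coarse-graining idea on synthetic data: given one reference partition of nodes into modules, a node-level adjacency matrix is collapsed into module-level link counts so that two conditions can be compared module by module; the partition, matrices, and module count are stand-ins.

    ```python
    # Minimal sketch of graph coarse-graining: collapse node-level links into
    # module-level link counts using a fixed reference partition. Synthetic data.
    import numpy as np

    def module_link_counts(adj, labels, n_modules):
        """Sum links within and between modules for a symmetric binary adjacency matrix."""
        counts = np.zeros((n_modules, n_modules))
        for a in range(n_modules):
            for b in range(n_modules):
                block = adj[np.ix_(labels == a, labels == b)]
                # Count each within-module link once, cross-module links block-wise.
                counts[a, b] = np.triu(block, 1).sum() if a == b else block.sum()
        return counts

    rng = np.random.default_rng(2)
    labels = rng.integers(0, 4, size=90)                       # reference partition of 90 nodes
    rest = np.triu((rng.random((90, 90)) < 0.10).astype(int), 1); rest = rest + rest.T
    movie = np.triu((rng.random((90, 90)) < 0.12).astype(int), 1); movie = movie + movie.T
    diff = module_link_counts(movie, labels, 4) - module_link_counts(rest, labels, 4)
    ```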

  14. Chemometric analysis of soil pollution data using the Tucker N-way method.

    PubMed

    Stanimirova, I; Zehl, K; Massart, D L; Vander Heyden, Y; Einax, J W

    2006-06-01

    N-way methods, particularly the Tucker method, are often the methods of choice when analyzing data sets arranged in three- (or higher) way arrays, which is the case for most environmental data sets. In the future, applying N-way methods will become an increasingly popular way to uncover hidden information in complex data sets. The reason for this is that classical two-way approaches such as principal component analysis are not as good at revealing the complex relationships present in data sets. This study describes in detail the application of a chemometric N-way approach, namely the Tucker method, in order to evaluate the level of pollution in soil from a contaminated site. The analyzed soil data set was five-way in nature. The samples were collected at different depths (way 1) from two locations (way 2) and the levels of thirteen metals (way 3) were analyzed using a four-step-sequential extraction procedure (way 4), allowing detailed information to be obtained about the bioavailability and activity of the different binding forms of the metals. Furthermore, the measurements were performed under two conditions (way 5), inert and non-inert. The preferred Tucker model of definite complexity showed that there was no significant difference in measurements analyzed under inert or non-inert conditions. It also allowed two depth horizons, characterized by different accumulation pathways, to be distinguished, and it allowed the relationships between chemical elements and their biological activities and mobilities in the soil to be described in detail.
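
    A minimal sketch of a Tucker decomposition of a five-way array with the same layout as the soil data (depth x location x metal x extraction step x condition) is shown below using TensorLy; the array is random and the chosen ranks are illustrative, not the model complexity selected in the study.

    ```python
    # Sketch of a Tucker decomposition of a five-way array; data and ranks are placeholders.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker

    data = tl.tensor(np.random.default_rng(3).normal(size=(10, 2, 13, 4, 2)))
    core, factors = tucker(data, rank=[3, 2, 4, 2, 1])
    # 'factors' holds one loading matrix per way; inspecting the condition-way loadings
    # (factors[4]) is the kind of step used to judge whether the two measurement
    # conditions (inert vs. non-inert) differ.
    print(core.shape, [f.shape for f in factors])
    ```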

  15. 'I'm not an outsider, I'm his mother!' A phenomenological enquiry into carer experiences of exclusion from acute psychiatric settings.

    PubMed

    Wilkinson, Claire; McAndrew, Sue

    2008-12-01

    Contemporary standards and policies advocate carer involvement in planning, implementing, and evaluating mental health services. Critics have questioned why such standards and policies fail to move from rhetoric to reality, this particularly being applicable to carer involvement within acute psychiatric settings. As there is only limited UK research on this topic, this interpretive phenomenological study was undertaken to explore the perceived level of involvement from the perspective of carers of service users who were admitted to acute inpatient settings within the previous 2 years. Interviews were conducted with four individuals who cared for a loved one with a mental illness. The interview analysis was influenced by Van Manen, whose interpretive approach seeks to generate a deeper understanding of the phenomenon under study. Four main themes emerged: powerlessness, feeling isolated, needing to be recognized and valued, and a desire for partnership. The findings reflect the views expressed by carers in other studies, identifying that while carers seek to work in partnership with health-care professionals, at a clinical level they often feel excluded. The study concludes by discussing ways of improving and promoting carer involvement and advocating a partnership in care approach within acute psychiatry.

  16. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical expert systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  17. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    PubMed

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  18. Co-Instructing at the Secondary Level: Strategies for Success

    ERIC Educational Resources Information Center

    Rice, Nancy; Drame, Elizabeth; Owens, Laura; Frattura, Elise M.

    2007-01-01

    Much has been written about collaboration and co-teaching in secondary settings, including the importance of understanding one's partner's teaching approach; determining readiness to co-teach; clarifying roles, responsibilities, and expectations; scheduling shared planning time; and effective communication, including constructive dialogue and…

  19. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems concerned with estimating functions that reflect the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  20. Comparative shotgun proteomics using spectral count data and quasi-likelihood modeling.

    PubMed

    Li, Ming; Gray, William; Zhang, Haixia; Chung, Christine H; Billheimer, Dean; Yarbrough, Wendell G; Liebler, Daniel C; Shyr, Yu; Slebos, Robbert J C

    2010-08-06

    Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography-tandem mass spectrometry (LC-MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher's Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue phenotypes are sufficiently rich in quantitative information and that statistically significant differences in protein spectral counts reflect the underlying biology of the samples.
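
    A hedged sketch of the underlying idea, a quasi-Poisson GLM on spectral counts for a single protein across two groups, is given below with statsmodels; this is not the QuasiTel implementation, and the counts, offsets, and group labels are synthetic.

    ```python
    # Sketch of quasi-likelihood (quasi-Poisson) modeling of spectral counts for one protein.
    import numpy as np
    import statsmodels.api as sm

    counts = np.array([12, 9, 15, 11, 25, 31, 28, 22])   # spectral counts per run (synthetic)
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])           # 0 = normal, 1 = tumor
    total = np.array([5.0e4, 4.8e4, 5.2e4, 5.1e4, 5.3e4, 5.0e4, 4.9e4, 5.2e4])  # spectra per run

    X = sm.add_constant(group)
    model = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(total))
    fit = model.fit(scale="X2")   # Pearson-chi2 scale -> quasi-Poisson standard errors
    print(fit.summary())
    ```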

  1. Comparative Shotgun Proteomics Using Spectral Count Data and Quasi-Likelihood Modeling

    PubMed Central

    2010-01-01

    Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography−tandem mass spectrometry (LC−MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher’s Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography−multiple reaction monitoring mass spectrometry (LC−MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue phenotypes are sufficiently rich in quantitative information and that statistically significant differences in proteins spectral counts reflect the underlying biology of the samples. PMID:20586475

  2. [Uncertainty characterization approaches for ecological risk assessment of polycyclic aromatic hydrocarbon in Taihu Lake].

    PubMed

    Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian

    2012-04-01

    Probabilistic approaches, such as Monte Carlo Sampling (MCS) and Latin Hypercube Sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory and variance propagation, were used to characterize uncertainties associated with risk assessment of ΣPAH8 in surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of hazard quotients of ΣPAH8 in surface waters of Taihu Lake. The probability distributions of the hazard quotient obtained from MCS and LHS indicated that the confidence intervals of the hazard quotient at the 90% confidence level were 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval at the 90% confidence level was (0.0015, 0.0163) calculated using fuzzy set theory, and (0.00016, 0.88) based on variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms was low. Each method is based on a different theory and has its own advantages and limitations; therefore, the appropriate method should be selected case by case to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in surface water of Taihu Lake, providing an important scientific foundation for risk management and control of organic pollutants in water.
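
    The sketch below illustrates the Monte Carlo style of calculation, drawing exposure and toxicity values from assumed distributions, forming the hazard quotient HQ = exposure/toxicity, and reporting a 90% interval and exceedance probability; the distributions and parameters are placeholders, not the Taihu Lake data.

    ```python
    # Illustrative Monte Carlo sketch of a hazard-quotient calculation; distributions
    # and parameters are assumptions, not the study data.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    exposure = rng.lognormal(mean=np.log(20.0), sigma=1.0, size=n)    # ng/L, assumed lognormal
    toxicity = rng.lognormal(mean=np.log(400.0), sigma=0.8, size=n)   # ng/L, assumed lognormal
    hq = exposure / toxicity

    lo, hi = np.percentile(hq, [5, 95])          # 90% confidence interval of HQ
    p_exceed = (hq > 1.0).mean()                 # probability that HQ exceeds 1
    print(f"90% CI: ({lo:.4f}, {hi:.4f}), P(HQ > 1) = {p_exceed:.2%}")
    ```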

  3. Looking Ahead Toward Community-Level Strategies to Prevent Sexual Violence

    PubMed Central

    DeGue, Sarah; Holt, Melissa K.; Massetti, Greta M.; Matjasko, Jennifer L.; Tharp, Andra Teten; Valle, Linda Anne

    2018-01-01

    The Division of Violence Prevention within CDC’s National Center for Injury Prevention and Control recently undertook a systematic review of primary prevention strategies for sexual violence (SV) perpetration. This review identified the lack of community-level strategies to prevent SV as a critical gap in the literature. Community-level strategies function by modifying the characteristics of settings (e.g., schools, workplaces, neighborhoods) that increase the risk for violence victimization and perpetration. Identification of evidence-based strategies at the community level would allow implementation of ecologic approaches to SV prevention with a greater potential for reducing the prevalence of SV perpetration. The field will face several challenges in identifying and evaluating the effectiveness of promising community-level strategies to prevent SV. These challenges include limited knowledge of community-level and societal-level risk factors for SV, a lack of theoretical or empirical guidance in the SV literature for identification of promising community-level approaches, and challenges in evaluating SV outcomes at the community level. Recognition of these challenges should guide future research and foster dialogue within the SV prevention field. The development and evaluation of community-level approaches to SV prevention represent a vital and logical next step toward the implementation of effective, multilevel prevention efforts and a population-level reduction in the prevalence of SV. PMID:22185587

  4. A hybrid approach of gene sets and single genes for the prediction of survival risks with gene expression data.

    PubMed

    Seok, Junhee; Davis, Ronald W; Xiao, Wenzhong

    2015-01-01

    Accumulated biological knowledge is often encoded as gene sets, collections of genes associated with similar biological functions or pathways. The use of gene sets in the analyses of high-throughput gene expression data has been intensively studied and applied in clinical research. However, the main interest remains in finding modules of biological knowledge, or corresponding gene sets, significantly associated with disease conditions. Risk prediction from censored survival times using gene sets hasn't been well studied. In this work, we propose a hybrid method that uses both single gene and gene set information together to predict patient survival risks from gene expression profiles. In the proposed method, gene sets provide context-level information that is poorly reflected by single genes. Complementarily, single genes help to supplement incomplete information of gene sets due to our imperfect biomedical knowledge. Through the tests over multiple data sets of cancer and trauma injury, the proposed method showed robust and improved performance compared with the conventional approaches with only single genes or gene sets solely. Additionally, we examined the prediction result in the trauma injury data, and showed that the modules of biological knowledge used in the prediction by the proposed method were highly interpretable in biology. A wide range of survival prediction problems in clinical genomics is expected to benefit from the use of biological knowledge.

  5. A Hybrid Approach of Gene Sets and Single Genes for the Prediction of Survival Risks with Gene Expression Data

    PubMed Central

    Seok, Junhee; Davis, Ronald W.; Xiao, Wenzhong

    2015-01-01

    Accumulated biological knowledge is often encoded as gene sets, collections of genes associated with similar biological functions or pathways. The use of gene sets in the analyses of high-throughput gene expression data has been intensively studied and applied in clinical research. However, the main interest remains in finding modules of biological knowledge, or corresponding gene sets, significantly associated with disease conditions. Risk prediction from censored survival times using gene sets hasn’t been well studied. In this work, we propose a hybrid method that uses both single gene and gene set information together to predict patient survival risks from gene expression profiles. In the proposed method, gene sets provide context-level information that is poorly reflected by single genes. Complementarily, single genes help to supplement incomplete information of gene sets due to our imperfect biomedical knowledge. Through the tests over multiple data sets of cancer and trauma injury, the proposed method showed robust and improved performance compared with the conventional approaches with only single genes or gene sets solely. Additionally, we examined the prediction result in the trauma injury data, and showed that the modules of biological knowledge used in the prediction by the proposed method were highly interpretable in biology. A wide range of survival prediction problems in clinical genomics is expected to benefit from the use of biological knowledge. PMID:25933378

  6. Approaches, tools and methods used for setting priorities in health research in the 21st century

    PubMed Central

    Yoshida, Sachiyo

    2016-01-01

    Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001–2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion (“consultation process”) but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face–to–face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed–indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well–defined structure – such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix – it is likely that the Delphi method and non–replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods. PMID:26401271

  7. Approaches, tools and methods used for setting priorities in health research in the 21(st) century.

    PubMed

    Yoshida, Sachiyo

    2016-06-01

    Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods.

  8. Patients' satisfaction ratings and their desire for care improvement across oncology settings from France, Italy, Poland and Sweden.

    PubMed

    Brédart, A; Robertson, C; Razavi, D; Batel-Copel, L; Larsson, G; Lichosik, D; Meyza, J; Schraub, S; von Essen, L; de Haes, J C J M

    2003-01-01

    There has been increasing interest in patient satisfaction assessment across nations recently. This paper reports on a cross-cultural comparison of the comprehensive assessment of satisfaction with care (CASC) response scales. We investigated what proportion of patients wanted care improvement for the same level of satisfaction across samples from oncology settings in France, Italy, Poland and Sweden, and whether age, gender, education level and type of items affected the relationships found. The CASC addresses patients' satisfaction with the care received in oncology hospitals. Patients are invited to rate aspects of care and to mention, for each of these aspects, whether they would want improvement. One hundred and forty, 395, 186 and 133 consecutive patients were approached in oncology settings from France, Italy, Poland and Sweden, respectively. Across country settings, an increasing percentage of patients wanted care improvement for decreasing levels of satisfaction. However, in France a higher percentage of patients wanted care improvement for high-satisfaction ratings, whereas in Poland a lower percentage of patients wanted care improvement for low-satisfaction ratings. Age and education level had a similar effect across countries. Confronting levels of satisfaction with desire for care improvement appeared useful in understanding the meaning of response choice labels for the CASC across oncology settings from different linguistic and cultural backgrounds. Linguistic or socio-cultural differences were suggested as explanations for discrepancies between countries. Copyright 2002 John Wiley & Sons, Ltd.

  9. Bayesian Decision Support

    NASA Astrophysics Data System (ADS)

    Berliner, M.

    2017-12-01

    Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, for organizing and communicating our analyses. Bayesian hierarchical modeling is valuable for quantifying and managing uncertainty in such cases. I review some aspects of the idea, emphasizing statistical model development and use in the context of sea-level rise.

  10. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.

  11. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
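
    A minimal harmony-search loop is sketched below; the objective is a stand-in for the leakage-plus-penalty function that would in practice be evaluated through an EPANET hydraulic simulation, and the HS parameters (harmony memory size, HMCR, PAR) are illustrative.

    ```python
    # Minimal harmony-search sketch for choosing valve settings; the objective is a toy
    # stand-in for a leakage objective with pressure-constraint penalties.
    import numpy as np

    def harmony_search(objective, n_vars, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        memory = rng.uniform(lo, hi, size=(hms, n_vars))          # harmony memory
        scores = np.array([objective(h) for h in memory])
        for _ in range(iters):
            new = np.empty(n_vars)
            for j in range(n_vars):
                if rng.random() < hmcr:                           # pick from memory...
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:                        # ...with pitch adjustment
                        new[j] += rng.normal(0, 0.05 * (hi - lo))
                else:                                             # or improvise randomly
                    new[j] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            score = objective(new)
            worst = scores.argmax()
            if score < scores[worst]:                             # replace the worst harmony
                memory[worst], scores[worst] = new, score
        return memory[scores.argmin()], scores.min()

    # Toy objective standing in for leakage volume as a function of three valve settings.
    best, val = harmony_search(lambda x: np.sum((x - 0.4) ** 2), n_vars=3, bounds=(0.0, 1.0))
    ```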

  12. A global × global test for testing associations between two large sets of variables.

    PubMed

    Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J

    2017-01-01

    In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy-number-regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Comparative genomic analysis by microbial COGs self-attraction rate.

    PubMed

    Santoni, Daniele; Romano-Spica, Vincenzo

    2009-06-21

    Whole genome analysis provides new perspectives for determining phylogenetic relationships among microorganisms. The availability of whole nucleotide sequences allows different levels of comparison among genomes by several approaches. In this work, self-attraction rates were considered for each class of clusters of orthologous groups of proteins (COGs) in order to analyse gene aggregation levels in physical maps. Phylogenetic relationships among microorganisms were obtained by comparing self-attraction coefficients. Eighteen-dimensional vectors were computed for a set of 168 completely sequenced microbial genomes (19 archaea, 149 bacteria). The components of the vector represent the aggregation rate of the genes belonging to each of the 18 COG classes. Genes involved in nonessential functions or related to environmental conditions showed the highest aggregation rates. On the contrary, genes involved in basic cellular tasks showed a more uniform distribution along the genome, except for translation genes. The self-attraction clustering approach allowed classification of Proteobacteria, Bacilli and other species belonging to Firmicutes. Rearrangement and lateral gene transfer events may influence divergences from classical taxonomy. Each set of COG class aggregation values represents an intrinsic property of the microbial genome. This novel approach provides a new point of view for whole genome analysis and bacterial characterization.

  14. An approach to integrating interprofessional education in collaborative mental health care.

    PubMed

    Curran, Vernon; Heath, Olga; Adey, Tanis; Callahan, Terrance; Craig, David; Hearn, Taryn; White, Hubert; Hollett, Ann

    2012-03-01

    This article describes an evaluation of a curriculum approach to integrating interprofessional education (IPE) in collaborative mental health practice across the pre- to post-licensure continuum of medical education. A systematic evaluation of IPE activities was conducted, utilizing a combination of evaluation study designs, including: pretest-posttest control group; one-group pre-test-post-test; and one-shot case study. Participant satisfaction, attitudes toward teamwork, and self-reported teamwork abilities were key evaluative outcome measures. IPE in collaborative mental health practice was well received at both the pre- and post-licensure levels. Satisfaction scores were very high, and students, trainees, and practitioners welcomed the opportunity to learn about collaboration in the context of mental health. Medical student satisfaction increased significantly with the introduction of standardized patients (SPs) as an interprofessional learning method. Medical students and faculty reported that experiential learning in practice-based settings is a key component of effective approaches to IPE implementation. At a post-licensure level, practitioners reported significant improvement in attitudes toward interprofessional collaboration in mental health care after participation in IPE. IPE in collaborative mental health is feasible, and mental health settings offer practical and useful learning experiences for students, trainees, and practitioners in interprofessional collaboration.

  15. What carries a mediation process? Configural analysis of mediation.

    PubMed

    von Eye, Alexander; Mun, Eun Young; Mair, Patrick

    2009-09-01

    Mediation is a process that links a predictor and a criterion via a mediator variable. Mediation can be full or partial. This well-established definition operates at the level of variables even if they are categorical. In this article, two new approaches to the analysis of mediation are proposed. Both of these approaches focus on the analysis of categorical variables. The first involves mediation analysis at the level of configurations instead of variables. Thus, mediation can be incorporated into the arsenal of methods of analysis for person-oriented research. Second, it is proposed that Configural Frequency Analysis (CFA) can be used for both exploration and confirmation of mediation relationships among categorical variables. The implications of using CFA are first that mediation hypotheses can be tested at the level of individual configurations instead of variables. Second, this approach leaves the door open for different types of mediation processes to exist within the same set. Using a data example, it is illustrated that aggregate-level analysis can overlook mediation processes that operate at the level of individual configurations.

  16. Machine learning algorithms for modeling groundwater level changes in agricultural regions of the U.S.

    DOE PAGES

    Sahoo, S.; Russo, T. A.; Elliott, J.; ...

    2017-05-13

    Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. Here, we conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.
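
    A hedged sketch of the Multilayer Perceptron step is given below with scikit-learn; the predictors and groundwater-level changes are synthetic placeholders, and the input-selection stage (mutual information, genetic algorithms, lag analysis) is not shown.

    ```python
    # Sketch of an MLP mapping selected predictors to seasonal groundwater-level change;
    # the predictors and targets are synthetic stand-ins.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.normal(size=(800, 6))     # e.g. precipitation, temperature, irrigation demand, lags
    y = 0.8 * X[:, 2] - 0.3 * X[:, 0] + 0.1 * rng.normal(size=800)   # synthetic level change (m)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print("R on held-out data:", np.corrcoef(mlp.predict(X_te), y_te)[0, 1])
    ```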

  17. Machine learning algorithms for modeling groundwater level changes in agricultural regions of the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahoo, S.; Russo, T. A.; Elliott, J.

    Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. Here, we conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.

  18. Predicting Success in ISCS Level II.

    ERIC Educational Resources Information Center

    McDuffie, Thomas E., Jr.

    1979-01-01

    Investigates a method to predict best and least suited students for the ISCS instructional approach. Aptitude-treatment interactions associated with ISCS instruction and a set of aptitude, attitude, and skill factors were utilized to make and verify predictions on two dependent variables--achievement and success. (Author/GA)

  19. Machine learning for real time remote detection

    NASA Astrophysics Data System (ADS)

    Labbé, Benjamin; Fournier, Jérôme; Henaff, Gilles; Bascle, Bénédicte; Canu, Stéphane

    2010-10-01

    Infrared systems are key to providing enhanced capability to military forces such as automatic control of threats and prevention from air, naval and ground attacks. Key requirements for such a system to produce operational benefits are real-time processing as well as high efficiency in terms of detection and false alarm rate. These are serious issues since the system must deal with a large number of objects and categories to be recognized (small vehicles, armored vehicles, planes, buildings, etc.). Statistical learning based algorithms are promising candidates to meet these requirements when using selected discriminant features and real-time implementation. This paper proposes a new decision architecture benefiting from recent advances in machine learning by using an effective method for level set estimation. While building decision function, the proposed approach performs variable selection based on a discriminative criterion. Moreover, the use of level set makes it possible to manage rejection of unknown or ambiguous objects thus preserving the false alarm rate. Experimental evidences reported on real world infrared images demonstrate the validity of our approach.
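
    The abstract does not specify the level set estimator used for rejection; as a loosely related illustration only, the sketch below substitutes a one-class SVM, which estimates a density level set of known-object features and rejects queries falling outside it. All data and parameters are invented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # known objects
query_features = rng.normal(loc=3.0, scale=1.0, size=(10, 8))    # possibly unknown

# nu bounds the fraction of training points left outside the estimated level
# set, which indirectly controls the false alarm rate.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train_features)
accepted = detector.predict(query_features) == 1   # -1 means "reject as unknown"
print(accepted)
```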

  20. A biochemical approach to identifying microRNA targets

    PubMed Central

    Karginov, Fedor V.; Conaco, Cecilia; Xuan, Zhenyu; Schmidt, Bryan H.; Parker, Joel S.; Mandel, Gail; Hannon, Gregory J.

    2007-01-01

    Identifying the downstream targets of microRNAs (miRNAs) is essential to understanding cellular regulatory networks. We devised a direct biochemical method for miRNA target discovery that combined RNA-induced silencing complex (RISC) purification with microarray analysis of bound mRNAs. Because targets of miR-124a have been analyzed, we chose it as our model. We honed our approach both by examining the determinants of stable binding between RISC and synthetic target RNAs in vitro and by determining the dependency of both repression and RISC coimmunoprecipitation on miR-124a seed sites in two of its well characterized targets in vivo. Examining the complete spectrum of miR-124 targets in 293 cells yielded both a set that were down-regulated at the mRNA level, as previously observed, and a set whose mRNA levels were unaffected by miR-124a. Reporter assays validated both classes, extending the spectrum of mRNA targets that can be experimentally linked to the miRNA pathway. PMID:18042700

  1. Quantifying the probability of record-setting heat events in the historical record and at different levels of climate forcing

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, N. S.

    2017-12-01

Severe heat provides one of the most direct, acute, and rapidly changing impacts of climate on people and ecosystems. Theory, historical observations, and climate model simulations all suggest that global warming should increase the probability of hot events that fall outside of our historical experience. Given the acute impacts of extreme heat, quantifying the probability of historically unprecedented hot events at different levels of climate forcing is critical for climate adaptation and mitigation decisions. However, in practice that quantification presents a number of methodological challenges. This presentation will review those methodological challenges, including the limitations of the observational record and of climate model fidelity. The presentation will detail a comprehensive approach to addressing these challenges. It will then demonstrate the application of that approach to quantifying uncertainty in the probability of record-setting hot events in the current climate, as well as periods with lower and higher greenhouse gas concentrations than the present.

  2. Synthesis and Control of Flexible Systems with Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Lim, Kyong B.

    2009-01-01

    An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.

  3. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardisty, M.; Gordon, L.; Agarwal, P.

    2007-08-15

Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.
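
    As a hedged illustration of the registration component only (not the published algorithm), the sketch below runs SimpleITK's demons filter on synthetic images and applies the recovered deformation; the atlas labels and level set refinement described in the record are omitted, and all inputs are placeholders.

```python
import SimpleITK as sitk

# Synthetic stand-ins for the patient image and the atlas (a shifted blob).
fixed = sitk.GaussianSource(sitk.sitkFloat32, [64, 64], [8, 8], [32, 32])
moving = sitk.GaussianSource(sitk.sitkFloat32, [64, 64], [8, 8], [28, 30])

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)      # smoothing of the deformation field
displacement_field = demons.Execute(fixed, moving)

# The recovered deformation can then warp atlas labels into patient space,
# where a level set refinement would follow in the paper's pipeline.
transform = sitk.DisplacementFieldTransform(displacement_field)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
print(sitk.GetArrayFromImage(warped).shape)
```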

  4. Setting action levels for drinking water: are we protecting our health or our economy (or our backs!)?

    PubMed

    Reimann, Clemens; Banks, David

    2004-10-01

    Clean and healthy drinking water is important for life. Drinking water can be drawn from streams, lakes and rivers, directly collected (and stored) from rain, acquired by desalination of ocean water and melting of ice or it can be extracted from groundwater resources. Groundwater may reach the earth's surface in the form of springs or can be extracted via dug or drilled wells; it also contributes significantly to river baseflow. Different water quality issues have to be faced when utilising these different water resources. Some of these are at present largely neglected in water quality regulations. This paper focuses on the inorganic chemical quality of natural groundwater. Possible health effects, the problems of setting meaningful action levels or maximum admissible concentrations (MAC-values) for drinking water, and potential shortcomings in current legislation are discussed. An approach to setting action levels based on transparency, toxicological risk assessment, completeness, and identifiable responsibility is suggested.

  5. Wavelet energy-guided level set-based active contour: a segmentation method to segment highly similar regions.

    PubMed

    Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi

    2010-07-01

    This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have similar intensity distributions with surrounding/adjacent regions. In this work, we adapt a feature computed from wavelet transform called wavelet energy to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions demonstrating the characteristics of intra-region intensity variations and having high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts and to provide a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.
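
    A minimal sketch of the wavelet energy feature itself follows, under the assumption that it is computed as the normalised sum of squared detail coefficients; the wavelet choice, decomposition level, and test region are placeholders, not the WELSAC settings.

```python
import numpy as np
import pywt

def wavelet_energy(region, wavelet="db2", level=2):
    """Sum of squared detail coefficients, normalised by region size."""
    coeffs = pywt.wavedec2(region, wavelet=wavelet, level=level)
    detail_energy = sum(np.sum(band ** 2)
                        for detail in coeffs[1:] for band in detail)
    return detail_energy / region.size

region = np.random.rand(64, 64)   # placeholder for a CT sub-region
print(wavelet_energy(region))
```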

  6. Benchmarking Hydrogen and Carbon NMR Chemical Shifts at HF, DFT, and MP2 Levels.

    PubMed

    Flaig, Denis; Maurer, Marina; Hanni, Matti; Braunger, Katharina; Kick, Leonhard; Thubauville, Matthias; Ochsenfeld, Christian

    2014-02-11

    An extensive study of error distributions for calculating hydrogen and carbon NMR chemical shifts at Hartree-Fock (HF), density functional theory (DFT), and Møller-Plesset second-order perturbation theory (MP2) levels is presented. Our investigation employs accurate CCSD(T)/cc-pVQZ calculations for providing reference data for 48 hydrogen and 40 carbon nuclei within an extended set of chemical compounds covering a broad range of the NMR scale with high relevance to chemical applications, especially in organic chemistry. Besides the approximations of HF, a variety of DFT functionals, and conventional MP2, we also present results with respect to a spin component-scaled MP2 (GIAO-SCS-MP2) approach. For each method, the accuracy is analyzed in detail for various basis sets, allowing identification of efficient combinations of method and basis set approximations.

  7. Measuring and Specifying Combinatorial Coverage of Test Input Configurations

    PubMed Central

    Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu

    2015-01-01

    A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
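
    For readers unfamiliar with the metric, a small illustrative implementation of static t-way combinatorial coverage follows; the parameter domains and test set are invented, and the paper's covering-array machinery is not reproduced.

```python
from itertools import combinations, product

def t_way_coverage(tests, domains, t):
    """Fraction of all t-way value combinations that appear in at least one test."""
    covered, total = 0, 0
    for cols in combinations(range(len(domains)), t):
        all_combos = set(product(*(domains[c] for c in cols)))
        seen = {tuple(test[c] for c in cols) for test in tests}
        covered += len(all_combos & seen)
        total += len(all_combos)
    return covered / total

domains = [[0, 1], [0, 1], [0, 1, 2]]           # three parameters and their values
tests = [(0, 0, 0), (1, 1, 1), (0, 1, 2)]       # candidate test set
print(f"2-way coverage: {t_way_coverage(tests, domains, 2):.2f}")
```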

  8. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by downsampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU- and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set utilizing only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and preintegrated transfer functions in a very straightforward manner with virtually no extra costs. Our interactive volume ray casting implementation allows high quality visualizations of massive volume data sets of tens of Gigabytes in size on standard desktop workstations.
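
    The following toy sketch (an assumption-laden simplification, not the paper's renderer) shows one way a distance-dependent working set of bricks and resolution levels might be chosen under a memory budget.

```python
import math

def select_working_set(brick_centers, viewer, budget_bricks, max_level):
    """Pick a resolution level per brick, nearest bricks first, within a budget."""
    working_set = []
    for center in sorted(brick_centers, key=lambda c: math.dist(c, viewer)):
        if len(working_set) >= budget_bricks:
            break
        # Drop roughly one resolution level for every doubling of distance.
        level = min(max_level, int(math.log2(1.0 + math.dist(center, viewer))))
        working_set.append((center, level))
    return working_set

centers = [(float(x), 0.0, 0.0) for x in range(16)]
print(select_working_set(centers, viewer=(0.0, 0.0, 0.0),
                         budget_bricks=8, max_level=3))
```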

  9. Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, R.; Passerini, S.; Vilim, R. B.

In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels by respecting the system physical constraints is presented. In order to achieve a higher operational flexibility along with ensuring that the implemented control loops do not influence the system inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.

  10. Using a 'value-added' approach for contextual design of geographic information.

    PubMed

    May, Andrew J

    2013-11-01

    The aim of this article is to demonstrate how a 'value-added' approach can be used for user-centred design of geographic information. An information science perspective was used, with value being the difference in outcomes arising from alternative information sets. Sixteen drivers navigated a complex, unfamiliar urban route, using visual and verbal instructions representing the distance-to-turn and junction layout information presented by typical satellite navigation systems. Data measuring driving errors, navigation errors and driver confidence were collected throughout the trial. The results show how driver performance varied considerably according to the geographic context at specific locations, and that there are specific opportunities to add value with enhanced geographical information. The conclusions are that a value-added approach facilitates a more explicit focus on 'desired' (and feasible) levels of end user performance with different information sets, and is a potentially effective approach to user-centred design of geographic information. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Optimizing Monitoring Designs under Alternative Objectives

    DOE PAGES

    Gastelum, Jason A.; USA, Richland Washington; Porter, Ellen A.; ...

    2014-12-31

This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
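
    A rough sketch of the search loop follows; the objective, mutation rule, cooling schedule, and synthetic leakage realizations are all assumptions standing in for the paper's two-tiered detection signature.

```python
import math
import random

def expected_detection_time(design, realizations):
    # Placeholder objective: each realization maps monitored locations to the
    # time at which leakage would be detected there; misses get a large penalty.
    return sum(min((r.get(loc, 1e6) for loc in design), default=1e6)
               for r in realizations) / len(realizations)

def anneal(locations, realizations, n_wells=3, steps=2000, temp0=10.0):
    design = set(random.sample(locations, n_wells))
    best = set(design)
    for step in range(steps):
        temp = temp0 * (1.0 - step / steps) + 1e-9
        # Mutate the incumbent design: swap one monitored location for another.
        mutated = set(design)
        mutated.remove(random.choice(sorted(mutated)))
        mutated.add(random.choice([c for c in locations if c not in mutated]))
        delta = (expected_detection_time(mutated, realizations)
                 - expected_detection_time(design, realizations))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            design = mutated
        if (expected_detection_time(design, realizations)
                < expected_detection_time(best, realizations)):
            best = set(design)
    return best

random.seed(0)
locations = list(range(20))
realizations = [{random.randrange(20): random.uniform(1.0, 10.0) for _ in range(5)}
                for _ in range(50)]
print(anneal(locations, realizations))
```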

  12. A cultural setting where the other-race effect on face recognition has no social-motivational component and derives entirely from lifetime perceptual experience.

    PubMed

    Wan, Lulu; Crookes, Kate; Reynolds, Katherine J; Irons, Jessica L; McKone, Elinor

    2015-11-01

Competing approaches to the other-race effect (ORE) see its primary cause as either a lack of motivation to individuate social outgroup members, or a lack of perceptual experience with other-race faces. Here, we argue that the evidence supporting the social-motivational approach derives from a particular cultural setting: a high socio-economic status group (typically US Whites) looking at the faces of a lower status group (US Blacks) with whom observers typically have at least moderate perceptual experience. In contrast, we test motivation-to-individuate instructions across five studies covering an extremely wide range of perceptual experience, in a cultural setting of more equal socio-economic status, namely Asian and Caucasian participants (N = 480) tested on Asian and Caucasian faces. We find no social-motivational component at all to the ORE, specifically: no reduction in the ORE with motivation instructions, including for novel images of the faces, and at all experience levels; no increase in correlation between own- and other-race face recognition, implying no increase in shared processes; and greater (not the predicted less) effort applied to distinguishing other-race faces than own-race faces under normal ("no instructions") conditions. Instead, the ORE was predicted by level of contact with the other race. Our results reject both pure social-motivational theories and also the recent Categorization-Individuation model of Hugenberg, Young, Bernstein, and Sacco (2010). We propose a new dual-route approach to the ORE, in which there are two causes of the ORE, lack of motivation and lack of experience, that contribute differently across varying world locations and cultural settings. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Classification of Complete Proteomes of Different Organisms and Protein Sets Based on Their Protein Distributions in Terms of Some Key Attributes of Proteins

    DOE PAGES

    Guo, Hao-Bo; Ma, Yue; Tuskan, Gerald A.; ...

    2018-01-01

The existence of complete genome sequences makes it important to develop different approaches for classification of large-scale data sets and to make extraction of biological insights easier. Here, we propose an approach for classification of complete proteomes/protein sets based on protein distributions on some basic attributes. We demonstrate the usefulness of this approach by determining protein distributions in terms of two attributes: protein lengths and protein intrinsic disorder contents (ID). The protein distributions based on L and ID are surveyed for representative proteome organisms and protein sets from the three domains of life. The two-dimensional maps (designated as fingerprints here) from the protein distribution densities in the LD space defined by ln(L) and ID are then constructed. The fingerprints for different organisms and protein sets are found to be distinct from each other, and they can therefore be used for comparative studies. As a test case, phylogenetic trees have been constructed based on the protein distribution densities in the fingerprints of proteomes of organisms without performing any protein sequence comparison and alignments. The phylogenetic trees generated are biologically meaningful, demonstrating that the protein distributions in the LD space may serve as unique phylogenetic signals of the organisms at the proteome level.
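
    As an illustrative sketch (not the authors' pipeline), a fingerprint can be built as a 2D density over ln(L) and ID, and two proteomes compared by a simple distance between fingerprints; the synthetic length and disorder values below are placeholders.

```python
import numpy as np

def ld_fingerprint(lengths, disorder, bins=32):
    """2D density over ln(protein length) and intrinsic disorder content."""
    hist, _, _ = np.histogram2d(np.log(lengths), disorder,
                                bins=bins, range=[[3, 9], [0.0, 1.0]])
    return hist / hist.sum()

rng = np.random.default_rng(1)
proteome_a = ld_fingerprint(rng.lognormal(5.8, 0.5, 5000), rng.beta(2, 5, 5000))
proteome_b = ld_fingerprint(rng.lognormal(6.0, 0.6, 5000), rng.beta(2, 4, 5000))
distance = np.abs(proteome_a - proteome_b).sum()   # L1 distance between fingerprints
print(distance)
```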

  14. Classification of Complete Proteomes of Different Organisms and Protein Sets Based on Their Protein Distributions in Terms of Some Key Attributes of Proteins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hao-Bo; Ma, Yue; Tuskan, Gerald A.

The existence of complete genome sequences makes it important to develop different approaches for classification of large-scale data sets and to make extraction of biological insights easier. Here, we propose an approach for classification of complete proteomes/protein sets based on protein distributions on some basic attributes. We demonstrate the usefulness of this approach by determining protein distributions in terms of two attributes: protein lengths and protein intrinsic disorder contents (ID). The protein distributions based on L and ID are surveyed for representative proteome organisms and protein sets from the three domains of life. The two-dimensional maps (designated as fingerprints here) from the protein distribution densities in the LD space defined by ln(L) and ID are then constructed. The fingerprints for different organisms and protein sets are found to be distinct from each other, and they can therefore be used for comparative studies. As a test case, phylogenetic trees have been constructed based on the protein distribution densities in the fingerprints of proteomes of organisms without performing any protein sequence comparison and alignments. The phylogenetic trees generated are biologically meaningful, demonstrating that the protein distributions in the LD space may serve as unique phylogenetic signals of the organisms at the proteome level.

  15. Population-level interventions to reduce alcohol-related harm: an overview of systematic reviews.

    PubMed

    Martineau, Fred; Tyner, Elizabeth; Lorenc, Theo; Petticrew, Mark; Lock, Karen

    2013-10-01

    To analyse available review-level evidence on the effectiveness of population-level interventions in non-clinical settings to reduce alcohol consumption or related health or social harm. Health, social policy and specialist review databases between 2002 and 2012 were searched for systematic reviews of the effectiveness of population-level alcohol interventions on consumption or alcohol-related health or social outcomes. Data were extracted on review research aim, inclusion criteria, outcome indicators, results, conclusions and limitations. Reviews were quality-assessed using AMSTAR criteria. A narrative synthesis was conducted overall and by policy area. Fifty-two reviews were included from ten policy areas. There is good evidence for policies and interventions to limit alcohol sale availability, to reduce drink-driving, to increase alcohol price or taxation. There is mixed evidence for family- and community-level interventions, school-based interventions, and interventions in the alcohol server setting and the mass media. There is weak evidence for workplace interventions and for interventions targeting illicit alcohol sales. There is evidence of the ineffectiveness of interventions in higher education settings. There is a pattern of support from the evidence base for regulatory or statutory enforcement interventions over local non-regulatory approaches targeting specific population groups. © 2013.

  16. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.

  17. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE: Empowering the Attenuation and Distortion Concept by Plomp With a Quantitative Processing Model.

    PubMed

    Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas

    2016-09-07

    To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al with or without a "typical" level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. © The Author(s) 2016.

  18. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  19. Assembling Appliances Standards from a Basket of Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siderious, Hans-Paul; Meier, Alan

    2014-08-11

Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.

  20. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

    A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator selected seed point as compared with multiple operator inputs, which are typically needed. This facilitates processing large numbers of cases. Evaluation on a set of 129 CT lung tumor images using a similarity index (SI) was done. The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm obtaining an average SI of 78.29%, 77.72%, 63.77% and 63.76% respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
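
    A minimal sketch of a Dice-style similarity index follows, assumed here to be the kind of overlap measure behind the reported SI values; the masks are synthetic.

```python
import numpy as np

def similarity_index(mask_a, mask_b):
    """Dice-style overlap between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[12:42, 12:42] = True
print(f"SI = {similarity_index(a, b):.3f}")
```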

  1. Predicting race performance in triathlon: the role of perfectionism, achievement goals, and personal goal setting.

    PubMed

    Stoeber, Joachim; Uphill, Mark A; Hotham, Sarah

    2009-04-01

    The question of how perfectionism affects performance is highly debated. Because empirical studies examining perfectionism and competitive sport performance are missing, the present research investigated how perfectionism affected race performance and what role athletes' goals played in this relationship in two prospective studies with competitive triathletes (Study 1: N = 112; Study 2: N = 321). Regression analyses showed that perfectionistic personal standards, high performance-approach goals, low performance-avoidance goals, and high personal goals predicted race performance beyond athletes' performance level. Moreover, the contrast between performance-avoidance and performance-approach goals mediated the relationship between perfectionistic personal standards and performance, whereas personal goal setting mediated the relationship between performance-approach goals and performance. The findings indicate that perfectionistic personal standards do not undermine competitive performance, but are associated with goals that help athletes achieve their best possible performance.

  2. Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.

    PubMed

    van Lent, Wineke A M; Goedbloed, N; van Harten, W H

    2009-03-01

    To improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency, and causes for differences. Furthermore, an in-depth analysis using a business approach, called lean thinking, was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth of treatments and bed utilisation, a 12% increase of staff member productivity and an 81% reduction of overtime. The used method improved process design and led to increased efficiency and a more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.

  3. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.

  4. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces.

    PubMed Central

    Kushniruk, A. W.; Patel, V. L.; Cimino, J. J.

    1997-01-01

    This paper describes an approach to the evaluation of health care information technologies based on usability engineering and a methodological framework from the study of medical cognition. The approach involves collection of a rich set of data including video recording of health care workers as they interact with systems, such as computerized patient records and decision support tools. The methodology can be applied in the laboratory setting, typically involving subjects "thinking aloud" as they interact with a system. A similar approach to data collection and analysis can also be extended to study of computer systems in the "live" environment of hospital clinics. Our approach is also influenced from work in the area of cognitive task analysis, which aims to characterize the decision making and reasoning of subjects of varied levels of expertise as they interact with information technology in carrying out representative tasks. The stages involved in conducting cognitively-based usability analyses are detailed and the application of such analysis in the iterative process of system and interface development is discussed. PMID:9357620

  5. Defining acceptable levels for ecological indicators: an approach for considering social values.

    PubMed

    Smyth, Robyn L; Watzin, Mary C; Manning, Robert E

    2007-03-01

    Ecological indicators can facilitate an adaptive management approach, but only if acceptable levels for those indicators have been defined so that the data collected can be interpreted. Because acceptable levels are an expression of the desired state of the ecosystem, the process of establishing acceptable levels should incorporate not just ecological understanding but also societal values. The goal of this research was to explore an approach for defining acceptable levels of ecological indicators that explicitly considers social perspectives and values. We used a set of eight indicators that were related to issues of concern in the Lake Champlain Basin. Our approach was based on normative theory. Using a stakeholder survey, we measured respondent normative evaluations of varying levels of our indicators. Aggregated social norm curves were used to determine the level at which indicator values shifted from acceptable to unacceptable conditions. For seven of the eight indicators, clear preferences were interpretable from these norm curves. For example, closures of public beaches because of bacterial contamination and days of intense algae bloom went from acceptable to unacceptable at 7-10 days in a summer season. Survey respondents also indicated that the number of fish caught from Lake Champlain that could be safely consumed each month was unacceptably low and the number of streams draining into the lake that were impaired by storm water was unacceptably high. If indicators that translate ecological conditions into social consequences are carefully selected, we believe the normative approach has considerable merit for defining acceptable levels of valued ecological system components.
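
    Assuming the acceptability threshold is read off where the aggregated norm curve crosses zero, a small interpolation sketch follows; the ratings are invented but chosen to land in the 7-10 day range mentioned above.

```python
import numpy as np

beach_closure_days = np.array([0, 2, 4, 7, 10, 15, 20])
mean_acceptability = np.array([3.1, 2.4, 1.2, 0.3, -0.4, -1.6, -2.5])  # +4..-4 scale

# First interval where the aggregated norm curve changes sign, then interpolate.
i = np.where(np.diff(np.sign(mean_acceptability)) != 0)[0][0]
x0, x1 = beach_closure_days[i], beach_closure_days[i + 1]
y0, y1 = mean_acceptability[i], mean_acceptability[i + 1]
threshold = x0 + (0 - y0) * (x1 - x0) / (y1 - y0)
print(f"acceptability crosses zero near {threshold:.1f} closure days")
```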

  6. Defining Acceptable Levels for Ecological Indicators: An Approach for Considering Social Values

    NASA Astrophysics Data System (ADS)

    Smyth, Robyn L.; Watzin, Mary C.; Manning, Robert E.

    2007-03-01

    Ecological indicators can facilitate an adaptive management approach, but only if acceptable levels for those indicators have been defined so that the data collected can be interpreted. Because acceptable levels are an expression of the desired state of the ecosystem, the process of establishing acceptable levels should incorporate not just ecological understanding but also societal values. The goal of this research was to explore an approach for defining acceptable levels of ecological indicators that explicitly considers social perspectives and values. We used a set of eight indicators that were related to issues of concern in the Lake Champlain Basin. Our approach was based on normative theory. Using a stakeholder survey, we measured respondent normative evaluations of varying levels of our indicators. Aggregated social norm curves were used to determine the level at which indicator values shifted from acceptable to unacceptable conditions. For seven of the eight indicators, clear preferences were interpretable from these norm curves. For example, closures of public beaches because of bacterial contamination and days of intense algae bloom went from acceptable to unacceptable at 7-10 days in a summer season. Survey respondents also indicated that the number of fish caught from Lake Champlain that could be safely consumed each month was unacceptably low and the number of streams draining into the lake that were impaired by storm water was unacceptably high. If indicators that translate ecological conditions into social consequences are carefully selected, we believe the normative approach has considerable merit for defining acceptable levels of valued ecological system components.

  7. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.

  8. Accountable priority setting for trust in health systems--the need for research into a new approach for strengthening sustainable health action in developing countries.

    PubMed

    Byskov, Jens; Bloch, Paul; Blystad, Astrid; Hurtig, Anna-Karin; Fylkesnes, Knut; Kamuzora, Peter; Kombe, Yeri; Kvåle, Gunnar; Marchal, Bruno; Martin, Douglas K; Michelo, Charles; Ndawi, Benedict; Ngulube, Thabale J; Nyamongo, Isaac; Olsen, Oystein E; Onyango-Ouma, Washington; Sandøy, Ingvild F; Shayo, Elizabeth H; Silwamba, Gavin; Songstad, Nils Gunnar; Tuba, Mary

    2009-10-24

Despite multiple efforts to strengthen health systems in low and middle income countries, intended sustainable improvements in health outcomes have not been shown. To date most priority setting initiatives in health systems have mainly focused on technical approaches involving information derived from burden of disease statistics, cost effectiveness analysis, and published clinical trials. However, priority setting involves value-laden choices and these technical approaches do not equip decision-makers to address a broader range of relevant values - such as trust, equity, accountability and fairness - that are of concern to other partners and, not least, the populations concerned. A new focus for priority setting is needed. Accountability for Reasonableness (AFR) is an explicit ethical framework for legitimate and fair priority setting that provides guidance for decision-makers who must identify and consider the full range of relevant values. AFR consists of four conditions: i) relevance to the local setting, decided by agreed criteria; ii) publicizing priority-setting decisions and the reasons behind them; iii) the establishment of revisions/appeal mechanisms for challenging and revising decisions; iv) the provision of leadership to ensure that the first three conditions are met. REACT - "REsponse to ACcountable priority setting for Trust in health systems" is an EU-funded five-year intervention study started in 2006, which is testing the application and effects of the AFR approach in one district each in Kenya, Tanzania and Zambia. The objectives of REACT are to describe and evaluate district-level priority setting, to develop and implement improvement strategies guided by AFR and to measure their effect on quality, equity and trust indicators. Effects are monitored within selected disease and programme interventions and services and within human resources and health systems management. Qualitative and quantitative methods are being applied in an action research framework to examine the potential of AFR to support sustainable improvements to health systems performance. This paper reports on the project design and progress and argues that there is a high need for research into legitimate and fair priority setting to improve the knowledge base for achieving sustainable improvements in health outcomes.

  9. Integrating multiple molecular sources into a clinical risk prediction signature by extracting complementary information.

    PubMed

    Hieke, Stefanie; Benner, Axel; Schlenl, Richard F; Schumacher, Martin; Bullinger, Lars; Binder, Harald

    2016-08-30

    High-throughput technology allows for genome-wide measurements at different molecular levels for the same patient, e.g. single nucleotide polymorphisms (SNPs) and gene expression. Correspondingly, it might be beneficial to also integrate complementary information from different molecular levels when building multivariable risk prediction models for a clinical endpoint, such as treatment response or survival. Unfortunately, such a high-dimensional modeling task will often be complicated by a limited overlap of molecular measurements at different levels between patients, i.e. measurements from all molecular levels are available only for a smaller proportion of patients. We propose a sequential strategy for building clinical risk prediction models that integrate genome-wide measurements from two molecular levels in a complementary way. To deal with partial overlap, we develop an imputation approach that allows us to use all available data. This approach is investigated in two acute myeloid leukemia applications combining gene expression with either SNP or DNA methylation data. After obtaining a sparse risk prediction signature e.g. from SNP data, an automatically selected set of prognostic SNPs, by componentwise likelihood-based boosting, imputation is performed for the corresponding linear predictor by a linking model that incorporates e.g. gene expression measurements. The imputed linear predictor is then used for adjustment when building a prognostic signature from the gene expression data. For evaluation, we consider stability, as quantified by inclusion frequencies across resampling data sets. Despite an extremely small overlap in the application example with gene expression and SNPs, several genes are seen to be more stably identified when taking the (imputed) linear predictor from the SNP data into account. In the application with gene expression and DNA methylation, prediction performance with respect to survival also indicates that the proposed approach might work well. We consider imputation of linear predictor values to be a feasible and sensible approach for dealing with partial overlap in complementary integrative analysis of molecular measurements at different levels. More generally, these results indicate that a complementary strategy for integrating different molecular levels can result in more stable risk prediction signatures, potentially providing a more reliable insight into the underlying biology.

  10. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models by lower order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots' - generated with the high-fidelity model - to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the main assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insight into the main process dynamics.
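
    A minimal sketch of the reduction step using scikit-learn's SparsePCA follows (a stand-in, not the authors' implementation); the snapshot matrix is synthetic.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 50))     # rows: snapshots, columns: states/inputs

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
scores = spca.fit_transform(snapshots)

# Sparse loadings: only a few non-zero coefficients per basis function, which is
# what makes each component attributable to a subset of physical variables.
nonzero_per_component = (spca.components_ != 0).sum(axis=1)
print(nonzero_per_component)
```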

  11. User-Driven Quality Certification of Workplace Software, the UsersAward Experience

    DTIC Science & Technology

    2004-06-01

the set of criteria and the chosen level of approval was sufficiently balanced. Furthermore, the fact that both software providers experienced... Worklife - Building Social Capacity - European Approaches, Edition sigma Berlin. Lind, T. (2002). IT-kartan, användare och IT-system i svenskt

  12. Advances in Learning Processes

    ERIC Educational Resources Information Center

    Rosson, Mary Beth, Ed.

    2010-01-01

    Readers will find several papers that address high-level issues in the use of technology in education, for example architecture and design frameworks for building online education materials or tools. Several other chapters report novel approaches to intelligent tutors or adaptive systems in educational settings. A number of chapters consider many…

  13. Using Multilevel Modeling in Language Assessment Research: A Conceptual Introduction

    ERIC Educational Resources Information Center

    Barkaoui, Khaled

    2013-01-01

    This article critiques traditional single-level statistical approaches (e.g., multiple regression analysis) to examining relationships between language test scores and variables in the assessment setting. It highlights the conceptual, methodological, and statistical problems associated with these techniques in dealing with multilevel or nested…

  14. Costing for Policy Analysis.

    ERIC Educational Resources Information Center

    National Association of College and University Business Officers, Washington, DC.

    Cost behavior analysis, a costing process that can assist managers in estimating how certain institutional costs change in response to volume, policy, and environmental factors, is described. The five steps of this approach are examined, and the application of cost behavior analysis at four college-level settings is documented. The institutions…

  15. Exploring Transmedia: The Rip-Mix-Learn Classroom

    ERIC Educational Resources Information Center

    Benedict, Lucille A.; Champlin, David T.; Pence, Harry E.

    2013-01-01

    Google Docs was used to create the rip-mix-learn (RML) classroom in two, first-year undergraduate introductory chemistry and biology courses, a second-semester introductory chemistry course, and an upper-level developmental biology course. This "transmedia" approach assigned students to create sets of collaborative lecture notes into…

  16. Thresholding functional connectomes by means of mixture modeling.

    PubMed

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose to use alternative thresholding strategy based on the model fit using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in the visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
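
    A sketch under assumptions (not the authors' exact model): fit a two-component Gaussian mixture to the vectorised edge weights and retain only edges assigned to the higher-mean ("reliable") component. The synthetic edge weights stand in for a subject's partial correlations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic edge weights: many near-zero "null" edges plus a sparse strong set.
edges = np.concatenate([rng.normal(0.0, 0.05, 900), rng.normal(0.4, 0.05, 100)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(edges.reshape(-1, 1))
reliable_component = int(np.argmax(gmm.means_.ravel()))
keep = gmm.predict(edges.reshape(-1, 1)) == reliable_component
print(f"retained {keep.sum()} of {edges.size} edges")
```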

  17. The relational neurobehavioral approach: can a non-aversive program manage adults with brain injury-related aggression without seclusion/restraint?

    PubMed

    Kalapatapu, Raj K; Giles, Gordon M

    2017-11-01

    The Relational Neurobehavioral Approach (RNA) is a set of non-aversive intervention methods to manage individuals with brain injury-related aggression. New data on interventions used in the RNA and on how the RNA interventions can be used with patients with acquired brain injury (ABI) who have differing levels of functional impairment are provided in this paper. The study was conducted over a 6-week period in a secure 65-bed program for individuals with ABI that is housed in two units of a skilled nursing facility (SNF). Implementation of the RNA was compared between two units that housed patients with differing levels of functional impairment (n = 65 adults). Since this was a hierarchical clustered dataset, Generalized Estimating Equations regression was used in the analyses. RNA interventions used to manage the 495 aggressive incidents included the following: Aggression ignored, Closer observation, Talking to patient, Reassurance, Physical distraction, Isolation without seclusion, Immediate medication by mouth, Holding patient. Different interventions were implemented differentially by staff based on level of functional impairment and without use of seclusion or mechanical restraint. The RNA can be used to non-aversively manage aggression in patients with brain injury and with differing levels of functional impairment. Programs adopting the RNA can potentially manage brain injury-related aggression without seclusion or mechanical restraint. Implications for Rehabilitation The Relational Neurobehavioral Approach (RNA) is a set of non-aversive intervention methods to manage individuals with brain injury-related aggression. RNA methods can be used to manage aggression in patients with brain injury who have differing levels of functional impairment. Successful implementation of the RNA may allow for the management of brain injury-related aggression without seclusion or mechanical restraint.

  18. Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines

    PubMed Central

    Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram

    2014-01-01

    When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
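
    The prototype idea can be sketched in a few lines, with the caveat that this is not the paper's Prototype Vector Machine: k-means centers stand in for the prototypes, the prediction function is expanded over an RBF basis anchored at those centers, and the graph-based manifold regularizer is replaced by a much cruder norm penalty. All parameter names and values are illustrative.

        # Minimal sketch (my own simplification, not the paper's PVM): expand the prediction
        # function over a small set of k-means prototypes instead of over all n data points.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics.pairwise import rbf_kernel

        def prototype_ssl(X, y, n_prototypes=20, gamma=1.0, lam=1e-2, mu=1e-2):
            # y: +1/-1 for labeled points, 0 for unlabeled points
            km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(X)
            P = km.cluster_centers_                      # prototypes: m << n data representatives
            K = rbf_kernel(X, P, gamma=gamma)            # n x m basis expansion over prototypes
            labeled = np.flatnonzero(y != 0)
            # squared loss on labeled points + ridge penalty (lam) + a crude smoothness
            # surrogate (mu) replacing the true graph-based manifold regularizer
            A = K[labeled].T @ K[labeled] + lam * np.eye(n_prototypes) + mu * (K.T @ K) / len(X)
            alpha = np.linalg.solve(A, K[labeled].T @ y[labeled])
            return np.sign(K @ alpha)                    # predicted labels for all points

        # toy usage: two Gaussian blobs with only two labeled points
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(+2, 0.5, (50, 2))])
        y = np.zeros(100); y[0], y[50] = -1, +1
        print(int((prototype_ssl(X, y)[:50] == -1).sum()), "of 50 left-blob points labeled -1")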

  19. Prediction of vein connectivity using the percolation approach: model test with field data

    NASA Astrophysics Data System (ADS)

    Belayneh, M.; Masihi, M.; Matthäi, S. K.; King, P. R.

    2006-09-01

    Evaluating the uncertainty in fracture connectivity and its effect on the flow behaviour of natural fracture networks formed under in situ conditions is an extremely difficult task. One widely used probabilistic approach is to use percolation theory, which is well adapted to estimate the connectivity and conductivity of geometrical objects near the percolation threshold. In this paper, we apply scaling laws from percolation theory to predict the connectivity of vein sets exposed on the southern margin of the Bristol Channel Basin. Two vein sets in a limestone bed interbedded with shales on the limb of a rollover fold were analysed for length, spacing and aperture distributions. Eight scan lines, low-level aerial photographs and mosaics of photographs taken with a tripod were used. The analysed veins formed contemporaneously with the rollover fold during basin subsidence on the hanging wall of a listric normal fault. The first vein set, V1, is fold axis-parallel (i.e. striking ~100°) and normal to bedding. The second vein set, V2, strikes 140° and crosscuts V1. We find a close agreement in connectivity between our predictions using the percolation approach and the field data. The implication is that reasonable predictions of vein connectivity can be made from sparse data obtained from boreholes or (limited) sporadic outcrop.
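
    A hedged Monte Carlo sketch of the underlying percolation question is given below: randomly oriented line segments stand in for veins, intersecting segments are merged with a union-find structure, and the fraction of segments in the largest cluster is tracked as density increases. This toy model does not reproduce the paper's scaling-law predictions or field measurements; the segment length and counts are arbitrary.

        # Illustrative continuum-percolation sketch (not the authors' method): how the largest
        # connected cluster of randomly placed "veins" grows with vein density.
        import numpy as np

        def segments_intersect(p1, p2, p3, p4):
            # strict intersection test based on orientations (touching/collinear cases ignored)
            d = lambda a, b, c: (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
            return (d(p1, p2, p3) * d(p1, p2, p4) < 0) and (d(p3, p4, p1) * d(p3, p4, p2) < 0)

        def largest_cluster_fraction(n_veins, length=0.15, seed=0):
            rng = np.random.default_rng(seed)
            mid = rng.random((n_veins, 2))                     # vein midpoints in a unit square
            ang = rng.uniform(0.0, np.pi, n_veins)             # random orientations
            d = 0.5 * length * np.c_[np.cos(ang), np.sin(ang)]
            a, b = mid - d, mid + d                            # segment end points
            parent = list(range(n_veins))                      # union-find over intersecting veins
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            for i in range(n_veins):
                for j in range(i + 1, n_veins):
                    if segments_intersect(a[i], b[i], a[j], b[j]):
                        parent[find(i)] = find(j)
            sizes = np.bincount([find(i) for i in range(n_veins)])
            return sizes.max() / n_veins

        for n in (50, 150, 300, 600):                          # connectivity grows sharply with density
            print(n, "veins ->", round(largest_cluster_fraction(n), 2))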

  20. Integration at the round table: marine spatial planning in multi-stakeholder settings.

    PubMed

    Olsen, Erik; Fluharty, David; Hoel, Alf Håkon; Hostens, Kristian; Maes, Frank; Pecceu, Ellen

    2014-01-01

    Marine spatial planning (MSP) is often considered as a pragmatic approach to implement an ecosystem based management in order to manage marine space in a sustainable way. This requires the involvement of multiple actors and stakeholders at various governmental and societal levels. Several factors affect how well the integrated management of marine waters will be achieved, such as different governance settings (division of power between central and local governments), economic activities (and related priorities), external drivers, spatial scales, incentives and objectives, varying approaches to legislation and political will. We compared MSP in Belgium, Norway and the US to illustrate how the integration of stakeholders and governmental levels differs among these countries along the factors mentioned above. Horizontal integration (between sectors) is successful in all three countries, achieved through the use of neutral 'round-table' meeting places for all actors. Vertical integration between government levels varies, with Belgium and Norway having achieved full integration while the US lacks integration of the legislature due to sharp disagreements among stakeholders and unsuccessful partisan leadership. Success factors include political will and leadership, process transparency and stakeholder participation, and should be considered in all MSP development processes.

  1. Integration at the Round Table: Marine Spatial Planning in Multi-Stakeholder Settings

    PubMed Central

    Olsen, Erik; Fluharty, David; Hoel, Alf Håkon; Hostens, Kristian; Maes, Frank; Pecceu, Ellen

    2014-01-01

    Marine spatial planning (MSP) is often considered as a pragmatic approach to implement an ecosystem based management in order to manage marine space in a sustainable way. This requires the involvement of multiple actors and stakeholders at various governmental and societal levels. Several factors affect how well the integrated management of marine waters will be achieved, such as different governance settings (division of power between central and local governments), economic activities (and related priorities), external drivers, spatial scales, incentives and objectives, varying approaches to legislation and political will. We compared MSP in Belgium, Norway and the US to illustrate how the integration of stakeholders and governmental levels differs among these countries along the factors mentioned above. Horizontal integration (between sectors) is successful in all three countries, achieved through the use of neutral ‘round-table’ meeting places for all actors. Vertical integration between government levels varies, with Belgium and Norway having achieved full integration while the US lacks integration of the legislature due to sharp disagreements among stakeholders and unsuccessful partisan leadership. Success factors include political will and leadership, process transparency and stakeholder participation, and should be considered in all MSP development processes. PMID:25299595

  2. Conflict resolution styles in the nursing profession.

    PubMed

    Losa Iglesias, Marta Elena; Becerro de Bengoa Vallejo, Ricardo

    2012-12-01

    Managers, including those in nursing environments, may spend much of their time addressing employee conflicts. If not handled properly, conflict may significantly affect employee morale, increase turnover, and even result in litigation, ultimately affecting the overall well-being of the organization. A clearer understanding of the factors that underlie conflict resolution styles could lead to the promotion of better management strategies. The aim of this research was to identify the predominant conflict resolution styles used by a sample of Spanish nurses in two work settings, academic and clinical, in order to determine differences between these environments. The effects of employment level and demographic variables were explored as well. Descriptive cross-sectional survey study. Our sample consisted of professional nurses in Madrid, Spain, who worked in either a university setting or a clinical care setting. Within each of these environments, nurses worked at one of three levels: full professor, assistant professor, or scholarship professor in the academic setting; and nursing supervisor, registered staff nurse, or nursing assistant in the clinical setting. Conflict resolution style was examined using the standardized Thomas-Kilmann Conflict Mode Instrument, a dual-choice questionnaire that assesses a respondent's predominant style of conflict resolution. Five styles are defined: accommodating, avoiding, collaborating, competing, and compromising. Participants were asked to give answers that characterized their dominant response in a conflict situation involving either a superior or a subordinate. Descriptive and inferential statistics were used to examine the relationship between workplace setting and conflict resolution style. The most common style used by nurses overall to resolve workplace conflict was compromising, followed by competing, avoiding, accommodating, and collaborating. There was a significant overall difference in styles between nurses who worked in an academic vs. a clinical setting (p = 0.005), with the greatest difference seen for the accommodating style. Of those nurses for whom accommodation was the primary style, 83% worked in a clinical setting compared to just 17% in an academic setting. Further examination of the difference in conflict-solving approaches between academic and clinical nursing environments might shed light on etiologic factors, which in turn might enable nursing management to institute conflict management interventions that are tailored to specific work environments and adapted to different employment levels. This research increases our understanding of preferred approaches to handling conflict in nursing organizations.

  3. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era.

    PubMed

    Albalawi, Yousef; Sixsmith, Jane

    2015-01-01

    The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public's health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptions. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence of organizations on agenda setting is also highlighted (in the data of user interactions, organizational accounts registered 17% and 14.74% as source and target of interactions, respectively). In addition, 13 striking similarities showed the relationship between newspapers and Twitter on the mentions trends line. The effective use of social media platforms in health promotion intervention programs requires new strategies that consider the limitations of traditional communication channels. Conducting research is vital to establishing a strong basis for modifying, designing, and developing new health promotion strategies and approaches.

  4. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    PubMed Central

    2015-01-01

    Background: The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public’s health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective: The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods: To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results: The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence of organizations on agenda setting is also highlighted (in the data of user interactions, organizational accounts registered 17% and 14.74% as source and target of interactions, respectively). In addition, 13 striking similarities showed the relationship between newspapers and Twitter on the mentions trends line. Conclusions: The effective use of social media platforms in health promotion intervention programs requires new strategies that consider the limitations of traditional communication channels. Conducting research is vital to establishing a strong basis for modifying, designing, and developing new health promotion strategies and approaches. PMID:27227139

  5. Data as textbook.

    PubMed

    Pollack, C D; Diers, D

    1996-01-01

    Hospital information systems have been collecting patient-related data systematically for years. This article describes a course developed at the Yale University School of Nursing that uses hospital data as "textbook" to teach graduate nursing students to navigate a large hospital data set, enabling a multitude of nursing questions to be addressed. The approach used in this course is easily transferrable to the practice setting as demonstrated by the authors. Through understanding patient-level data, their aggregate patterns, and overall database construction, nurses can expand their contributions to clinical practice and management.

  6. Assessing coastal wetland vulnerability to sea-level rise along the northern Gulf of Mexico coast: Gaps and opportunities for developing a coordinated regional sampling network

    PubMed Central

    Griffith, Kereen T.; Larriviere, Jack C.; Feher, Laura C.; Cahoon, Donald R.; Enwright, Nicholas M.; Oster, David A.; Tirpak, John M.; Woodrey, Mark S.; Collini, Renee C.; Baustian, Joseph J.; Breithaupt, Joshua L.; Cherry, Julia A.; Conrad, Jeremy R.; Cormier, Nicole; Coronado-Molina, Carlos A.; Donoghue, Joseph F.; Graham, Sean A.; Harper, Jennifer W.; Hester, Mark W.; Howard, Rebecca J.; Krauss, Ken W.; Kroes, Daniel E.; Lane, Robert R.; McKee, Karen L.; Mendelssohn, Irving A.; Middleton, Beth A.; Moon, Jena A.; Piazza, Sarai C.; Rankin, Nicole M.; Sklar, Fred H.; Steyer, Greg D.; Swanson, Kathleen M.; Swarzenski, Christopher M.; Vervaeke, William C.; Willis, Jonathan M.; Wilson, K. Van

    2017-01-01

    Coastal wetland responses to sea-level rise are greatly influenced by biogeomorphic processes that affect wetland surface elevation. Small changes in elevation relative to sea level can lead to comparatively large changes in ecosystem structure, function, and stability. The surface elevation table-marker horizon (SET-MH) approach is being used globally to quantify the relative contributions of processes affecting wetland elevation change. Historically, SET-MH measurements have been obtained at local scales to address site-specific research questions. However, in the face of accelerated sea-level rise, there is an increasing need for elevation change network data that can be incorporated into regional ecological models and vulnerability assessments. In particular, there is a need for long-term, high-temporal resolution data that are strategically distributed across ecologically-relevant abiotic gradients. Here, we quantify the distribution of SET-MH stations along the northern Gulf of Mexico coast (USA) across political boundaries (states), wetland habitats, and ecologically-relevant abiotic gradients (i.e., gradients in temperature, precipitation, elevation, and relative sea-level rise). Our analyses identify areas with high SET-MH station densities as well as areas with notable gaps. Salt marshes, intermediate elevations, and colder areas with high rainfall have a high number of stations, while salt flat ecosystems, certain elevation zones, the mangrove-marsh ecotone, and hypersaline coastal areas with low rainfall have fewer stations. Due to rapid rates of wetland loss and relative sea-level rise, the state of Louisiana has the most extensive SET-MH station network in the region, and we provide several recent examples where data from Louisiana’s network have been used to assess and compare wetland vulnerability to sea-level rise. Our findings represent the first attempt to examine spatial gaps in SET-MH coverage across abiotic gradients. Our analyses can be used to transform a broadly disseminated and unplanned collection of SET-MH stations into a coordinated and strategic regional network. This regional network would provide data for predicting and preparing for the responses of coastal wetlands to accelerated sea-level rise and other aspects of global change. PMID:28902904

  7. Assessing coastal wetland vulnerability to sea-level rise along the northern Gulf of Mexico coast: Gaps and opportunities for developing a coordinated regional sampling network

    USGS Publications Warehouse

    Osland, Michael J.; Griffith, Kereen T.; Larriviere, Jack C.; Feher, Laura C.; Cahoon, Donald R.; Enwright, Nicholas M.; Oster, David A.; Tirpak, John M.; Woodrey, Mark S.; Collini, Renee C.; Baustian, Joseph J.; Breithaupt, Joshua L.; Cherry, Julia A; Conrad, Jeremy R.; Cormier, Nicole; Coronado-Molina, Carlos A.; Donoghue, Joseph F.; Graham, Sean A.; Harper, Jennifer W.; Hester, Mark W.; Howard, Rebecca J.; Krauss, Ken W.; Kroes, Daniel; Lane, Robert R.; Mckee, Karen L.; Mendelssohn, Irving A.; Middleton, Beth A.; Moon, Jena A.; Piazza, Sarai; Rankin, Nicole M.; Sklar, Fred H.; Steyer, Gregory D.; Swanson, Kathleen M.; Swarzenski, Christopher M.; Vervaeke, William; Willis, Jonathan M; Van Wilson, K.

    2017-01-01

    Coastal wetland responses to sea-level rise are greatly influenced by biogeomorphic processes that affect wetland surface elevation. Small changes in elevation relative to sea level can lead to comparatively large changes in ecosystem structure, function, and stability. The surface elevation table-marker horizon (SET-MH) approach is being used globally to quantify the relative contributions of processes affecting wetland elevation change. Historically, SET-MH measurements have been obtained at local scales to address site-specific research questions. However, in the face of accelerated sea-level rise, there is an increasing need for elevation change network data that can be incorporated into regional ecological models and vulnerability assessments. In particular, there is a need for long-term, high-temporal resolution data that are strategically distributed across ecologically-relevant abiotic gradients. Here, we quantify the distribution of SET-MH stations along the northern Gulf of Mexico coast (USA) across political boundaries (states), wetland habitats, and ecologically-relevant abiotic gradients (i.e., gradients in temperature, precipitation, elevation, and relative sea-level rise). Our analyses identify areas with high SET-MH station densities as well as areas with notable gaps. Salt marshes, intermediate elevations, and colder areas with high rainfall have a high number of stations, while salt flat ecosystems, certain elevation zones, the mangrove-marsh ecotone, and hypersaline coastal areas with low rainfall have fewer stations. Due to rapid rates of wetland loss and relative sea-level rise, the state of Louisiana has the most extensive SET-MH station network in the region, and we provide several recent examples where data from Louisiana’s network have been used to assess and compare wetland vulnerability to sea-level rise. Our findings represent the first attempt to examine spatial gaps in SET-MH coverage across abiotic gradients. Our analyses can be used to transform a broadly disseminated and unplanned collection of SET-MH stations into a coordinated and strategic regional network. This regional network would provide data for predicting and preparing for the responses of coastal wetlands to accelerated sea-level rise and other aspects of global change.

  8. Assessing coastal wetland vulnerability to sea-level rise along the northern Gulf of Mexico coast: Gaps and opportunities for developing a coordinated regional sampling network.

    PubMed

    Osland, Michael J; Griffith, Kereen T; Larriviere, Jack C; Feher, Laura C; Cahoon, Donald R; Enwright, Nicholas M; Oster, David A; Tirpak, John M; Woodrey, Mark S; Collini, Renee C; Baustian, Joseph J; Breithaupt, Joshua L; Cherry, Julia A; Conrad, Jeremy R; Cormier, Nicole; Coronado-Molina, Carlos A; Donoghue, Joseph F; Graham, Sean A; Harper, Jennifer W; Hester, Mark W; Howard, Rebecca J; Krauss, Ken W; Kroes, Daniel E; Lane, Robert R; McKee, Karen L; Mendelssohn, Irving A; Middleton, Beth A; Moon, Jena A; Piazza, Sarai C; Rankin, Nicole M; Sklar, Fred H; Steyer, Greg D; Swanson, Kathleen M; Swarzenski, Christopher M; Vervaeke, William C; Willis, Jonathan M; Wilson, K Van

    2017-01-01

    Coastal wetland responses to sea-level rise are greatly influenced by biogeomorphic processes that affect wetland surface elevation. Small changes in elevation relative to sea level can lead to comparatively large changes in ecosystem structure, function, and stability. The surface elevation table-marker horizon (SET-MH) approach is being used globally to quantify the relative contributions of processes affecting wetland elevation change. Historically, SET-MH measurements have been obtained at local scales to address site-specific research questions. However, in the face of accelerated sea-level rise, there is an increasing need for elevation change network data that can be incorporated into regional ecological models and vulnerability assessments. In particular, there is a need for long-term, high-temporal resolution data that are strategically distributed across ecologically-relevant abiotic gradients. Here, we quantify the distribution of SET-MH stations along the northern Gulf of Mexico coast (USA) across political boundaries (states), wetland habitats, and ecologically-relevant abiotic gradients (i.e., gradients in temperature, precipitation, elevation, and relative sea-level rise). Our analyses identify areas with high SET-MH station densities as well as areas with notable gaps. Salt marshes, intermediate elevations, and colder areas with high rainfall have a high number of stations, while salt flat ecosystems, certain elevation zones, the mangrove-marsh ecotone, and hypersaline coastal areas with low rainfall have fewer stations. Due to rapid rates of wetland loss and relative sea-level rise, the state of Louisiana has the most extensive SET-MH station network in the region, and we provide several recent examples where data from Louisiana's network have been used to assess and compare wetland vulnerability to sea-level rise. Our findings represent the first attempt to examine spatial gaps in SET-MH coverage across abiotic gradients. Our analyses can be used to transform a broadly disseminated and unplanned collection of SET-MH stations into a coordinated and strategic regional network. This regional network would provide data for predicting and preparing for the responses of coastal wetlands to accelerated sea-level rise and other aspects of global change.

  9. Setting stroke research priorities: The consumer perspective.

    PubMed

    Sangvatanakul, Pukkaporn; Hillege, Sharon; Lalor, Erin; Levi, Christopher; Hill, Kelvin; Middleton, Sandy

    2010-12-01

    To test a method of engaging consumers in research priority-setting using a quantitative approach and to determine consumer views on stroke research priorities for clinical practice recommendations with lower levels of evidence (Level III and Level IV) and expert consensus opinion as published in the Australian stroke clinical practice guidelines. Design: Survey. Setting: Urban community. Participants: Eighteen stroke survivors (n = 12) and carers (n = 6) who were members of the "Working Aged Group - Stroke" (WAGS) consumer support group. Phase I: Participants were asked whether recommendations were "worth" researching ("yes" or "no"); and, if researched, what potential impact they likely would have on patient outcomes. Phase II: Participants were asked to rank recommendations rated by more than 75% of participants in Phase I as "worth" researching and "highly likely" or "likely" to generate research with a significant effect on patient outcomes (n = 13) in order of priority for future stroke research. All recommendations were rated by at least half (n = 9, 50%) of participants as "worth" researching. The majority (67% to 100%) rated all recommendations as "highly likely" or "likely" that research would have a significant effect on patient outcomes. Thirteen out of 20 recommendations were ranked for their research priorities. Recommendations under the topic heading Getting to hospital were ranked highest and Organization of care and Living with stroke were ranked as a lower priority for research. This study provided an example of how to involve consumers in research priority setting successfully using a quantitative approach. Stroke research priorities from the consumer perspective were different from those of health professionals, as published in the literature; thus, consumer opinion should be considered when setting research priorities. Copyright © 2010 Society for Vascular Nursing, Inc. Published by Mosby, Inc. All rights reserved.

  10. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    PubMed

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness; so as to identify lessons of good practices. We carried out case studies involving key informant interviews, with 184 health practitioners and health planners from the macro-level, meso-level and micro-level from Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness", relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practices; areas of non-adherence were identified as opportunities for improvement. (i) At the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation: Relevance: medical evidence and economic criteria were thought to be relevant, but lobbying was thought to be irrelevant. Publicity: all cases lacked clear and effective mechanisms for publicity. Revisions: formal mechanisms, following the planning hierarchy, were considered less effective, while informal political mechanisms were considered more effective. Canada and Norway had patients' relations officers to deal with patients' dissensions; however, revisions were more difficult in Uganda. Enforcement: leadership for ensuring decision-making fairness was not apparent. The different levels of priority setting in the three countries fulfilled varying conditions of accountability for reasonableness; none satisfied all four conditions. To improve, decision makers at the three levels in all three cases should engage frontline practitioners, develop more effectively publicized reasons, and develop formal mechanisms for challenging and revising decisions.

  11. Implementing accountability for reasonableness framework at district level in Tanzania: a realist evaluation.

    PubMed

    Maluka, Stephen; Kamuzora, Peter; Sansebastián, Miguel; Byskov, Jens; Ndawi, Benedict; Olsen, Øystein E; Hurtig, Anna-Karin

    2011-02-10

    Despite the growing importance of the Accountability for Reasonableness (A4R) framework in priority setting worldwide, there is still an inadequate understanding of the processes and mechanisms underlying its influence on legitimacy and fairness, as conceived and reflected in service management processes and outcomes. As a result, the ability to draw scientifically sound lessons for the application of the framework to services and interventions is limited. This paper evaluates the experiences of implementing the A4R approach in Mbarali District, Tanzania, in order to find out how the innovation was shaped, enabled, and constrained by the interaction between contexts, mechanisms and outcomes. This study draws on the principles of realist evaluation -- a largely qualitative approach, chiefly concerned with testing and refining programme theories by exploring the complex interactions of contexts, mechanisms, and outcomes. Mixed methods were used in data collection, including individual interviews, non-participant observation, and document reviews. A thematic framework approach was adopted for the data analysis. The study found that while the A4R approach to priority setting was helpful in strengthening transparency, accountability, stakeholder engagement, and fairness, the efforts at integrating it into the current district health system were challenging. Participatory structures under the decentralisation framework, central government's call for partnership in district-level planning and priority setting, perceived needs of stakeholders, as well as active engagement between researchers and decision makers all facilitated the adoption and implementation of the innovation. In contrast, however, limited local autonomy, low level of public awareness, unreliable and untimely funding, inadequate accountability mechanisms, and limited local resources were the major contextual factors that hampered the full implementation. This study documents an important first step in the effort to introduce the ethical framework A4R into district planning processes. This study supports the idea that a greater involvement and accountability among local actors through the A4R process may increase the legitimacy and fairness of priority-setting decisions. Support from researchers in providing a broader and more detailed analysis of health system elements, and the socio-cultural context, could lead to better prediction of the effects of the innovation and pinpoint stakeholders' concerns, thereby illuminating areas that require special attention to promote sustainability.

  12. Integration of low level and ontology derived features for automatic weapon recognition and identification

    NASA Astrophysics Data System (ADS)

    Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore

    2011-06-01

    This paper presents a further step of research toward the development of a quick and accurate weapons identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source of deriving high-level weapons information. The present paper outlines the main ideas used to approach the goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low-level feature vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is the information about the object's parts. To partition an object, a new approach is presented that defines the object's concavities, which are used to mark the end points of weapon parts, considered as convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters will be considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and investigate its rotational and scaling invariance. A set of experiments is performed to validate the theoretical concepts.
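
    As a rough illustration of how such concavities can be located (this is only one possible realization, not necessarily the authors' algorithm), the sketch below takes a binary silhouette, extracts its largest contour with OpenCV, and reports the deepest convexity defects as candidate part-boundary points. The test mask, depth threshold, and function name are invented for the example.

        # One possible way to locate concavities of a segmented object: report the deepest
        # convexity defects of its largest contour as candidate part-boundary points.
        import cv2
        import numpy as np

        def find_concavities(binary_mask, min_depth_px=10):
            contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            cnt = max(contours, key=cv2.contourArea)
            hull = cv2.convexHull(cnt, returnPoints=False)
            defects = cv2.convexityDefects(cnt, hull)
            points = []
            if defects is not None:
                for start, end, far, depth in defects[:, 0]:
                    if depth / 256.0 >= min_depth_px:      # defect depth is stored in 1/256 pixel units
                        points.append(tuple(cnt[far][0]))  # deepest point of the concavity
            return points

        mask = np.zeros((200, 300), np.uint8)
        cv2.rectangle(mask, (20, 80), (280, 120), 255, -1)   # horizontal "barrel"
        cv2.rectangle(mask, (120, 40), (180, 160), 255, -1)  # crossing "grip" creates four concavities
        print(find_concavities(mask))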

  13. A Multi-Faceted Approach to Successful Transition for Students with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Dubberly, Russell G.

    2011-01-01

    This report summarizes the multi-faceted, dynamic instructional model implemented to increase positive transition outcomes for high school students with intellectual disabilities. This report is based on the programmatic methods implemented within a secondary-level school in an urban setting. This pedagogical model facilitates the use of…

  14. Cultural Context and the New Communication Principles for Intercultural Communication.

    ERIC Educational Resources Information Center

    Waner, Karen K.; Winter, Janet K.

    This paper discusses the problems surrounding intercultural business communication as well as approaches to teaching college level business students appropriate communication skills for intercultural settings. Considered are the classification of countries by cultural context, i.e. how large a role culture, social values, and traditional social…

  15. 7 Steps to Better Reading--A Districtwide Approach

    ERIC Educational Resources Information Center

    Scroggins, John; Powers, Linda

    2004-01-01

    The Ponca City (Okla.) Public Schools set a higher literacy level as a goal, then developed a comprehensive plan to achieve that goal. A district focus, commitment, professional development for administrators and teachers, assessments to drive instruction, on-site coaching, monitoring, and careful record keeping all contributed to systemic reform.

  16. National Assessment Technical Quality.

    ERIC Educational Resources Information Center

    Chelimsky, Eleanor

    In 1991 the National Assessment Governing Board (NAGB) released a report interpreting the achievement of U.S. students in mathematics on the 1990 National Assessment of Educational Progress in terms of a set of performance standards. The NAGB had been designing and implementing an approach to defining basic, proficient, and advanced levels of…

  17. I Can Learn Japanese.

    ERIC Educational Resources Information Center

    Rubin, Michael; Funato, Makiko

    This set of materials for Japanese second language instruction was designed for students who can be taught most effectively through a functional, conversational approach. It is intended as a supplement to the regular course of study so that all students, regardless of ability level, can be provided with an effective instructional program. It…

  18. In the Valley of the Giants: Cultivating Intentionality and Integration

    ERIC Educational Resources Information Center

    Carey, Miriam

    2012-01-01

    This study examines the cultivation of intentionality and integration in a foundation level General Education class: Communities and Societies. Three research tasks were set within the context of a grounded theoretical approach: codification of indicators for both intentionality and integration, and an examination of student learning logs.…

  19. Teaching Deterministic Chaos through Music.

    ERIC Educational Resources Information Center

    Chacon, R.; And Others

    1992-01-01

    Presents music education as a setting for teaching nonlinear dynamics and chaotic behavior connected with fixed-point and limit-cycle attractors. The aim is not music composition but a first approach to an interdisciplinary tool suitable for a single-session class, at either the secondary or undergraduate level, for the introduction of these…

  20. Sustainability and Action Research in Universities: Towards Knowledge for Organisational Transformation

    ERIC Educational Resources Information Center

    Wooltorton, Sandra; Wilkinson, Anne; Horwitz, Pierre; Bahn, Sue; Redmond, Janice; Dooley, Julian

    2015-01-01

    Purpose: Academic approaches to the challenge of enhancing sustainability in research in university contexts illustrate that universities are affected by the very same values and socio-ecological issues they set out to address, making transformation difficult at every level. A theoretical and practical framework designed to facilitate cultural…

  1. Determination of Teacher Characteristics That Support Constructivist Learning Environments

    ERIC Educational Resources Information Center

    Aydogdu, Bulent; Selanik-Ay, Tugba

    2016-01-01

    Problem Statement: Exploring the variables that affect teachers' teaching approaches in learning environments is crucial to determining their response to new trends. Their teaching and learning characteristics set the success level of the new reforms. In addition, monitoring the usage of constructivist pedagogies and giving feedback about them are…

  2. Combining multiple tools outperforms individual methods in gene set enrichment analyses.

    PubMed

    Alhamdoosh, Monther; Ng, Milica; Wilson, Nicholas J; Sheridan, Julie M; Huynh, Huy; Wilson, Michael J; Ritchie, Matthew E

    2017-02-01

    Gene set enrichment (GSE) analysis allows researchers to efficiently extract biological insight from long lists of differentially expressed genes by interrogating them at a systems level. In recent years, there has been a proliferation of GSE analysis methods and hence it has become increasingly difficult for researchers to select an optimal GSE tool based on their particular dataset. Moreover, the majority of GSE analysis methods do not allow researchers to simultaneously compare gene set level results between multiple experimental conditions. The ensemble of gene set enrichment analyses (EGSEA) is a method developed for RNA-sequencing data that combines results from twelve algorithms and calculates collective gene set scores to improve the biological relevance of the highest ranked gene sets. EGSEA's gene set database contains around 25 000 gene sets from sixteen collections. It has multiple visualization capabilities that allow researchers to view gene sets at various levels of granularity. EGSEA has been tested on simulated data and on a number of human and mouse datasets and, based on biologists' feedback, consistently outperforms the individual tools that have been combined. Our evaluation demonstrates the superiority of the ensemble approach for GSE analysis, and its utility to effectively and efficiently extrapolate biological functions and potential involvement in disease processes from lists of differentially regulated genes. EGSEA is available as an R package at http://www.bioconductor.org/packages/EGSEA/ . The gene set collections are available in the R package EGSEAdata from http://www.bioconductor.org/packages/EGSEAdata/ . monther.alhamdoosh@csl.com.au mritchie@wehi.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
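
    EGSEA itself is an R/Bioconductor package, so the snippet below is only a language-neutral toy illustration of the ensemble idea: the rankings that several GSE methods assign to the same gene sets are combined into one collective score by averaging ranks. The method names, p-values, and gene-set labels are invented, and EGSEA's actual scoring functions are more elaborate.

        # Toy illustration of the ensemble idea: average the per-method rankings of gene sets.
        import pandas as pd

        # hypothetical p-values for four gene sets under three GSE methods
        pvals = pd.DataFrame(
            {"camera": [0.001, 0.20, 0.03, 0.50],
             "gsea":   [0.004, 0.15, 0.01, 0.60],
             "ora":    [0.010, 0.40, 0.02, 0.30]},
            index=["SET_A", "SET_B", "SET_C", "SET_D"])

        ranks = pvals.rank(axis=0)                 # rank gene sets within each method
        collective = ranks.mean(axis=1)            # average rank = simple collective score
        print(collective.sort_values())            # SET_A and SET_C float to the top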

  3. Relationship between molecular connectivity and carcinogenic activity: a confirmation with a new software program based on graph theory.

    PubMed Central

    Malacarne, D; Pesenti, R; Paolucci, M; Parodi, S

    1993-01-01

    For a database of 826 chemicals tested for carcinogenicity, we fragmented the structural formula of the chemicals into all possible contiguous-atom fragments with size between two and eight (nonhydrogen) atoms. The fragmentation was obtained using a new software program based on graph theory. We used 80% of the chemicals as a training set and 20% as a test set. The two sets were obtained by random sorting. From the training sets, an average (8 computer runs with independently sorted chemicals) of 315 different fragments were significantly (p < 0.125) associated with carcinogenicity or lack thereof. Even using this relatively low level of statistical significance, 23% of the molecules of the test sets lacked significant fragments. For 77% of the molecules of the test sets, we used the presence of significant fragments to predict carcinogenicity. The average level of accuracy of the predictions in the test sets was 67.5%. Chemicals containing only positive fragments were predicted with an accuracy of 78.7%. The level of accuracy was around 60% for chemicals characterized by contradictory fragments or only negative fragments. In a parallel manner, we performed eight paired runs in which carcinogenicity was attributed randomly to the molecules of the training sets. The fragments generated by these pseudo-training sets were devoid of any predictivity in the corresponding test sets. Using an independent software program, we confirmed (for the complex biological endpoint of carcinogenicity) the validity of a structure-activity relationship approach of the type proposed by Klopman and Rosenkranz with their CASE program. PMID:8275991
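
    The statistical core of this fragment-based approach can be sketched as follows (fragment enumeration itself is omitted, and all variable names are illustrative): given a binary fragment-presence matrix and carcinogenicity labels, fragments are flagged as positive or negative at the abstract's p < 0.125 level with Fisher's exact test, and a new chemical is then called from the majority sign of the significant fragments it contains.

        # Sketch of the statistical step only: flag fragments associated with carcinogenicity
        # and predict a chemical from the majority vote of its significant fragments.
        import numpy as np
        from scipy.stats import fisher_exact

        def significant_fragments(presence, carcinogenic, alpha=0.125):
            # presence: (n_chemicals, n_fragments) 0/1 matrix; carcinogenic: 0/1 labels
            flags = {}
            for j in range(presence.shape[1]):
                has = presence[:, j] == 1
                table = [[np.sum(has & (carcinogenic == 1)), np.sum(has & (carcinogenic == 0))],
                         [np.sum(~has & (carcinogenic == 1)), np.sum(~has & (carcinogenic == 0))]]
                odds, p = fisher_exact(table)
                if p < alpha:
                    flags[j] = +1 if odds > 1 else -1     # positive or negative fragment
            return flags

        def predict(row, flags):
            votes = [sign for j, sign in flags.items() if row[j] == 1]
            if not votes:
                return None                               # no significant fragments -> no call
            return int(np.sign(sum(votes)) > 0)

        # tiny example: fragment 0 occurs only in carcinogens, fragment 1 only in non-carcinogens
        presence = np.array([[1, 0]] * 5 + [[0, 1]] * 5)
        labels = np.array([1] * 5 + [0] * 5)
        flags = significant_fragments(presence, labels)
        print(flags, predict([1, 0], flags))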

  4. Identification of competencies for patient education in physiotherapy using a Delphi approach.

    PubMed

    Forbes, Roma; Mandrusiak, Allison; Smith, Michelle; Russell, Trevor

    2018-06-01

    Patient education is a critical part of physiotherapy practice however an empirically derived set of competencies for its use does not exist. This study aimed to generate a set of competencies for patient education in physiotherapy using a consensus approach. A Delphi study with two rounds using a panel of expert physiotherapists within Australia was undertaken. In the first round, the panel of 12 specialist physiotherapists identified competencies required for patient education in the physiotherapy setting. Framework analysis was applied to develop a set of competencies that were assessed in the second round where ≥80% agreement of importance from the panel indicated consensus. Response rates of specialist physiotherapists agreeing to participate were 67% for the first round and 100% for the second round. Analysis following the first round produced 25 competencies. The second round resulted in agreement on a final set of 22 competencies. This study developed a concise list of competencies for patient education with a high level of expert agreement. By identifying the key competencies in this area, there is potential to benchmark patient education training and assessment of physiotherapists for improved educational and professional outcomes. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  5. Collaborative knowledge acquisition for the design of context-aware alert systems.

    PubMed

    Joffe, Erel; Havakuk, Ofer; Herskovic, Jorge R; Patel, Vimla L; Bernstam, Elmer Victor

    2012-01-01

    To present a framework for combining implicit knowledge acquisition from multiple experts with machine learning and to evaluate this framework in the context of anemia alerts. Five internal medicine residents reviewed 18 anemia alerts, while 'talking aloud'. They identified features that were reviewed by two or more physicians to determine appropriate alert level, etiology and treatment recommendation. Based on these features, data were extracted from 100 randomly-selected anemia cases for a training set and an additional 82 cases for a test set. Two staff internists assigned an alert level, etiology and treatment recommendation before and after reviewing the entire electronic medical record. The training set of 118 cases (100 plus 18) and the test set of 82 cases were explored using RIDOR and JRip algorithms. The feature set was sufficient to assess 93% of anemia cases (intraclass correlation for alert level before and after review of the records by internists 1 and 2 were 0.92 and 0.95, respectively). High-precision classifiers were constructed to identify low-level alerts (precision p=0.87, recall R=0.4), iron deficiency (p=1.0, R=0.73), and anemia associated with kidney disease (p=0.87, R=0.77). It was possible to identify low-level alerts and several conditions commonly associated with chronic anemia. This approach may reduce the number of clinically unimportant alerts. The study was limited to anemia alerts. Furthermore, clinicians were aware of the study hypotheses potentially biasing their evaluation. Implicit knowledge acquisition, collaborative filtering and machine learning were combined automatically to induce clinically meaningful and precise decision rules.
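
    RIDOR and JRip are rule learners from the Weka toolkit; as a rough Python stand-in (not the study's implementation), the sketch below trains a shallow decision tree on hypothetical expert-identified features and reports the per-class precision and recall figures of the kind quoted above. The features, labels, and the 118/82 split sizes only mirror the study's setup and carry no clinical meaning.

        # Rough stand-in for the Weka rule learners used in the study: a shallow decision tree
        # on hypothetical expert-identified features, evaluated by precision and recall.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import precision_score, recall_score

        rng = np.random.default_rng(0)
        # hypothetical features: [hemoglobin, ferritin, creatinine]; label 1 = "low-level alert"
        X = rng.normal([10.0, 80.0, 1.2], [2.0, 60.0, 0.6], size=(200, 3))
        y = ((X[:, 0] > 10.5) & (X[:, 2] < 1.5)).astype(int)

        train, test = slice(0, 118), slice(118, 200)      # mirrors the 118/82 split in the study
        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[train], y[train])
        pred = clf.predict(X[test])
        print("precision", round(precision_score(y[test], pred), 2),
              "recall", round(recall_score(y[test], pred), 2))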

  6. Use of the Threshold of Toxicological Concern (TTC) approach for deriving target values for drinking water contaminants.

    PubMed

    Mons, M N; Heringa, M B; van Genderen, J; Puijker, L M; Brand, W; van Leeuwen, C J; Stoks, P; van der Hoek, J P; van der Kooij, D

    2013-03-15

    Ongoing pollution and improving analytical techniques reveal more and more anthropogenic substances in drinking water sources, and incidentally in treated water as well. In fact, complete absence of any trace pollutant in treated drinking water is an illusion as current analytical techniques are capable of detecting very low concentrations. Most of the substances detected lack toxicity data to derive safe levels and have not yet been regulated. Although the concentrations in treated water usually do not have adverse health effects, their presence is still undesired because of customer perception. This leads to the question how sensitive analytical methods need to become for water quality screening, at what levels water suppliers need to take action and how effective treatment methods need to be designed to remove contaminants sufficiently. Therefore, in the Netherlands a clear and consistent approach called 'Drinking Water Quality for the 21st century (Q21)' has been developed within the joint research program of the drinking water companies. Target values for anthropogenic drinking water contaminants were derived by using the recently introduced Threshold of Toxicological Concern (TTC) approach. The target values for individual genotoxic and steroid endocrine chemicals were set at 0.01 μg/L. For all other organic chemicals the target values were set at 0.1 μg/L. The target value for the total sum of genotoxic chemicals, the total sum of steroid hormones and the total sum of all other organic compounds were set at 0.01, 0.01 and 1.0 μg/L, respectively. The Dutch Q21 approach is further supplemented by the standstill-principle and effect-directed testing. The approach is helpful in defining the goals and limits of future treatment process designs and of analytical methods to further improve and ensure the quality of drinking water, without going to unnecessary extents. Copyright © 2013 Elsevier Ltd. All rights reserved.
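
    The target values quoted above lend themselves to a very small screening helper. The sketch below simply encodes the individual targets (0.01 μg/L for genotoxic and steroid endocrine chemicals, 0.1 μg/L for other organics) and the sum targets (0.01, 0.01, and 1.0 μg/L), and flags measured concentrations that exceed them; the assignment of each substance to a class is assumed to be given, and the substance names are fictitious.

        # Simple encoding of the Q21 target values quoted above; the toxicological class of
        # each chemical is assumed to be known, and the example substances are fictitious.
        INDIVIDUAL_TARGET = {"genotoxic": 0.01, "steroid": 0.01, "other": 0.1}   # ug/L
        SUM_TARGET = {"genotoxic": 0.01, "steroid": 0.01, "other": 1.0}          # ug/L

        def screen(measurements):
            # measurements: list of (name, class, concentration in ug/L) tuples
            exceedances = []
            sums = {"genotoxic": 0.0, "steroid": 0.0, "other": 0.0}
            for name, cls, conc in measurements:
                sums[cls] += conc
                if conc > INDIVIDUAL_TARGET[cls]:
                    exceedances.append(f"{name}: {conc} ug/L exceeds {INDIVIDUAL_TARGET[cls]} ug/L")
            for cls, total in sums.items():
                if total > SUM_TARGET[cls]:
                    exceedances.append(f"sum of {cls} chemicals ({total:.3f} ug/L) exceeds {SUM_TARGET[cls]} ug/L")
            return exceedances

        print(screen([("substance X", "other", 0.25), ("substance Y", "genotoxic", 0.005)]))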

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu; Sun, Hui; Cheng, Li-Tien

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and the binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach.
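
    A greatly simplified illustration of such a stochastic level-set step is given below: the level-set function is advanced under a prescribed normal velocity plus a random fluctuation term using an explicit Euler-Maruyama update (Itô rather than the paper's Stratonovich treatment). The VISM interfacial force, the reinitialization machinery, and all physical parameters are omitted or invented here.

        # Greatly simplified 2D sketch: evolve phi under  phi_t + v_n |grad phi| = 0  with a
        # fluctuating normal speed v_n, using one explicit Euler-Maruyama step per call.
        import numpy as np

        def stochastic_level_set_step(phi, h, dt, v0=1.0, noise_amp=0.2, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            gx, gy = np.gradient(phi, h)
            grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
            drift = -v0 * grad_norm * dt                                   # deterministic normal motion
            diffusion = -noise_amp * grad_norm * np.sqrt(dt) * rng.standard_normal(phi.shape)
            return phi + drift + diffusion

        h = 0.02
        x = np.arange(-1.0, 1.0, h)
        X, Y = np.meshgrid(x, x)
        phi = np.sqrt(X**2 + Y**2) - 0.5              # signed distance to a circle of radius 0.5
        rng = np.random.default_rng(0)
        for _ in range(20):
            phi = stochastic_level_set_step(phi, h, dt=0.005, rng=rng)
        near = np.abs(phi) < h                        # grid points close to the fluctuating interface
        print("mean interface radius ~", round(float(np.mean(np.sqrt(X**2 + Y**2)[near])), 3))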

  8. Priority setting and implementation in a centralized health system: a case study of Kerman province in Iran.

    PubMed

    Khayatzadeh-Mahani, Akram; Fotaki, Marianna; Harvey, Gillian

    2013-08-01

    The question of how priority setting processes work remains topical, contentious and political in every health system across the globe. It is particularly acute in the context of developing countries because of the mismatch between needs and resources, which is often compounded by an underdeveloped capacity for decision making and weak institutional infrastructures. Yet there is limited research into how the process of setting and implementing health priorities works in developing countries. This study aims to address this gap by examining how a national priority setting programme works in the centralized health system of Iran and what factors influence its implementation at the meso and micro levels. We used a qualitative case study approach, incorporating mixed methods: in-depth interviews at three levels and a textual analysis of policy documents. The data analysis showed that the process of priority setting is non-systematic, there is little transparency as to how specific priorities are decided, and the decisions made are separated from their implementation. This is due to the highly centralized system, whereby health priorities are set at the macro level without involving meso or micro local levels or any representative of the public. Furthermore, the two main benefit packages are decided by different bodies (Ministry of Health and Medical Education and Ministry of Welfare and Social Security) and there is no co-ordination between them. The process is also heavily influenced by political pressure exerted by various groups, mostly medical professionals who attempt to control priority setting in accordance with their interests. Finally, there are many weaknesses in the implementation of priorities, resulting in a growing gap between rural and urban areas in terms of access to health services.

  9. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
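
    The gradient-augmented idea can be illustrated in one dimension with a hedged sketch: the level-set value φ and its derivative ψ = ∂φ/∂x are transported together by a semi-Lagrangian step, and cubic Hermite interpolation over (φ, ψ) is what provides the subgrid interface resolution mentioned above. A constant advection velocity is assumed so that ψ needs no velocity-gradient correction; the full method (three dimensions, reinitialization, coupling to momentum and energy) is not reproduced.

        # 1D illustration of the gradient-augmented idea: advect phi and psi = dphi/dx together
        # with a semi-Lagrangian step and cubic Hermite interpolation on a periodic grid.
        import numpy as np

        def hermite_eval(xq, phi, psi, dx):
            n = len(phi)
            i = np.floor(xq / dx).astype(int) % n          # periodic cell index
            ip = (i + 1) % n
            t = xq / dx - np.floor(xq / dx)                # local coordinate within the cell
            h00, h10 = 2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t
            h01, h11 = -2*t**3 + 3*t**2, t**3 - t**2
            val = h00*phi[i] + h10*dx*psi[i] + h01*phi[ip] + h11*dx*psi[ip]
            d00, d10 = 6*t**2 - 6*t, 3*t**2 - 4*t + 1
            d01, d11 = -6*t**2 + 6*t, 3*t**2 - 2*t
            der = (d00*phi[i] + d01*phi[ip]) / dx + d10*psi[i] + d11*psi[ip]
            return val, der

        n, u, dt, steps = 64, 1.0, 0.01, 30
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        dx = x[1] - x[0]
        phi = np.sin(2*np.pi*(x - 0.3))                    # upward zero crossing ("interface") at x = 0.3
        psi = 2*np.pi*np.cos(2*np.pi*(x - 0.3))            # its exact derivative
        for _ in range(steps):                             # semi-Lagrangian step: sample at departure points
            xd = (x - u*dt) % 1.0
            phi, psi = hermite_eval(xd, phi, psi, dx)
        cross = np.where((phi[:-1] < 0) & (phi[1:] >= 0))[0][0]
        x0 = x[cross] - phi[cross] * dx / (phi[cross + 1] - phi[cross])
        print("interface at x =", round(float(x0), 4), "(exact:", round(0.3 + u*dt*steps, 4), ")")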

  10. Managing healthcare budgets in times of austerity: the role of program budgeting and marginal analysis.

    PubMed

    Mitton, Craig; Dionne, Francois; Donaldson, Cam

    2014-04-01

    Given limited resources, priority setting or choice making will remain a reality at all levels of publicly funded healthcare across countries for many years to come. The pressures may well be even more acute as the impact of the economic crisis of 2008 continues to play out but, even as economies begin to turn around, resources within healthcare will be limited, thus some form of rationing will be required. Over the last few decades, research on healthcare priority setting has focused on methods of implementation as well as on the development of approaches related to fairness and legitimacy and on more technical aspects of decision making including the use of multi-criteria decision analysis. Recently, research has led to better understanding of evaluating priority setting activity including defining 'success' and articulating key elements for high performance. This body of research, however, often goes untapped by those charged with making challenging decisions and as such, in line with prevailing public sector incentives, decisions are often reliant on historical allocation patterns and/or political negotiation. These archaic and ineffective approaches not only lead to poor decisions in terms of value for money but further do not reflect basic ethical conditions that can lead to fairness in the decision-making process. The purpose of this paper is to outline a comprehensive approach to priority setting and resource allocation that has been used in different contexts across countries. This will provide decision makers with a single point of access for a basic understanding of relevant tools when faced with having to make difficult decisions about what healthcare services to fund and what not to fund. The paper also addresses several key issues related to priority setting including how health technology assessments can be used, how performance can be improved at a practical level, and what ongoing resource management practice should look like. In terms of future research, one of the most important areas of priority setting that needs further attention is how best to engage public members.

  11. From bird's eye views to molecular communities: two-layered visualization of structure-activity relationships in large compound data sets

    NASA Astrophysics Data System (ADS)

    Kayastha, Shilva; Kunimoto, Ryo; Horvath, Dragos; Varnek, Alexandre; Bajorath, Jürgen

    2017-11-01

    The analysis of structure-activity relationships (SARs) becomes rather challenging when large and heterogeneous compound data sets are studied. In such cases, many different compounds and their activities need to be compared, which quickly goes beyond the capacity of subjective assessments. For a comprehensive large-scale exploration of SARs, computational analysis and visualization methods are required. Herein, we introduce a two-layered SAR visualization scheme specifically designed for increasingly large compound data sets. The approach combines a new compound pair-based variant of generative topographic mapping (GTM), a machine learning approach for nonlinear mapping, with chemical space networks (CSNs). The GTM component provides a global view of the activity landscapes of large compound data sets, in which informative local SAR environments are identified, augmented by a numerical SAR scoring scheme. Prioritized local SAR regions are then projected into CSNs that resolve these regions at the level of individual compounds and their relationships. Analysis of CSNs makes it possible to distinguish between regions having different SAR characteristics and select compound subsets that are rich in SAR information.
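
    A minimal sketch of the chemical space network (CSN) layer described above: compounds become nodes and an edge is added when a pairwise fingerprint similarity exceeds a threshold, with large activity differences along an edge flagging potential activity cliffs. The fingerprints, activities and the 0.55 cut-off are invented placeholders, and the GTM layer is not reproduced here.

```python
import itertools
import networkx as nx

compounds = {                       # name -> (feature-bit set, activity such as pKi)
    "cpd1": ({1, 4, 7, 9}, 6.2),
    "cpd2": ({1, 4, 7, 8}, 7.9),
    "cpd3": ({2, 3, 5, 9}, 5.1),
    "cpd4": ({1, 4, 8, 9}, 8.4),
}

def tanimoto(a, b):
    """Tanimoto/Jaccard similarity of two binary fingerprints stored as sets."""
    return len(a & b) / len(a | b)

G = nx.Graph()
for name, (fp, act) in compounds.items():
    G.add_node(name, activity=act)
for (n1, (fp1, _)), (n2, (fp2, _)) in itertools.combinations(compounds.items(), 2):
    sim = tanimoto(fp1, fp2)
    if sim >= 0.55:                 # similarity threshold defining CSN edges (assumed)
        G.add_edge(n1, n2, similarity=round(sim, 2))

# edges joining similar compounds with very different activities hint at activity cliffs
for u, v, d in G.edges(data=True):
    delta = abs(G.nodes[u]["activity"] - G.nodes[v]["activity"])
    print(u, v, "similarity", d["similarity"], "| activity difference", round(delta, 1))
```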

  12. Noise and performance calibration study of a Mach 2.2 supersonic cruise aircraft

    NASA Technical Reports Server (NTRS)

    Mascitti, V. R.; Maglieri, D. J.

    1979-01-01

    The baseline configuration of a Mach 2.2 supersonic cruise concept, employing a 1980-1985 technology level dry turbojet engine with mechanical suppression, was calibrated to identify differences in noise levels and performance as determined by the methodology and ground rules used. In addition, economic and noise information is provided consistent with a previous study based on an advanced-technology Mach 2.7 configuration, reported separately. Results indicate that the difference between NASA and manufacturer performance methodology is small. Resizing the aircraft to NASA ground rules results in negligible changes in takeoff noise levels (less than 1 EPNdB), but approach noise is reduced by 5.3 EPNdB as a result of increasing approach speed. For the power setting chosen, engine oversizing resulted in no reduction in traded noise. In terms of summated noise level, a 6 EPNdB reduction is realized for a 5% increase in total operating costs.

  13. Population-level approaches to universal health coverage in resource-poor settings: lessons from tobacco control policy in Vietnam.

    PubMed

    Higashi, Hideki; Khuong, Tuan A; Ngo, Anh D; Hill, Peter S

    2011-07-01

    Population-based health promotion and disease prevention approaches are essential elements in achieving universal health coverage; yet they frequently do not appear on national policy agendas. This paper suggests that resource-poor countries should take greater advantage of such approaches to reach all segments of the population to positively affect health outcomes and equity, especially considering the epidemic of chronic non-communicable diseases and associated modifiable risk factors. Tobacco control policy development and implementation in Vietnam provides a case study to discuss opportunities and challenges associated with such strategies.

  14. Introducing Astronomy Related Research into Non-Astronomy Courses

    NASA Astrophysics Data System (ADS)

    Walker, Douglas

    The concern over the insufficient number of students choosing to enter the science and engineering fields has been discussed and documented for years. While historically addressed at the national level, many states are now recognizing that the lack of a highly skilled technical workforce within their borders has a significant effect on their economic health. Astronomy, as a science field, is no exception. Articles appear periodically in the most popular astronomy magazines asking the question, "Where are the young astronomers?" Astronomy courses at the community college level are normally restricted to introductory astronomy I and II classes that introduce the student to the basics of the night sky and astronomy. The vast majority of these courses are geared toward the non-science major and are considered by many students to be easy, watered-down courses in comparison to typical physics and related science courses. A majority of students who enroll in these classes are not considering majors in science or astronomy, since they believe that science is "boring and won't produce any type of career for them." Is there any way to attract students? This paper discusses an approach being undertaken at Estrella Mountain Community College to introduce students in selected mathematics courses to aspects of astronomy-related research and to demonstrate that science is anything but boring. Basic statistical techniques and an understanding of geometry are applied to a large, previously unanalyzed data set containing the magnitudes and phase characteristics of sets of variable stars. The students' work consisted of developing and presenting a project that analyzed selected aspects of the variable star data set. The description of the data set, the approach the students took for their research projects, and results from a survey conducted at semester's end to determine whether students' interest in and appreciation of astronomy were affected are presented. Using this data set, the students had the opportunity for original research and discoveries.
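
    The kind of exercise described above can be illustrated with a few lines of analysis on a synthetic variable star light curve: fold the observations on a trial period and compute simple descriptive statistics. The period and magnitudes below are invented classroom-style values, not the actual data set used in the course.

```python
import numpy as np

rng = np.random.default_rng(3)
period = 0.57                                        # assumed trial period (days)
t = np.sort(rng.uniform(0.0, 30.0, 120))             # observation times (days)
mag = 12.0 + 0.4 * np.sin(2 * np.pi * t / period)    # synthetic magnitudes

phase = (t % period) / period                        # fold onto phase in [0, 1)
order = np.argsort(phase)
phase, mag = phase[order], mag[order]

print("mean magnitude:", round(mag.mean(), 2))
print("peak-to-peak amplitude:", round(mag.max() - mag.min(), 2))
```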

  15. With a little help from a computer: discriminating between bacterial and viral meningitis based on dominance-based rough set approach analysis

    PubMed Central

    Gowin, Ewelina; Januszkiewicz-Lewandowska, Danuta; Słowiński, Roman; Błaszczyński, Jerzy; Michalak, Michał; Wysocki, Jacek

    2017-01-01

    Differential diagnosis of bacterial and viral meningitis remains an important clinical problem. A number of methods to assist in the diagnosis of meningitis have been developed, but none of them have been found to have high specificity with 100% sensitivity. We conducted a retrospective analysis of the medical records of 148 children hospitalized in St. Joseph Children's Hospital in Poznań. In this study, we applied for the first time the original methodology of the dominance-based rough set approach (DRSA) to diagnostic patterns of meningitis data and represented them by decision rules useful in discriminating between bacterial and viral meningitis. The induction algorithm is called VC-DomLEM; it has been implemented as a software package called jMAF (http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html), based on the java Rough Set (jRS) library. In the studied group, there were 148 patients (78 boys and 70 girls), and the mean age was 85 months. We analyzed 14 attributes, of which only 4 were used to generate the 6 rules, with C-reactive protein (CRP) being the most valuable. Factors associated with bacterial meningitis were: CRP level ≥86 mg/L, number of leukocytes in cerebrospinal fluid (CSF) ≥4481 μL⁻¹, symptom duration no longer than 2 days, or age less than 1 month. Factors associated with viral meningitis were CRP level not higher than 19 mg/L, or CRP level not higher than 84 mg/L in a patient older than 11 months with no more than 1100 leukocytes μL⁻¹ in CSF. We established the minimum set of attributes significant for the classification of patients with meningitis. This is a new set of rules which, although intuitively anticipated by some clinicians, has not been formally demonstrated until now. PMID:28796045
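
    The abstract reports the factors entering the six VC-DomLEM rules but not the rules themselves, so the sketch below simply encodes those reported cut-offs as independent if-then checks. The real rules very likely combine conditions, and the function and argument names are invented; this is an illustration, not a clinical tool.

```python
def suggest_etiology(crp, csf_wbc, duration_days, age_months):
    """Return 'bacterial', 'viral' or 'undetermined' from the factors reported above.
    Each factor is treated as an independent trigger purely for illustration; the
    actual VC-DomLEM rules very likely combine conditions."""
    if crp >= 86 or csf_wbc >= 4481 or duration_days <= 2 or age_months < 1:
        return "bacterial"
    if crp <= 19 or (crp <= 84 and age_months > 11 and csf_wbc <= 1100):
        return "viral"
    return "undetermined"

print(suggest_etiology(crp=120, csf_wbc=5200, duration_days=1, age_months=36))  # bacterial
print(suggest_etiology(crp=15,  csf_wbc=300,  duration_days=4, age_months=24))  # viral
```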

  16. With a little help from a computer: discriminating between bacterial and viral meningitis based on dominance-based rough set approach analysis.

    PubMed

    Gowin, Ewelina; Januszkiewicz-Lewandowska, Danuta; Słowiński, Roman; Błaszczyński, Jerzy; Michalak, Michał; Wysocki, Jacek

    2017-08-01

    Differential diagnosis of bacterial and viral meningitis remains an important clinical problem. A number of methods to assist in the diagnosis of meningitis have been developed, but none of them have been found to have high specificity with 100% sensitivity. We conducted a retrospective analysis of the medical records of 148 children hospitalized in St. Joseph Children's Hospital in Poznań. In this study, we applied for the first time the original methodology of the dominance-based rough set approach (DRSA) to diagnostic patterns of meningitis data and represented them by decision rules useful in discriminating between bacterial and viral meningitis. The induction algorithm is called VC-DomLEM; it has been implemented as a software package called jMAF (http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html), based on the java Rough Set (jRS) library. In the studied group, there were 148 patients (78 boys and 70 girls), and the mean age was 85 months. We analyzed 14 attributes, of which only 4 were used to generate the 6 rules, with C-reactive protein (CRP) being the most valuable. Factors associated with bacterial meningitis were: CRP level ≥86 mg/L, number of leukocytes in cerebrospinal fluid (CSF) ≥4481 μL⁻¹, symptom duration no longer than 2 days, or age less than 1 month. Factors associated with viral meningitis were CRP level not higher than 19 mg/L, or CRP level not higher than 84 mg/L in a patient older than 11 months with no more than 1100 leukocytes μL⁻¹ in CSF. We established the minimum set of attributes significant for the classification of patients with meningitis. This is a new set of rules which, although intuitively anticipated by some clinicians, has not been formally demonstrated until now.

  17. Proactive Approach for Safe Use of Antimicrobial Coatings in Healthcare Settings: Opinion of the COST Action Network AMiCI

    PubMed Central

    Ahonen, Merja; Kahru, Anne; Ivask, Angela; Kasemets, Kaja; Kõljalg, Siiri; Mantecca, Paride; Vinković Vrček, Ivana; Keinänen-Toivola, Minna M.; Crijns, Francy

    2017-01-01

    Infections and infectious diseases are considered a major challenge to human health in healthcare units worldwide. This opinion paper was initiated by the EU COST Action network AMiCI (AntiMicrobial Coating Innovations) and focuses on scientific information essential for weighing the risks and benefits of antimicrobial surfaces in healthcare settings. Particular attention is drawn to nanomaterial-based antimicrobial surfaces in frequently touched areas in healthcare settings and the potential of these nano-enabled coatings to induce (eco)toxicological hazard and antimicrobial resistance. Possibilities to minimize those risks, e.g. at the level of safe-by-design, are demonstrated. PMID:28362344

  18. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
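
    A hedged sketch of the consistency-maintenance idea: parameters of a fine (daily) model are adjusted by numerical optimization so that its aggregated output matches a coarse (monthly) model's prediction, with a penalty that keeps the parameters near their prior values. Both toy models, the target value and the penalty weight are invented; the paper applies this idea to forest canopy process models.

```python
import numpy as np
from scipy.optimize import minimize

coarse_monthly_flux = 62.0                     # coarse-model prediction (arbitrary units)
days = np.arange(30)

def fine_model(params):
    base, amp = params
    return base + amp * np.sin(2 * np.pi * days / 30)        # daily (fine-scale) flux

def inconsistency(params, prior=(1.8, 0.5), w=0.1):
    agg = fine_model(params).sum()             # aggregate fine output to the coarse scale
    return (agg - coarse_monthly_flux) ** 2 + w * np.sum((np.asarray(params) - prior) ** 2)

res = minimize(inconsistency, x0=[1.8, 0.5], method="Nelder-Mead")
print("adjusted fine-model parameters:", np.round(res.x, 3))
print("aggregated fine-model flux:", round(fine_model(res.x).sum(), 1))   # ~62
```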

  19. Turbopump Design and Analysis Approach for Nuclear Thermal Rockets

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng S.; Veres, Joseph P.; Fittje, James E.

    2006-01-01

    A rocket propulsion system, whether it is a chemical rocket or a nuclear thermal rocket, is fairly complex in detail but rather simple in principle. Among all the interacting parts, three components stand out: they are pumps and turbines (turbopumps), and the thrust chamber. To obtain an understanding of the overall rocket propulsion system characteristics, one starts from analyzing the interactions among these three components. It is therefore of utmost importance to be able to satisfactorily characterize the turbopump, level by level, at all phases of a vehicle design cycle. Here at NASA Glenn Research Center, as the starting phase of a rocket engine design, specifically a Nuclear Thermal Rocket Engine design, we adopted the approach of using a high level system cycle analysis code (NESS) to obtain an initial analysis of the operational characteristics of a turbopump required in the propulsion system. A set of turbopump design codes (PumpDes and TurbDes) were then executed to obtain sizing and performance characteristics of the turbopump that were consistent with the mission requirements. A set of turbopump analyses codes (PUMPA and TURBA) were applied to obtain the full performance map for each of the turbopump components; a two dimensional layout of the turbopump based on these mean line analyses was also generated. Adequacy of the turbopump conceptual design will later be determined by further analyses and evaluation. In this paper, descriptions and discussions of the aforementioned approach are provided and future outlooks are discussed.

  20. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    PubMed

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  1. Using modular psychotherapy in school mental health: Provider perspectives on intervention-setting fit

    PubMed Central

    Lyon, Aaron R.; Ludwig, Kristy; Romano, Evalynn; Koltracht, Jane; Stoep, Ann Vander; McCauley, Elizabeth

    2013-01-01

    Objective The “fit” or appropriateness of well-researched interventions within usual care contexts is among the most commonly-cited, but infrequently researched, factors in the successful implementation of new practices. The current study was initiated to address two exploratory research questions: (1) How do clinicians describe their current school mental health service delivery context? and (2) How do clinicians describe the fit between modular psychotherapy and multiple levels of the school mental health service delivery context? Method Following a year-long training and consultation program in an evidence-based, modular approach to psychotherapy, semi-structured qualitative interviews were conducted with seventeen school-based mental health providers to evaluate their perspectives on the appropriateness of implementing the approach within a system of school-based health centers. Interviews were transcribed and coded for themes using conventional and directed content analysis. Results Findings identified key elements of the school mental health context including characteristics of the clinicians, their practices, the school context, and the service recipients. Specific evaluation of intervention-setting appropriateness elicited many comments about both practical and value-based (e.g., cultural considerations) aspects at the clinician and client levels, but fewer comments at the school or organizational levels. Conclusions Results suggest that a modular approach may fit well with the school mental health service context, especially along practical aspects of appropriateness. Future research focused on the development of methods for routinely assessing appropriateness at different stages of the implementation process is recommended. PMID:24134063

  2. Topology optimization analysis based on the direct coupling of the boundary element method and the level set method

    NASA Astrophysics Data System (ADS)

    Vitório, Paulo Cezar; Leonel, Edson Denner

    2017-12-01

    The structural design must ensure suitable working conditions by satisfying safety and economic criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, material strength and structural system configuration. In this regard, topology optimization aims at achieving the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material while respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of such approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the Von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
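
    The sketch below caricatures the level-set update used in such evolutionary topology optimization: a made-up Von Mises stress field defines a speed that grows cavities where the stress is low. The stress field, threshold and simple (non-upwind) update are placeholders; in the study the speed comes from a BEM elastic analysis.

```python
import numpy as np

n = 101
h = 1.0 / (n - 1)
y, x = np.mgrid[0:1:n*1j, 0:1:n*1j]
phi = 0.25 - np.hypot(x - 0.5, y - 0.5)                   # phi > 0 inside an initial cavity
stress = np.exp(-((x - 0.2)**2 + (y - 0.2)**2) / 0.02)    # fake Von Mises stress hot spot
threshold = 0.3
speed = threshold - stress                                 # enlarge the cavity where stress is low

dt = 0.2 * h
for _ in range(50):
    gy, gx = np.gradient(phi, h)
    phi = phi + dt * speed * np.hypot(gx, gy)              # simple (non-upwind) level-set update

print(f"cavity occupies {(phi > 0).mean():.1%} of the design domain")
```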

  3. A Proposal to Plan and Develop a Sample Set of Drill and Testing Materials, Based on Audio and Visual Environmental and Situational Stimuli, Aimed at Training and Testing in the Creation of Original Utterances by Foreign Language Students at the Secondary and College Levels.

    ERIC Educational Resources Information Center

    Obrecht, Dean H.

    This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…

  4. Mapping soil types from multispectral scanner data.

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Zachary, A. L.

    1971-01-01

    Multispectral remote sensing and computer-implemented pattern recognition techniques were used for automatic 'mapping' of soil types. This approach involves subjective selection of a set of reference samples from a gray-level display of spectral variations which was generated by a computer. Each resolution element is then classified using a maximum likelihood ratio. Output is a computer printout on which the researcher assigns a different symbol to each class. Four soil test areas in Indiana were experimentally examined using this approach, and partially successful results were obtained.
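
    A small sketch of the per-element maximum-likelihood classification described above: each soil class is modelled by a Gaussian fitted to analyst-selected reference samples, and each resolution element is assigned to the class with the highest likelihood. The band reflectances and class names below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = {                                   # analyst-selected training samples (4 bands)
    "soil_A": rng.normal([40, 55, 60, 30], 3, size=(50, 4)),
    "soil_B": rng.normal([70, 80, 65, 45], 3, size=(50, 4)),
}
stats = {c: (s.mean(axis=0), np.cov(s, rowvar=False)) for c, s in reference.items()}

def log_likelihood(pixel, mean, cov):
    """Gaussian log-likelihood of one resolution element (constant term dropped)."""
    d = pixel - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

pixel = np.array([42.0, 54.0, 61.0, 29.0])       # one resolution element
scores = {c: log_likelihood(pixel, m, S) for c, (m, S) in stats.items()}
print("assigned class:", max(scores, key=scores.get))   # -> soil_A
```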

  5. A standard satellite control reference model

    NASA Technical Reports Server (NTRS)

    Golden, Constance

    1994-01-01

    This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.

  6. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
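
    A minimal sketch of the post-processing chain described above, assuming a voxel-wise likelihood map is already available: threshold it, fill holes to obtain the initial contour, and hand the mask to a level-set refinement stage (only indicated by a comment here). The synthetic map and the 0.5 threshold are placeholders, not the trained DL-CNN output.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def initial_contour(likelihood_map, threshold=0.5):
    """Threshold a likelihood map and fill interior holes to get an initial mask."""
    mask = likelihood_map >= threshold
    return binary_fill_holes(mask)

# synthetic 2D slice of a likelihood map: a bright disc with an artificial hole
yy, xx = np.mgrid[0:64, 0:64]
lmap = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(float)
lmap[30:34, 30:34] = 0.0

mask = initial_contour(lmap)
print(mask.sum(), "pixels inside the initial contour")
# a 3D/2D level-set stage would then refine this mask toward the bladder wall
```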

  7. Heuristic approaches for energy-efficient shared restoration in WDM networks

    NASA Astrophysics Data System (ADS)

    Alilou, Shahab

    In recent years, there has been ongoing research on the design of energy-efficient Wavelength Division Multiplexing (WDM) networks. The explosive growth of Internet traffic has led to increased power consumption of network components. Network survivability has also been a relevant research topic, as it plays a crucial role in assuring continuity of service with no disruption, regardless of network component failure. Network survivability mechanisms tend to utilize considerable resources such as spare capacity in order to protect and restore information. This thesis investigates techniques for reducing energy demand and enhancing energy efficiency in the context of network survivability. We propose two novel heuristic energy-efficient shared protection approaches for WDM networks. These approaches intend to save energy by putting devices that are not in use into sleep mode while providing shared backup paths to satisfy network survivability. The first approach exploits properties of a mathematical series in order to assign weights to the network links. It aims at reducing power consumption in the network indirectly by aggregating traffic on a set of nodes and links with a high traffic load level. Routing traffic on links and nodes that are already carrying traffic makes it possible for the links and nodes with no load to be put into sleep mode. The second approach dynamically routes traffic through nodes and links with a high traffic load level. Similar to the first approach, it computes a pair of paths for every newly arrived demand, by comparing the power consumption of nodes and links in the network before the demand arrives with their potential power consumption if they are chosen along the paths of this demand. Simulations of two different networks were used to compare the total network power consumption obtained using the proposed techniques against a standard shared-path restoration scheme. Shared-path restoration is a network survivability method in which a link-disjoint backup path and wavelength are reserved at the time of call setup for a working path. However, in order to reduce spare capacity consumption, this reserved backup path and wavelength may be shared with other backup paths. The Pool Sharing Scheme (PSS) is employed to implement the shared-path restoration scheme [1]. In an optical network, the failure of a single link leads to the failure of all the lightpaths that pass through that particular link. PSS ensures that the amount of backup bandwidth required on a link to restore the failed connections will not be more than the total amount of reserved backup bandwidth on that link. Simulation results indicate that the proposed approaches lead to up to 35% power savings in WDM networks when the traffic load is low. However, the power saving decreases to 14% at high traffic load levels. Furthermore, in terms of the total capacity consumption for working paths, PSS outperforms the two proposed approaches, as expected. In terms of total capacity consumption, all the approaches behave similarly. In general, at low traffic load levels, the two proposed approaches behave similarly to PSS in terms of average link load and the ratio of blocked demands. Nevertheless, at high traffic load, the proposed approaches result in a higher ratio of blocked demands than PSS. They also lead to a higher average link load than PSS for an equal number of generated demands.
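
    A hedged sketch of the core routing idea in the second heuristic: make links that already carry traffic cheap and sleeping links expensive, so shortest-path routing piles new demands onto active equipment. The cost rule and the toy four-node topology are illustrative, not the thesis's exact cost model, and no backup-path computation is shown.

```python
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", load=3)    # active link
G.add_edge("B", "C", load=2)    # active link
G.add_edge("A", "D", load=0)    # idle link (sleeping)
G.add_edge("D", "C", load=0)    # idle link (sleeping)

WAKE_UP_COST = 10.0             # penalty for powering up a sleeping link (assumed value)
for u, v, d in G.edges(data=True):
    d["weight"] = 1.0 if d["load"] > 0 else WAKE_UP_COST

path = nx.shortest_path(G, "A", "C", weight="weight")
print(path)                      # -> ['A', 'B', 'C']: reuses links already carrying traffic
for u, v in zip(path, path[1:]):
    G[u][v]["load"] += 1         # the new demand keeps these links busy
```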

  8. A large-scale, long-term study of scale drift: The micro view and the macro view

    NASA Astrophysics Data System (ADS)

    He, W.; Li, S.; Kingsbury, G. G.

    2016-11-01

    The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates that have occurred since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.

  9. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  10. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  11. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
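
    A toy version of the bounds-from-constrained-optimization idea shared by the three records above: given only a couple of bulk measurements, linear programming can minimise and maximise a quantity of interest over all compositions consistent with them, and the gap between the two bounds reflects the remaining uncertainty. The four sulfation levels and the measured mean of 1.8 sulfates per disaccharide are invented, far simpler than the 23 measurements used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# unknowns: fractions x0..x3 of disaccharides carrying 0, 1, 2, 3 sulfates
A_eq = np.array([
    [1.0, 1.0, 1.0, 1.0],   # fractions sum to 1
    [0.0, 1.0, 2.0, 3.0],   # measured mean sulfate level per disaccharide
])
b_eq = np.array([1.0, 1.8])
bounds = [(0.0, 1.0)] * 4

c = np.array([0.0, 0.0, 0.0, 1.0])      # objective: fraction of trisulfated units
lo = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # minimise the fraction
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # maximise the fraction
print(f"trisulfated fraction lies between {lo.fun:.2f} and {-hi.fun:.2f}")
```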

  12. Deep learning with non-medical training used for chest pathology identification

    NASA Astrophysics Data System (ADS)

    Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit

    2015-03-01

    In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Deep convolutional neural network (CNN) classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
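
    A hedged sketch of the classification stage: descriptors from a CNN pretrained on non-medical images (represented here by random placeholders) are concatenated with low-level features and fed to an off-the-shelf classifier, with AUC as the figure of merit. The array shapes and the SVM choice are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 93                                              # images, as in the study
cnn_feats = rng.normal(size=(n, 512))               # stand-in for pretrained CNN activations
low_level = rng.normal(size=(n, 32))                # stand-in for low-level texture features
y = rng.integers(0, 2, size=n)                      # pathology present / absent (random here)

X = np.hstack([cnn_feats, low_level])               # feature combination
scores = cross_val_predict(SVC(probability=True), X, y,
                           cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, scores), 2))   # ~0.5 on these random placeholders
```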

  13. Benchmarking of density functionals for a soft but accurate prediction and assignment of (1) H and (13)C NMR chemical shifts in organic and biological molecules.

    PubMed

    Benassi, Enrico

    2017-01-15

    A number of programs and tools that simulate ¹H and ¹³C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to achieve sufficiently high accuracy, the rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models based on the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in the solvated phase at the density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets.
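
    A minimal sketch of the linear-calibration idea referenced above: computed isotropic shieldings are regressed against experimental chemical shifts, and the fitted slope and intercept then convert new computed shieldings into predicted shifts. The shielding and shift values below are invented, not benchmark data from the study.

```python
import numpy as np

sigma_calc = np.array([31.9, 29.4, 26.1, 24.3, 23.0])   # computed 1H isotropic shieldings (ppm)
delta_exp  = np.array([0.9,  3.4,  6.8,  8.5,  9.9])    # experimental chemical shifts (ppm)

slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)  # linear calibration model
print(f"delta = {slope:.3f} * sigma + {intercept:.2f}")

sigma_new = 27.5                                         # shielding of a new proton
print("predicted shift:", round(slope * sigma_new + intercept, 2), "ppm")
```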

  14. Emergency and urgent care capacity in a resource-limited setting: an assessment of health facilities in western Kenya

    PubMed Central

    Burke, Thomas F; Hines, Rosemary; Ahn, Roy; Walters, Michelle; Young, David; Anderson, Rachel Eleanor; Tom, Sabrina M; Clark, Rachel; Obita, Walter; Nelson, Brett D

    2014-01-01

    Objective Injuries, trauma and non-communicable diseases are responsible for a rising proportion of death and disability in low-income and middle-income countries. Delivering effective emergency and urgent healthcare for these and other conditions in resource-limited settings is challenging. In this study, we sought to examine and characterise emergency and urgent care capacity in a resource-limited setting. Methods We conducted an assessment within all 30 primary and secondary hospitals and within a stratified random sampling of 30 dispensaries and health centres in western Kenya. The key informants were the most senior facility healthcare provider and manager available. Emergency physician researchers utilised a semistructured assessment tool, and data were analysed using descriptive statistics and thematic coding. Results No lower level facilities and 30% of higher level facilities reported having a defined, organised approach to trauma. 43% of higher level facilities had access to an anaesthetist. The majority of lower level facilities had suture and wound care supplies and gloves but typically lacked other basic trauma supplies. For cardiac care, 50% of higher level facilities had morphine, but a minority had functioning ECG, sublingual nitroglycerine or a defibrillator. Only 20% of lower level facilities had glucometers, and only 33% of higher level facilities could care for diabetic emergencies. No facilities had sepsis clinical guidelines. Conclusions Large gaps in essential emergency care capabilities were identified at all facility levels in western Kenya. There are great opportunities for a universally deployed basic emergency care package, an advanced emergency care package and facility designation scheme, and a reliable prehospital care transportation and communications system in resource-limited settings. PMID:25260371

  15. Comprehensive Yet Scalable Health Information Systems for Low Resource Settings: A Collaborative Effort in Sierra Leone

    PubMed Central

    Braa, Jørn; Kanter, Andrew S.; Lesh, Neal; Crichton, Ryan; Jolliffe, Bob; Sæbø, Johan; Kossi, Edem; Seebregts, Christopher J.

    2010-01-01

    We address the problem of how to integrate health information systems in low-income African countries in which technical infrastructure and human resources vary widely within countries. We describe a set of tools to meet the needs of different service areas including managing aggregate indicators, patient level record systems, and mobile tools for community outreach. We present the case of Sierra Leone and use this case to motivate and illustrate an architecture that allows us to provide services at each level of the health system (national, regional, facility and community) and provide different configurations of the tools as appropriate for the individual area. Finally, we present a collaborative implementation of this approach in Sierra Leone. PMID:21347003

  16. Understanding the DSM-5: stasis and change.

    PubMed

    Cooper, Rachel

    2018-03-01

    This paper aims to understand the DSM-5 through situating it within the context of the historical development of the DSM series. When one looks at the sets of diagnostic criteria, the DSM-5 is strikingly similar to the DSM-IV. I argue that at this level the DSM has become 'locked-in' and difficult to change. At the same time, at the structural, or conceptual, level there have been radical changes, for example in the definition of 'mental disorder', in the role of theory and of values, and in the abandonment of the multiaxial approach to diagnosis. The way that the DSM-5 was constructed means that the overall conceptual framework of the classification only barely constrains the sets of diagnostic criteria it contains.

  17. Hierarchical rank and women's organizational mobility: glass ceilings in corporate law firms.

    PubMed

    Gorman, Elizabeth H; Kmec, Julie A

    2009-03-01

    This article revives the debate over whether women's upward mobility prospects decline as they climb organizational hierarchies. Although this proposition is a core element of the "glass ceiling" metaphor, it has failed to gain strong support in previous research. The article establishes a firm theoretical foundation for expecting an increasing female disadvantage, with an eye toward defining the scope conditions and extending the model to upper-level external hires. The approach is illustrated in an empirical setting that meets the proposed scope conditions: corporate law firms in the United States. Results confirm that in this setting, the female mobility disadvantage is greater at higher organizational levels in the case of internal promotions, but not in the case of external hires.

  18. The interaction of MnH(X 7Σ+) with He: Ab initio potential energy surface and bound states

    NASA Astrophysics Data System (ADS)

    Turpin, Florence; Halvick, Philippe; Stoecklin, Thierry

    2010-06-01

    The potential energy surface of the ground state of the He-MnH(X ⁷Σ⁺) van der Waals complex is presented. Within the supermolecular approach of intermolecular energy calculations, a grid of ab initio points was computed at the multireference configuration interaction level using the aug-cc-pVQZ basis set for helium and hydrogen and the relativistic aug-cc-pVQZ-DK basis set for manganese. The potential energy surface was then fitted to a global analytical form whose main features are discussed. As a first application of this potential energy surface, we present accurate calculations of bound energy levels of the ³He-MnH and ⁴He-MnH complexes.
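
    The "fit a grid of ab initio points to a global analytical form" step can be illustrated with a one-dimensional stand-in: a simple exponential-repulsion-plus-dispersion model fitted to synthetic interaction energies. Neither the functional form nor the numbers correspond to the actual He-MnH surface.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(R, A, beta, C6):
    """Crude exponential-repulsion-plus-dispersion radial cut through a vdW surface."""
    return A * np.exp(-beta * R) - C6 / R**6

R = np.linspace(4.0, 10.0, 15)                                # distance (bohr)
rng = np.random.default_rng(0)
E = model(R, 2.0e6, 2.0, 3.0e5) + rng.normal(0, 0.2, R.size)  # synthetic "ab initio" energies

popt, _ = curve_fit(model, R, E, p0=(1.0e6, 1.8, 1.0e5), maxfev=10000)
A, beta, C6 = popt
print(f"fitted A = {A:.3g}, beta = {beta:.3g}, C6 = {C6:.3g}")
```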

  19. The interaction of MnH(X 7Sigma+) with He: ab initio potential energy surface and bound states.

    PubMed

    Turpin, Florence; Halvick, Philippe; Stoecklin, Thierry

    2010-06-07

    The potential energy surface of the ground state of the He-MnH(X (7)Sigma(+)) van der Waals complex is presented. Within the supermolecular approach of intermolecular energy calculations, a grid of ab initio points was computed at the multireference configuration interaction level using the aug-cc-pVQZ basis set for helium and hydrogen and the relativistic aug-cc-pVQZ-DK basis set for manganese. The potential energy surface was then fitted to a global analytical form whose main features are discussed. As a first application of this potential energy surface, we present accurate calculations of bound energy levels of the (3)He-MnH and (4)He-MnH complexes.

  20. Control mechanisms for stochastic biochemical systems via computation of reachable sets.

    PubMed

    Lakatos, Eszter; Stumpf, Michael P H

    2017-08-01

    Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters.
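
    As a crude stand-in for the reachable-set computation, the sketch below samples an uncertain parameter box for a birth-death model of mRNA, whose stationary copy-number distribution is Poisson with mean (and variance) k/g, and reports the range of reachable means. The rate intervals are invented, and sampling only approximates the formal bounds computed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
k = rng.uniform(2.0, 5.0, size=10_000)    # uncertain mRNA production rate (1/min)
g = rng.uniform(0.1, 0.3, size=10_000)    # uncertain degradation rate (1/min)

mean = k / g                              # stationary mean = variance for this Poisson motif
print(f"reachable mean copy number: [{mean.min():.1f}, {mean.max():.1f}]")
print("(the variance spans the same interval for this simple birth-death motif)")
```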

  1. Control mechanisms for stochastic biochemical systems via computation of reachable sets

    PubMed Central

    Lakatos, Eszter

    2017-01-01

    Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters. PMID:28878957

  2. 'Seed + expand': a general methodology for detecting publication oeuvres of individual researchers.

    PubMed

    Reijnhoudt, Linda; Costas, Rodrigo; Noyons, Ed; Börner, Katy; Scharnhorst, Andrea

    2014-01-01

    The study of science at the individual scholar level requires the disambiguation of author names. The creation of authors' publication oeuvres involves matching the list of unique author names to names used in publication databases. Despite recent progress in the development of unique author identifiers, e.g., ORCID, VIVO, or DAI, author disambiguation remains a key problem when it comes to large-scale bibliometric analysis using data from multiple databases. This study introduces and tests a new methodology called seed + expand for semi-automatic bibliographic data collection for a given set of individual authors. Specifically, we identify the oeuvre of a set of Dutch full professors during the period 1980-2011. In particular, we combine author records from the Dutch National Research Information System (NARCIS) with publication records from the Web of Science. Starting with an initial list of 8,378 names, we identify 'seed publications' for each author using five different approaches. Subsequently, we 'expand' the set of publications using three different approaches. The different approaches are compared and the resulting oeuvres are evaluated on precision and recall using a 'gold standard' dataset of authors for which verified publications in the period 2001-2010 are available.

  3. Flu Diagnosis System Using Jaccard Index and Rough Set Approaches

    NASA Astrophysics Data System (ADS)

    Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip

    2018-04-01

    The Jaccard index and rough set approaches have frequently been implemented in decision support systems across various application domains. Both approaches are well suited to categorical data analysis. This paper presents the application of set operations in a flu diagnosis system based on these two approaches. Both are built on set operation concepts, namely intersection and subset. The step-by-step procedure for each approach is demonstrated on the flu diagnosis system. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, the rough set approach is used to build decision support rules, which are established through redundant data analysis and the elimination of unclassified elements. A number of data sets are considered to illustrate the step-by-step procedure for each approach. The results show that the rough set approach can be used to support the Jaccard approach in establishing decision support rules. The Jaccard index is the better approach for investigating the worst condition of patients, while patients who definitely or possibly have flu can be identified using the rough set approach. The rules may improve the performance of medical diagnosis systems and make preliminary flu diagnosis easier for inexperienced doctors and patients.
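
    A minimal sketch of the Jaccard step described above: the similarity and dissimilarity between a patient's symptom set and a reference symptom set for flu. The symptom names and the reference set are invented for illustration, and the rough-set rule induction is not reproduced here.

```python
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two symptom sets (0.0 if both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

flu_reference = {"fever", "cough", "sore throat", "body aches", "fatigue"}
patient       = {"fever", "cough", "fatigue", "headache"}

similarity = jaccard(patient, flu_reference)
dissimilarity = 1.0 - similarity
print(f"Jaccard similarity = {similarity:.2f}, dissimilarity = {dissimilarity:.2f}")
# a rough-set step would then turn many such labelled cases into decision support rules
```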

  4. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  5. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor objects among them. In level set methods, the calculation of the parameters is a challenging task; in the proposed method, the different parameters are calculated automatically for different types of images. The basic thresholding value is updated and adjusted automatically for different MR images and is used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method.
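
    For orientation, the sketch below computes a signed pressure function of the classic region-mean form; the paper's own SPF additionally uses local statistics and an automatically adjusted threshold, which are not reproduced here. The toy image is a stand-in for an MR slice.

```python
import numpy as np

def spf(image, phi):
    """Classic region-mean SPF, normalised to [-1, 1]; phi > 0 marks the inside."""
    c1 = image[phi > 0].mean()      # mean intensity inside the current contour
    c2 = image[phi <= 0].mean()     # mean intensity outside
    num = image - (c1 + c2) / 2.0
    return num / np.abs(num).max()

# toy image: bright square ("tumor") on a dark background
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
phi0 = -np.ones_like(img)
phi0[16:44, 16:44] = 1.0            # initial contour: a box around the bright region

s = spf(img, phi0)
print("SPF range:", round(s.min(), 2), "to", round(s.max(), 2))
# in the full method this SPF scales the level-set speed so the contour
# slows and stops near weak or blurred edges
```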

  6. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  7. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  8. An Integrated Approach to Exploration Launch Office Requirements Development

    NASA Technical Reports Server (NTRS)

    Holladay, Jon B.; Langford, Gary

    2006-01-01

    The proposed paper will focus on the Project Management and Systems Engineering approach utilized to develop a set of both integrated and cohesive requirements for the Exploration Launch Office within the Constellation Program. A summary of the programmatic drivers that influenced the approach, along with details of the resulting implementation, will be discussed, as well as metrics evaluating the efficiency and accuracy of the various requirements development activities. Requirements development activities will focus on the procedures utilized to ensure that technical content was valid and mature in preparation for the Crew Launch Vehicle and Constellation System's Requirements Reviews. This discussion will begin at initial requirements development during the Exploration Systems Architecture Study and progress through formal development of the program structure. Specific emphasis will be given to development and validation of the requirements. This discussion will focus on approaches to engage the appropriate requirement owners (or customers), the project infrastructure utilized to emphasize proper integration, and finally the procedure to technically mature, verify and validate the requirements. Examples of requirements being implemented on the Launch Vehicle (systems, interfaces, test & verification) will be utilized to demonstrate the various processes and also provide a top-level understanding of the launch vehicle(s) performance goals. Details may also be provided on the approaches for verification, which range from typical aerospace hardware development (qualification/acceptance) through flight certification (flight test, etc.). The primary intent of this paper is to provide a demonstrated procedure for the development of a mature, effective, integrated set of requirements on a complex system, which also has the added intricacies of both heritage and new hardware development integration. An ancillary focus of the paper will be a discussion of test and verification approaches along with top-level systems/elements performance capabilities.

  9. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    PubMed

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  10. Besides Precision & Recall: Exploring Alternative Approaches to Evaluating an Automatic Indexing Tool for MEDLINE

    PubMed Central

    Névéol, Aurélie; Zeng, Kelly; Bodenreider, Olivier

    2006-01-01

    Objective: This paper explores alternative approaches for the evaluation of an automatic indexing tool for MEDLINE, complementing the traditional precision and recall method. Materials and methods: The performance of MTI, the Medical Text Indexer used at NLM to produce MeSH recommendations for biomedical journal articles, is evaluated on a random set of MEDLINE citations. The evaluation examines semantic similarity at the term level (indexing terms). In addition, the documents retrieved by queries resulting from MTI index terms for a given document are compared to the PubMed related citations for this document. Results: Semantic similarity scores between sets of index terms are higher than the corresponding Dice similarity scores. Overall, 75% of the original documents and 58% of the top ten related citations are retrieved by queries based on the automatic indexing. Conclusions: The alternative measures studied in this paper confirm previous findings and may be used to select particular documents from the test set for a more thorough analysis. PMID:17238409

  11. Besides precision & recall: exploring alternative approaches to evaluating an automatic indexing tool for MEDLINE.

    PubMed

    Neveol, Aurélie; Zeng, Kelly; Bodenreider, Olivier

    2006-01-01

    This paper explores alternative approaches for the evaluation of an automatic indexing tool for MEDLINE, complementing the traditional precision and recall method. The performance of MTI, the Medical Text Indexer used at NLM to produce MeSH recommendations for biomedical journal articles, is evaluated on a random set of MEDLINE citations. The evaluation examines semantic similarity at the term level (indexing terms). In addition, the documents retrieved by queries resulting from MTI index terms for a given document are compared to the PubMed related citations for this document. Semantic similarity scores between sets of index terms are higher than the corresponding Dice similarity scores. Overall, 75% of the original documents and 58% of the top ten related citations are retrieved by queries based on the automatic indexing. The alternative measures studied in this paper confirm previous findings and may be used to select particular documents from the test set for a more thorough analysis.
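
    Both records above contrast semantic similarity with Dice similarity between sets of indexing terms. The Dice baseline is simple to compute; the sketch below uses made-up MeSH terms purely for illustration.

        def dice(a, b):
            """Dice coefficient between two sets of indexing terms."""
            a, b = set(a), set(b)
            return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

        manual = {"Humans", "Neoplasms", "Risk Factors"}   # hypothetical manual indexing
        mti    = {"Humans", "Neoplasms", "Smoking"}        # hypothetical MTI recommendations
        print(dice(manual, mti))   # 0.666...  exact-match overlap, ignores near-synonyms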

  12. The Potentials of Fas Receptors and Ligands in Monitoring HIV-1 Disease in Children in Yaoundé, Cameroon.

    PubMed

    Ikomey, G; Assoumou, M-C Okomo; Atashili, J; Mesembe, M; Mukwele, B; Lyonga, E; Eyoh, A; Kafando, A; Ndumbe, P M

    2016-09-01

    Difficulties in systematically monitoring HIV viral load in resource-limited settings prompt the search for alternate approaches. The authors aimed to assess the correlation of the plasma levels of soluble forms of Fas receptors (Fas) and Fas ligands (FasL) with standard indicators of HIV disease progression in children. Twenty-two HIV-1-positive children were enrolled in Yaounde. CD4 counts, CD4% counts, plasma levels of Fas, FasL, and HIV-1 RNA levels were assayed. The correlation coefficients (P values) between FasL levels and each of HIV-1 viral load, CD4 counts, and CD4% were, respectively, .56 (.01), -.29 (.18), and .30 (.18). On the other hand, the respective correlation coefficients (P values) with Fas levels were .12 (.60), -.30 (.18), and -.29 (.19). The significant correlation between levels of HIV-1 viral load and FasL suggests that the latter needs to be further studied as a potential biomarker to monitor HIV-1 disease progression in children in resource-limited settings. © The Author(s) 2013.

  13. Using risk adjustment approaches in child welfare performance measurement: Applications and insights from health and mental health settings

    PubMed Central

    Raghavan, Ramesh

    2014-01-01

    Federal policymaking in the last decade has dramatically expanded performance measurement within child welfare systems, and states are currently being fiscally penalized for poor performance on defined outcomes. However, in contrast to performance measurement in health settings, current policy holds child welfare systems solely responsible for meeting outcomes, largely without taking into account the effects of factors at the level of the child, and his or her social ecology, that might undermine the performance of child welfare agencies. Appropriate measurement of performance is predicated upon the ability to disentangle individual, as opposed to organizational, determinants of outcomes, which is the goal of risk adjustment methodologies. This review briefly conceptualizes and examines risk adjustment approaches in health and child welfare, suggests approaches to expanding its use to appropriately measure the performance of child welfare agencies, and highlights research gaps that diminish the appropriate use of risk adjustment approaches – and which consequently suggest the need for caution – in policymaking around performance measurement of child welfare agencies. PMID:25253917

  14. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
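
    For readers unfamiliar with the underlying machinery, the sketch below trains a plain self-organizing map on document vectors; it omits the growing hierarchical (GHSOM) structure and the ranked centroid projection described above, and the grid size, learning rate and neighbourhood schedule are illustrative assumptions.

        import numpy as np

        def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
            """Train a small rectangular SOM; returns the codebook (grid_h, grid_w, dim)."""
            rng = np.random.default_rng(seed)
            h, w = grid
            codebook = rng.random((h, w, data.shape[1]))
            yy, xx = np.mgrid[0:h, 0:w]
            for t in range(iters):
                x = data[rng.integers(len(data))]
                d = ((codebook - x) ** 2).sum(axis=2)            # distance to every unit
                bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
                frac = t / iters                                 # decaying schedules
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                codebook += lr * g[..., None] * (x - codebook)   # neighbourhood update
            return codebook

        # data: e.g. tf-idf document vectors (n_docs, n_terms); each document can then
        # be projected to the map position of its best-matching unit for visualization.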

  15. Review and evaluation of recent developments in Melick inlet dynamic flow distortion prediction and computer program documentation and user's manual estimating maximum instantaneous inlet flow distortion from steady-state total pressure measurements with full, limited, or no dynamic data

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Dennon, S. R.

    1986-01-01

    A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach that requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments that lie along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.

  16. Environmental Learning Experiences: Bio-Physical, Senior High School.

    ERIC Educational Resources Information Center

    Junglas, Mary R.; And Others

    This environmental education curriculum guide was developed for teacher use at the senior high school level. Although the guide deals with the bio-physical aspects of the environment, it is designed to encourage an integration of the disciplines into an inter-disciplinary approach. The volume consists of a set of ideas, activities, and opinions…

  17. Environmental Learning Experiences: Socio-Cultural, Junior High School.

    ERIC Educational Resources Information Center

    Junglas, Mary R.; And Others

    This environmental education curriculum guide was developed for teacher use at the junior high school level. Although the guide deals with the socio-cultural aspects of the environment, it is designed to encourage an integration of the disciplines into an inter-disciplinary approach. The volume consists of a set of ideas, activities, and opinions…

  18. A School-Wide Approach to Student-Led Conferences: A Practitioner's Guide.

    ERIC Educational Resources Information Center

    Kinney, Patti; Munroe, Mary Beth; Sessions, Pam

    Noting that the benefits of student-led conferences align well with practices recognized as developmentally appropriate for the middle school years, this book provides a step-by-step guide to implementing student-led conferences at the middle school level. The chapters are: (1) "Setting the Stage," presenting the rationale for student-led…

  19. Goal Attainment Scaling as an Outcome Measure in Randomized Controlled Trials of Psychosocial Interventions in Autism

    ERIC Educational Resources Information Center

    Ruble, Lisa; McGrew, John H.; Toland, Michael D.

    2012-01-01

    Goal attainment scaling (GAS) holds promise as an idiographic approach for measuring outcomes of psychosocial interventions in community settings. GAS has been criticized for untested assumptions of scaling level (i.e., interval or ordinal), inter-individual equivalence and comparability, and reliability of coding across different behavioral…

  20. An Overview of the Environmental Knowledge System for Elementary School Students

    ERIC Educational Resources Information Center

    Xuehua, Zhang

    2004-01-01

    Environmental education should set different objectives for different learning subjects. At the elementary school level, the primary goal is to establish environmental awareness so that students can perceptually appreciate and comprehend how rich and colorful the environment is. In this article, the author discusses a systematic approach that can…

  1. Applying the Varieties of Capitalism Approach to Higher Education: Comparing the Internationalisation of German and British Universities

    ERIC Educational Resources Information Center

    Graf, Lukas

    2009-01-01

    In recent years, the global market for higher education has expanded rapidly, while internationalisation strategies have been developed at university, national and European levels to increase the competitiveness of higher education institutions. This article asks how institutional settings prevailing in national models of capitalism motivate…

  2. Dialogic Feedback as Divergent Assessment for Learning: An Ecological Approach to Teacher Professional Development

    ERIC Educational Resources Information Center

    Charteris, Jennifer

    2016-01-01

    Neoliberal policy objectives perpetuate an audit culture at both school and system levels. The associated focus on performativity and accountability can result in reductive and procedural interpretations of classroom assessment for learning (AfL) practices. Set in a New Zealand AfL professional development context, this research takes an…

  3. The Substitution Augmentation Modification Redefinition (SAMR) Model: A Critical Review and Suggestions for Its Use

    ERIC Educational Resources Information Center

    Hamilton, Erica R.; Rosenberg, Joshua M.; Akcaoglu, Mete

    2016-01-01

    The Substitution, Augmentation, Modification, and Redefinition (SAMR) model is a four-level, taxonomy-based approach for selecting, using, and evaluating technology in K-12 settings (Puentedura 2006). Despite its increasing popularity among practitioners, the SAMR model is not currently represented in the extant literature. To focus the ongoing…

  4. Ecological Approaches to Transition Planning for Students with Autism and Asperger's Syndrome

    ERIC Educational Resources Information Center

    Dente, Claire L.; Parkinson Coles, Kallie

    2012-01-01

    This article presents a compelling case for the increased role of social workers in work with individuals with autism and Asperger's syndrome in secondary school settings, specifically in transition planning for postsecondary educational pursuits. Social work education prepares social workers to address micro, mezzo, and macro levels of practice…

  5. Clinical Report: Helping Habitual Smokers Using Flooding and Hypnotic Desensitization Technique.

    ERIC Educational Resources Information Center

    Powell, Douglas H.

    Most research in smoking cessation has shown no intervention clearly superior or successful. Of those who return to smoking after abstaining, a subgroup includes those who do so incrementally, eventually reaching their former level. An approach aimed at this subgroup, originally used in a group setting, involves intensifying the desire to smoke…

  6. Building Relevant Leaders: Identifying the Development Needs of the Modern Construction Leader

    ERIC Educational Resources Information Center

    Shands, Mike

    2014-01-01

    "Do senior level construction leaders possess a common set of leadership development needs that can be addressed to better prepare them to lead and retain the Millennial workforce?" As older generations begin to retire in-mass, many organizations are reevaluating their approach to employee recruitment, performance, and retention. This…

  7. Teaching Home Economics Content Material in an Individualized Reading Skills Laboratory.

    ERIC Educational Resources Information Center

    Comerford, Linnie Sue

    Eighth grade students whose reading achievement scores fell between second and fourth grade level were given an individualized self-concept approach to reading instruction in home economics. Causes for their reading difficulties were identified as lack of interest in school, no set goals, poor attitudes, poor attendance and suspensions, and…

  8. Considering Transgender People in Education: A Gender-Complex Approach

    ERIC Educational Resources Information Center

    Rands, Kathleen E.

    2009-01-01

    Schools serve as a setting in which students come to understand gender, but transgender students (those who transgress societal gender norms) are largely left out of discussions of education. The high level of harassment that transgender students face poses sizable obstacles to school success. If the field of education is committed to equity and…

  9. Understanding Cellular Respiration in Terms of Matter & Energy within Ecosystems

    ERIC Educational Resources Information Center

    White, Joshua S.; Maskiewicz, April C.

    2014-01-01

    Using a design-based research approach, we developed a data-rich problem (DRP) set to improve student understanding of cellular respiration at the ecosystem level. The problem tasks engage students in data analysis to develop biological explanations. Several of the tasks and their implementation are described. Quantitative results suggest that…

  10. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

    Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later as/if time permits in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well, better than classifiers, on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
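
    A minimal sketch of an ordinal classifier cascade of the kind compared above: one binary SVM per threshold "priority > k", with the predicted level equal to one plus the number of thresholds exceeded. The exact cascade used in the study may differ; the class name, feature layout and kernel choice here are assumptions.

        import numpy as np
        from sklearn.svm import SVC

        class OrdinalSVMCascade:
            """Binary SVMs for the ordered thresholds 'priority > k' (k = 1..K-1);
            the predicted level is 1 + the number of thresholds exceeded."""
            def __init__(self, levels, **svm_kwargs):
                self.levels = levels
                self.models = [SVC(**svm_kwargs) for _ in range(levels - 1)]

            def fit(self, X, y):
                for k, m in enumerate(self.models, start=1):
                    m.fit(X, (y > k).astype(int))    # one binary problem per threshold
                return self

            def predict(self, X):
                votes = np.stack([m.predict(X) for m in self.models])   # (K-1, n)
                return 1 + votes.sum(axis=0)

        # X: feature vectors for emails (sender, thread, content features); y in {1..5}
        # model = OrdinalSVMCascade(levels=5, kernel="linear").fit(X_train, y_train)

    One known caveat of threshold cascades is that the binary predictions need not be monotone across thresholds; summing the positive votes, as above, is one simple way to resolve such inconsistencies.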

  11. Creating windows of opportunity for policy change: Incorporating evidence into decentralized planning in Kenya.

    PubMed Central

    Ashford, Lori S.; Smith, Rhonda R.; De Souza, Roger-Mark; Fikree, Fariyal F.; Yinger, Nancy V.

    2006-01-01

    PROBLEM: Because researchers and policy-makers work in different spheres, policy decisions in the health arena are often not based on available scientific evidence. APPROACH: We describe a model that illustrates the policy process and how to work strategically to translate knowledge into policy actions. Several types of activity--agenda-setting, coalition building and policy learning--together can create a window of opportunity for policy change. LOCAL SETTING: Activities were undertaken as part of the Kenyan Ministry of Health's new decentralized planning process. The objective was to ensure that the results of a national assessment of health services were used in the preparation of district-level health plans. RELEVANT CHANGES: Following the intervention, 70 district-level, evidence-based work plans were developed and approved by the Kenyan Ministry of Health. LESSONS LEARNED: Substantial investment and effort are needed to bring stakeholders together to work towards policy change. More in-depth evaluation of these efforts can aid understanding of how systematic approaches to policy change can be replicated elsewhere. PMID:16917657

  12. Inverse association linking serum levels of potential antioxidant vitamins with C-reactive protein levels using a novel analytical approach.

    PubMed

    Cheng, Hui G; Alshaarawy, Omayma; Cantave, Marven D; Anthony, James C

    2016-10-01

    Exposures to antioxidants (AO) are associated with levels of C-reactive protein (CRP), but the pattern of evidence is mixed, due in part to studying each potential AO, one at a time, when multiple AO exposures might affect CRP levels. By studying multiple AO via a composite indicator approach, we estimate the degree to which serum CRP level is associated with serum AO level. Standardised field survey protocols for the US National Health and Nutrition Examination Survey (NHANES) 2003-2006 yielded nationally representative cross-sectional samples of adults aged 20 years and older (n 8841). NHANES latex-enhanced nephelometry quantified serum CRP levels. Liquid chromatography quantified serum concentrations of vitamins A, E and C and carotenoids. Using structural equations, we regressed CRP level on AO levels, and derived a summary estimate for a composite of these potential antioxidants (CPA), with covariates held constant. The association linking CPA with CRP was inverse, stronger for slightly elevated CRP (1·8≤CRP<10 mg/l; slope= -1·08; 95 % CI -1·39, -0·77) and weaker for highly elevated CRP (≥10 mg/l; slope= -0·52; 95 % CI -0·68, -0·35), with little change when covariates were added. Vitamins A and C, as well as lutein+zeaxanthin, were prominent contributors to the composite. In these cross-sectional data studied via a composite indicator approach, the CPA level and the CRP level were inversely related. The stage is set for more confirmatory longitudinal or intervention research on multiple vitamins. The composite indicator approach might be most useful in epidemiology when several exposure constructs are too weakly inter-correlated to be studied via formal measurement models for underlying latent dimensions.
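
    The study above builds a composite of potential antioxidants within a structural equation model. The sketch below is a much cruder stand-in (z-scored serum measures averaged into a single composite and regressed against log CRP); the DataFrame df, its column names and the covariates are hypothetical.

        import numpy as np
        import statsmodels.api as sm

        # 'df' is a hypothetical pandas DataFrame holding NHANES-style serum measures
        # (the columns below are stand-ins), covariates, and CRP in mg/l.
        cols = ["vit_a", "vit_c", "vit_e", "lutein_zeaxanthin"]
        z = (df[cols] - df[cols].mean()) / df[cols].std()   # z-score each antioxidant
        df["composite_ao"] = z.mean(axis=1)                 # crude composite exposure

        X = sm.add_constant(df[["composite_ao", "age", "sex"]])   # sex coded 0/1 here
        fit = sm.OLS(np.log(df["crp"]), X).fit()
        print(fit.params["composite_ao"])   # a negative slope indicates an inverse association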

  13. Applying network theory to animal movements to identify properties of landscape space use.

    PubMed

    Bastille-Rousseau, Guillaume; Douglas-Hamilton, Iain; Blake, Stephen; Northrup, Joseph M; Wittemyer, George

    2018-04-01

    Network (graph) theory is a popular analytical framework to characterize the structure and dynamics among discrete objects and is particularly effective at identifying critical hubs and patterns of connectivity. The identification of such attributes is a fundamental objective of animal movement research, yet network theory has rarely been applied directly to animal relocation data. We develop an approach that allows the analysis of movement data using network theory by defining occupied pixels as nodes and connections among these pixels as edges. We first quantify node-level (local) metrics and graph-level (system) metrics on simulated movement trajectories to assess the ability of these metrics to pull out known properties in movement paths. We then apply our framework to empirical data from African elephants (Loxodonta africana), giant Galapagos tortoises (Chelonoidis spp.), and mule deer (Odocoileus hemionus). Our results indicate that certain node-level metrics, namely degree, weight, and betweenness, perform well in capturing local patterns of space use, such as the definition of core areas and paths used for inter-patch movement. These metrics were generally applicable across data sets, indicating their robustness to assumptions structuring analysis or strategies of movement. Other metrics capture local patterns effectively, but were sensitive to specified graph properties, indicating case-specific applications. Our analysis indicates that graph-level metrics are unlikely to outperform other approaches for the categorization of general movement strategies (central place foraging, migration, nomadism). By identifying critical nodes, our approach provides a robust quantitative framework to identify local properties of space use that can be used to evaluate the effect of the loss of specific nodes on range-wide connectivity. Our network approach is intuitive, and can be implemented across imperfectly sampled or large-scale data sets efficiently, providing a framework for conservationists to analyze movement data. Functions created for the analyses are available within the R package moveNT. © 2018 by the Ecological Society of America.
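
    A minimal sketch of the pixel-as-node construction described above, assuming a simple 2-D relocation track: occupied grid cells become nodes, consecutive moves become weighted edges, and degree and betweenness then flag heavily used cells and corridor cells. The grid size and the random-walk test track are assumptions; the authors' moveNT package implements the full method in R.

        import numpy as np
        import networkx as nx

        def movement_graph(xy, cell=1.0):
            """Build a graph from a relocation track: occupied grid cells are nodes and
            consecutive moves between cells are (weighted) edges."""
            cells = [tuple(np.floor(p / cell).astype(int)) for p in np.asarray(xy, float)]
            G = nx.Graph()
            for a, b in zip(cells[:-1], cells[1:]):
                if a == b:
                    continue
                w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
                G.add_edge(a, b, weight=w)
            return G

        track = np.cumsum(np.random.default_rng(1).normal(size=(1000, 2)), axis=0)
        G = movement_graph(track, cell=2.0)
        deg = dict(G.degree(weight="weight"))                 # local intensity of use
        btw = nx.betweenness_centrality(G)                    # cells bridging movement corridors
        core = sorted(deg, key=deg.get, reverse=True)[:10]    # candidate core-area cells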

  14. Assessment of powder blend uniformity: Comparison of real-time NIR blend monitoring with stratified sampling in combination with HPLC and at-line NIR Chemical Imaging.

    PubMed

    Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele

    2015-11-01

    The scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring, which meets the requirements in early development of pharmaceutical products, and (2) to compare the prediction performance of this approach with the results obtained from stratified sampling using a sample thief in combination with off-line high pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using the spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced", i.e., API- and time-saving, approach (35% reduction of the API amount) based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end point determination. Differences in both mean API concentration and RSD values could be attributed to differences in effective sample size and thief sampling errors. This conclusion was supported by HPLC and NIRCI analysis of tablets manufactured from powder blends after different blending times. In summary, the study clearly demonstrates the ability to develop efficient and robust quantitative calibrations for real-time NIR powder blend monitoring with a reduced set of powder blends while avoiding any bias caused by physical sampling. Copyright © 2015 Elsevier B.V. All rights reserved.
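
    A minimal sketch of the kind of PLS calibration that underlies such in-line NIR blend monitoring, assuming pre-treated spectra and reference API concentrations are available; the number of latent variables, the cross-validation scheme and the variable names are assumptions, not the calibration settings used in the study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # X: NIR spectra (n_samples, n_wavelengths), e.g. after SNV/derivative pretreatment
        # y: reference API concentration (% w/w) of the calibration blends
        def fit_blend_calibration(X, y, n_components=3):
            pls = PLSRegression(n_components=n_components).fit(X, y)
            y_cv = cross_val_predict(pls, X, y, cv=5)
            rmsecv = float(np.sqrt(np.mean((np.ravel(y_cv) - np.ravel(y)) ** 2)))
            return pls, rmsecv

        # pls, rmsecv = fit_blend_calibration(spectra, api_conc)
        # api_pred = pls.predict(new_spectra)   # real-time prediction during blending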

  15. Targeting zero non-attendance in healthcare clinics.

    PubMed

    Chan, Ka C; Chan, David B

    2012-01-01

    Non-attendance represents a significant cost to many health systems, resulting in inefficiency, wasted resources, poorer service delivery and lengthened waiting queues. Past studies have considered extensively the reasons for non-attendance and have generally concluded that the use of reminder systems is effective. Despite this, there will always be a certain level of non-attendance arising from unforeseeable and unpreventable circumstances, such as illness or accidents, leading to unfilled appointments. This paper reviews current approaches to the non-attendance problem, and presents a high-level approach to fill last-minute appointments arising out of unforeseeable non-attendance. However, no single approach will work for all clinics and implementation of these ideas must occur at a local level. These approaches include use of social networks, such as Twitter and Facebook, as a communication tool in order to notify prospective patients when last-minute appointments become available. In addition, teleconsultation using video-conferencing technologies would be suitable for certain last-minute appointments where travel time would otherwise be inhibiting. The development of new and innovative technologies and the increasing power of social media mean that zero non-attendance is now an achievable target. We hope that this will lead to more evidence-based evaluations from the implementation of these strategies in various settings at a local level.

  16. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities during a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines generated from the model's results; Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded operator models that comprise MIDAS. Following the initial settings, the model progresses to the first tasks required of the two flight deck operators, the Captain (CA) and the First Officer (FO). The task sets initialize operator-specific settings prior to loading all of the alerts, probes, and other events that occur in the scenario. As a note, CA and FO were the terms used in developing this model, but the CA can also be thought of as the Pilot Flying (PF), while the FO can be considered the Pilot-Not-Flying (PNF) or Pilot Monitoring (PM). As such, the document refers to the operators as PF/CA and PNF/FO, respectively.

  17. Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2014-03-01

    We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G**, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.

  18. Targeted exploration and analysis of large cross-platform human transcriptomic compendia

    PubMed Central

    Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.

    2016-01-01

    We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801

  19. RUBIC identifies driver genes by detecting recurrent DNA copy number breaks

    PubMed Central

    van Dyk, Ewald; Hoogstraat, Marlous; ten Hoeve, Jelle; Reinders, Marcel J. T.; Wessels, Lodewyk F. A.

    2016-01-01

    The frequent recurrence of copy number aberrations across tumour samples is a reliable hallmark of certain cancer driver genes. However, state-of-the-art algorithms for detecting recurrent aberrations fail to detect several known drivers. In this study, we propose RUBIC, an approach that detects recurrent copy number breaks, rather than recurrently amplified or deleted regions. This change of perspective allows for a simplified approach as recursive peak splitting procedures and repeated re-estimation of the background model are avoided. Furthermore, we control the false discovery rate on the level of called regions, rather than at the probe level, as in competing algorithms. We benchmark RUBIC against GISTIC2 (a state-of-the-art approach) and RAIG (a recently proposed approach) on simulated copy number data and on three SNP6 and NGS copy number data sets from TCGA. We show that RUBIC calls more focal recurrent regions and identifies a much larger fraction of known cancer genes. PMID:27396759

  20. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing in detail the algorithm and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance by the most recent CCA model. We conclude that the SFM implementation achieves similar accuracy as the CCA method with less fluctuations in the etch front and requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
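
    To make the level set machinery concrete, the sketch below advances a front with the standard first-order upwind scheme for phi_t + F |grad phi| = 0 on a dense grid; a sparse-field implementation such as the one described above would restrict the update to a thin band of cells around the zero level set. The grid size, speed F and time step are illustrative assumptions.

        import numpy as np

        def evolve_front(phi, F, dx=1.0, dt=0.2, steps=100):
            """Advance phi_t + F |grad phi| = 0 with a first-order upwind scheme
            (dense update over the whole grid, for clarity)."""
            for _ in range(steps):
                dxm = (phi - np.roll(phi,  1, axis=1)) / dx   # backward differences
                dxp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward differences
                dym = (phi - np.roll(phi,  1, axis=0)) / dx
                dyp = (np.roll(phi, -1, axis=0) - phi) / dx
                grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                                    np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
                grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                                     np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
                phi = phi - dt * (np.maximum(F, 0) * grad_plus + np.minimum(F, 0) * grad_minus)
            return phi

        # phi0 < 0 inside the etched cavity, > 0 in the substrate; F is the (possibly
        # orientation-dependent) etch rate sampled on the same grid.
        n = 128
        yy, xx = np.mgrid[0:n, 0:n]
        phi0 = np.sqrt((xx - n/2)**2 + (yy - n/2)**2) - 5.0   # small seed opening
        phi = evolve_front(phi0, F=1.0, steps=200)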

  1. Genome-wide gene–environment interaction analysis for asbestos exposure in lung cancer susceptibility

    PubMed Central

    Wei, Qingyi

    2012-01-01

    Asbestos exposure is a known risk factor for lung cancer. Although recent genome-wide association studies (GWASs) have identified some novel loci for lung cancer risk, few addressed genome-wide gene–environment interactions. To determine gene–asbestos interactions in lung cancer risk, we conducted genome-wide gene–environment interaction analyses at the levels of single nucleotide polymorphisms (SNPs), genes and pathways, using our published Texas lung cancer GWAS dataset. This dataset included 317 498 SNPs from 1154 lung cancer cases and 1137 cancer-free controls. The initial SNP-level P-values for interactions between genetic variants and self-reported asbestos exposure were estimated by unconditional logistic regression models with adjustment for age, sex, smoking status and pack-years. The P-value for the most significant SNP rs13383928 was 2.17×10^-6, which did not reach genome-wide statistical significance. Using a versatile gene-based test approach, we found that the top significant gene was C7orf54, located on 7q32.1 (P = 8.90×10^-5). Interestingly, most of the other significant genes were located on 11q13. When we used an improved gene-set-enrichment analysis approach, we found that the Fas signaling pathway and the antigen processing and presentation pathway were most significant (nominal P < 0.001; false discovery rate < 0.05) among 250 pathways containing 17 572 genes. We believe that our analysis is a pilot study that first describes the gene–asbestos interaction in lung cancer risk at the levels of SNPs, genes and pathways. Our findings suggest that immune function regulation-related pathways may be mechanistically involved in asbestos-associated lung cancer risk. Abbreviations: CI, confidence interval; E, environment; FDR, false discovery rate; G, gene; GSEA, gene-set-enrichment analysis; GWAS, genome-wide association studies; i-GSEA, improved gene-set-enrichment analysis approach; OR, odds ratio; SNP, single nucleotide polymorphism. PMID:22637743
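
    The SNP-level interaction test described above amounts to an unconditional logistic regression with a gene-by-exposure product term. A sketch of a single-SNP version is given below; it uses the statsmodels formula interface, and the DataFrame df and its column names are hypothetical.

        import statsmodels.formula.api as smf

        # 'df' is a hypothetical per-subject DataFrame with: 'case' (0/1), 'snp'
        # (0/1/2 minor-allele count), 'asbestos' (0/1 self-reported exposure), and
        # the adjustment covariates.
        def snp_asbestos_interaction_p(df):
            model = smf.logit(
                "case ~ snp * asbestos + age + C(sex) + C(smoking_status) + pack_years",
                data=df).fit(disp=0)
            return model.pvalues["snp:asbestos"]   # P-value of the GxE interaction term

        # Genome-wide, this test is repeated per SNP and the interaction P-values are
        # then summarized at the gene and pathway (GSEA-type) levels.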

  2. Modeling of local sea level rise and its future projection under climate change using regional information through EOF analysis

    NASA Astrophysics Data System (ADS)

    Naren, A.; Maity, Rajib

    2017-12-01

    Sea level rise is one of the manifestations of climate change and may cause a threat to coastal regions. Estimates from global circulation models (GCMs) are either not available at coastal locations, due to their coarse spatial resolution, or not reliable, since the interpolated GCM estimates at coastal locations differ significantly from the actual observations over the historical period. We propose a semi-empirical framework to model local sea level rise (SLR) using the possibly existing relationship between local SLR and regional atmospheric/oceanic variables. The set of input variables, selected mostly on the basis of the literature, bears the signature of both atmospheric and oceanic variables that may have an effect on SLR. The proposed approach offers a method to extract the combined information hidden in the regional fields of atmospheric/oceanic variables for a specific target coastal location. The generality of the approach allows more variables to be included in the input set depending on the geographical location of the coastal station. For demonstration, 14 coastal locations along the Indian coast and islands are considered, together with a set of regional atmospheric and oceanic variables. After development and validation of the model at each coastal location with the historical data, the model is further used for future projection of local SLR up to the year 2100 for three different future emission scenarios represented by representative concentration pathways (RCPs): RCP2.6, RCP4.5, and RCP8.5. The maximum projected SLR is found to vary from 260.65 to 393.16 mm (RCP8.5) by the end of 2100 among the locations considered. The outcome of the proposed approach is expected to be useful in regional coastal management and in developing mitigation strategies in a changing climate.
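
    A minimal sketch of the semi-empirical idea described above, assuming monthly gridded regional fields and a co-located tide-gauge series: EOF analysis (implemented here as PCA) compresses the regional fields into a few principal-component time series, which are then regressed against the local sea level. The mode count and variable layout are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def fit_local_slr(field_stack, local_sl, n_modes=5):
            """field_stack: (n_months, ny, nx) regional atmospheric/oceanic field;
            local_sl: (n_months,) sea level observed at the target coastal station."""
            n, ny, nx = field_stack.shape
            X = field_stack.reshape(n, ny * nx)
            X = X - X.mean(axis=0)                       # work with anomalies
            eof = PCA(n_components=n_modes).fit(X)       # EOF analysis = PCA of the field
            pcs = eof.transform(X)                       # principal-component time series
            reg = LinearRegression().fit(pcs, local_sl)  # semi-empirical link PCs -> local SLR
            return eof, reg

        # For projection, the calibrated EOFs and regression are applied to the GCM
        # fields of an RCP scenario (after removing the calibration-period mean).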

  3. Texture analysis improves level set segmentation of the anterior abdominal wall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.

    2013-12-15

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors' approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors' approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.
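
    A minimal sketch of the texture front end described above: a small Gabor filter bank supplies per-voxel feature vectors, and a plain fuzzy c-means loop produces the soft memberships that would then initialize and guide the level set. Filter frequencies, orientation count, cluster number and the FCM settings are assumptions, not the authors' values.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_theta=4):
            """Stack of Gabor filter magnitudes used as a texture feature vector per pixel."""
            feats = []
            for f in freqs:
                for t in np.linspace(0, np.pi, n_theta, endpoint=False):
                    real, imag = gabor(img, frequency=f, theta=t)
                    feats.append(np.hypot(real, imag))
            return np.stack(feats, axis=-1)             # (H, W, n_features)

        def fuzzy_cmeans(X, c=8, m=2.0, iters=50, seed=0):
            """Plain fuzzy c-means on flattened feature vectors; returns memberships U."""
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))  # (N, c), rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))
                U /= U.sum(axis=1, keepdims=True)
            return U

        # F = gabor_features(ct_slice); U = fuzzy_cmeans(F.reshape(-1, F.shape[-1]))
        # The membership map of the abdominal-wall cluster then guides the level set.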

  4. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize the fuel economy of a parallel hybrid electric vehicle over a target life of the battery. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery. This leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus how much to use the engine and consume fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher-level control with a slow update rate, e.g., once per month, responsible for generating optimal set points for PPM. The set points considered in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower-level control with a high update rate, e.g., once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium-duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. On the other hand, the optimal unconstrained control strategy achieves marginally higher fuel economy, but fails to meet the target battery life.

  5. Collaborative knowledge acquisition for the design of context-aware alert systems

    PubMed Central

    Joffe, Erel; Havakuk, Ofer; Herskovic, Jorge R; Patel, Vimla L

    2012-01-01

    Objective: To present a framework for combining implicit knowledge acquisition from multiple experts with machine learning and to evaluate this framework in the context of anemia alerts. Materials and Methods: Five internal medicine residents reviewed 18 anemia alerts, while 'talking aloud'. They identified features that were reviewed by two or more physicians to determine appropriate alert level, etiology and treatment recommendation. Based on these features, data were extracted from 100 randomly selected anemia cases for a training set and an additional 82 cases for a test set. Two staff internists assigned an alert level, etiology and treatment recommendation before and after reviewing the entire electronic medical record. The training set of 118 cases (100 plus 18) and the test set of 82 cases were explored using RIDOR and JRip algorithms. Results: The feature set was sufficient to assess 93% of anemia cases (intraclass correlations for alert level before and after review of the records by internists 1 and 2 were 0.92 and 0.95, respectively). High-precision classifiers were constructed to identify low-level alerts (precision p=0.87, recall R=0.4), iron deficiency (p=1.0, R=0.73), and anemia associated with kidney disease (p=0.87, R=0.77). Discussion: It was possible to identify low-level alerts and several conditions commonly associated with chronic anemia. This approach may reduce the number of clinically unimportant alerts. The study was limited to anemia alerts. Furthermore, clinicians were aware of the study hypotheses, potentially biasing their evaluation. Conclusion: Implicit knowledge acquisition, collaborative filtering and machine learning were combined automatically to induce clinically meaningful and precise decision rules. PMID:22744961

  6. Artificial Pancreas Device Systems for the Closed-Loop Control of Type 1 Diabetes

    PubMed Central

    Trevitt, Sara; Simpson, Sue; Wood, Annette

    2015-01-01

    Background: Closed-loop artificial pancreas device (APD) systems are externally worn medical devices that are being developed to enable people with type 1 diabetes to regulate their blood glucose levels in a more automated way. The innovative concept of this emerging technology is that hands-free, continuous, glycemic control can be achieved by using digital communication technology and advanced computer algorithms. Methods: A horizon scanning review of this field was conducted using online sources of intelligence to identify systems in development. The systems were classified into subtypes according to their level of automation, the hormonal and glycemic control approaches used, and their research setting. Results: Eighteen closed-loop APD systems were identified. All were being tested in clinical trials prior to potential commercialization. Six were being studied in the home setting, 5 in outpatient settings, and 7 in inpatient settings. It is estimated that 2 systems may become commercially available in the EU by the end of 2016, 1 during 2017, and 2 more in 2018. Conclusions: There are around 18 closed-loop APD systems progressing through early stages of clinical development. Only a few of these are currently in phase 3 trials and in settings that replicate real life. PMID:26589628

  7. Reaching out towards cannabis: approach-bias in heavy cannabis users predicts changes in cannabis use

    PubMed Central

    Cousijn, Janna; Goudriaan, Anna E; Wiers, Reinout W

    2011-01-01

    Aims: Repeated drug exposure can lead to an approach-bias, i.e., the relatively automatically triggered tendency to approach rather than avoid drug-related stimuli. Our main aim was to study this approach-bias in heavy cannabis users with the newly developed cannabis Approach Avoidance Task (cannabis-AAT) and to investigate the predictive relationship between an approach-bias for cannabis-related materials and levels of cannabis use, craving, and the course of cannabis use. Design, settings and participants: Cross-sectional assessment and six-month follow-up in 32 heavy cannabis users and 39 non-using controls. Measurements: Approach and avoidance action-tendencies towards cannabis and neutral images were assessed with the cannabis AAT. During the AAT, participants pulled or pushed a joystick in response to image orientation. To generate an additional sense of approach or avoidance, pulling the joystick increased picture size while pushing decreased it. Craving was measured pre- and post-test with the multi-factorial Marijuana Craving Questionnaire (MCQ). Cannabis use frequencies and levels of dependence were measured at baseline and after a six-month follow-up. Findings: Heavy cannabis users demonstrated an approach-bias for cannabis images, as compared to controls. The approach-bias predicted changes in cannabis use at six-month follow-up. The pre-test MCQ emotionality and expectancy factors were associated negatively with the approach-bias. No effects were found on levels of cannabis dependence. Conclusions: Heavy cannabis users with a strong approach-bias for cannabis are more likely to increase their cannabis use. This approach-bias could be used as a predictor of the course of cannabis use to identify individuals at risk of increasing cannabis use. PMID:21518067

  8. A study of concept-based similarity approaches for recommending program examples

    NASA Astrophysics Data System (ADS)

    Hosseini, Roya; Brusilovsky, Peter

    2017-07-01

    This paper investigates a range of concept-based example recommendation approaches that we developed to provide example-based problem-solving support in the domain of programming. The goal of these approaches is to offer students a set of the most relevant remedial examples when they have trouble solving a code comprehension problem, in which they examine program code to determine its output or the final value of a variable. In this paper, we use the ideas of semantic-level similarity-based linking developed in the area of intelligent hypertext to generate examples for the given problem. To determine the best-performing approach, we explored two groups of similarity approaches for selecting examples: non-structural approaches focusing on examples that are similar to the problem in terms of concept coverage, and structural approaches focusing on examples that are similar to the problem in the structure of the content. We also explored the value of personalized example recommendation based on the student's knowledge levels and the learning goal of the exercise. The paper presents the concept-based similarity approaches that we developed, explains the data collection studies, and reports the results of the comparative analysis. The results of our analysis showed that the personalized structural variant of the cosine similarity approach achieved the best ranking performance.
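
    A minimal sketch of the non-structural, concept-coverage side of the comparison above: problems and examples are represented as concept-count vectors and ranked by cosine similarity, optionally re-weighted by the student's estimated concept knowledge. The vector contents and the weighting rule are assumptions.

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        def recommend(problem_vec, example_vecs, knowledge=None, k=3):
            """Rank examples by concept-vector cosine similarity to the problem; optionally
            weight concepts by (1 - knowledge) so poorly-known concepts matter more."""
            w = (1.0 - knowledge) if knowledge is not None else np.ones_like(problem_vec)
            scores = [cosine(problem_vec * w, e * w) for e in example_vecs]
            return np.argsort(scores)[::-1][:k]

        # Each vector counts the programming concepts (loops, arrays, ...) in a problem
        # or example; 'knowledge' holds the student's estimated mastery of each concept.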

  9. High Class-Imbalance in pre-miRNA Prediction: A Novel Approach Based on deepSOM.

    PubMed

    Stegmayer, Georgina; Yones, Cristian; Kamenetzky, Laura; Milone, Diego H

    2017-01-01

    The computational prediction of novel microRNA within a full genome involves identifying sequences having the highest chance of being a miRNA precursor (pre-miRNA). These sequences are usually named candidates to miRNA. The well-known pre-miRNAs are usually few in comparison with the hundreds of thousands of potential candidates to miRNA that have to be analyzed, which makes this task a high class-imbalance classification problem. The classical way of approaching it has been training a binary classifier in a supervised manner, using well-known pre-miRNAs as the positive class and artificially defining the negative class. However, although the selection of positive labeled examples is straightforward, it is very difficult to build a set of negative examples in order to obtain a good set of training samples for a supervised method. In this work, we propose a novel and effective way of approaching this problem using machine learning, without the definition of negative examples. The proposal is based on clustering unlabeled sequences of a genome together with well-known miRNA precursors for the organism under study, which allows for the quick identification of the best candidates to miRNA as those sequences clustered with known precursors. Furthermore, we propose a deep model to overcome the problem of having very few positive class labels. The known positives are always maintained in the deeper levels as the positive class, while less likely pre-miRNA sequences are filtered level after level. Our approach has been compared with other methods for pre-miRNA prediction in several species, showing effective prediction of novel miRNAs. Additionally, we show that our approach has a lower training time and allows for better graphical navigability and interpretation of the results. A web-demo interface to try deepSOM is available at http://fich.unl.edu.ar/sinc/web-demo/deepsom/.

  10. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performances for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  11. Intersections of Critical Systems Thinking and Community Based Participatory Research: A Learning Organization Example with the Autistic Community

    PubMed Central

    Raymaker, Dora M

    2016-01-01

    Critical systems thinking (CST) and community based participatory research (CBPR) are distinct approaches to inquiry which share a primary commitment to holism and human emancipation, as well as common grounding in critical theory and emancipatory and pragmatic philosophy. This paper explores their intersections and complements on a historical, philosophical, and theoretical level, and then proposes a hybrid approach achieved by applying CBPR's principles and considerations for operationalizing emancipatory practice to traditional systems thinking frameworks and practices. This hybrid approach is illustrated in practice with examples drawn from the implementation of the learning organization model in an action research setting with the Autistic community. Our experience of being able to actively attend to, and continuously equalize, power relations within an organizational framework that otherwise has great potential for reinforcing power inequity suggests CBPR's principles and considerations for operationalizing emancipatory practice could be useful in CST settings, and CST's vocabulary, methods, and clarity around systems thinking concepts could be valuable to CBPR practitioners. PMID:27833398

  12. Intersections of Critical Systems Thinking and Community Based Participatory Research: A Learning Organization Example with the Autistic Community.

    PubMed

    Raymaker, Dora M

    2016-10-01

    Critical systems thinking (CST) and community based participatory research (CBPR) are distinct approaches to inquiry which share a primary commitment to holism and human emancipation, as well as common grounding in critical theory and emancipatory and pragmatic philosophy. This paper explores their intersections and complements on a historical, philosophical, and theoretical level, and then proposes a hybrid approach achieved by applying CBPR's principles and considerations for operationalizing emancipatory practice to traditional systems thinking frameworks and practices. This hybrid approach is illustrated in practice with examples drawn from the implementation of the learning organization model in an action research setting with the Autistic community. Our experience of being able to actively attend to, and continuously equalize, power relations within an organizational framework that otherwise has great potential for reinforcing power inequity suggests CBPR's principles and considerations for operationalizing emancipatory practice could be useful in CST settings, and CST's vocabulary, methods, and clarity around systems thinking concepts could be valuable to CBPR practitioners.

  13. A Qualitative Inquiry Into the Challenges of Medical Education for Retention of General Practitioners in Rural and Underserved Areas of Iran

    PubMed Central

    2016-01-01

    Objectives The retention of general practitioners (GPs) in rural and underserved areas strongly affects the accessibility of healthcare facilities across the country. Education seems to be a critical factor that affects GP retention. Thus, the present study aimed to inquire into the medical education challenges that limit their retention in rural and underserved areas. Methods A qualitative approach was applied for the aim of this study. Data were gathered via 28 semi-structured interviews with experts at different levels of Iran’s health system as well as GPs who either remained in, or refused to remain in, rural settings. Interviews were mainly performed face-to-face, and in some cases via telephone, during 2015, and were then coded and analyzed using a content analysis approach. Results Iran’s medical education faces several challenges, which were categorized into four main themes: student selection, medical students’ perception of their field of study, education setting and approach, and the medical education curriculum. According to experts, these challenges could result in GP graduates being disinterested in practicing in rural and underserved areas. Conclusions The challenges identified could have negative effects on retention. Students’ perception of rural practice could be modified by changing the education setting, approach, and curriculum. These modifications could improve GP retention in rural and underserved areas. PMID:27951631

  14. Automatic Cell Segmentation Using a Shape-Classification Model in Immunohistochemically Stained Cytological Images

    NASA Astrophysics Data System (ADS)

    Shah, Shishir

    This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.
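
    The first part of the joint segmentation-classification step, estimating a multivariate density of low-level image features from training samples and evaluating it as a per-pixel cell likelihood, could look roughly like the following sketch; the feature choice and values are hypothetical, and the ellipse zero-level-set constraint described in the abstract is omitted here.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training features (intensity, local mean, local std) sampled
# from pixels known to lie inside cells.
rng = np.random.default_rng(1)
cell_features = rng.normal(loc=[0.7, 0.65, 0.1], scale=0.05, size=(500, 3))

# Estimate the multivariate density of cell features from the training samples.
mean = cell_features.mean(axis=0)
cov = np.cov(cell_features, rowvar=False)
density = multivariate_normal(mean=mean, cov=cov)

# Evaluate how "cell-like" each pixel of a (toy) feature image is.
h, w = 64, 64
pixel_features = rng.uniform(0, 1, size=(h, w, 3))
likelihood_map = density.pdf(pixel_features.reshape(-1, 3)).reshape(h, w)
print(likelihood_map.shape, likelihood_map.max())
```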

  15. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation for medical images is often challenging because of the objects' complicated anatomical structures, various gray levels, and unclear edges. A coarse-to-fine approach has been applied to the kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated with the level set method, is applied in these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfactory results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
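
    This is not the authors' implementation, but the coarse-to-fine use of the C-V model on a smaller candidate region can be illustrated with the Chan-Vese level set implementation available in scikit-image; the test image, crop coordinates, and parameter values below are arbitrary placeholders.

```python
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(data.camera())

# Coarse-to-fine idea: restrict the C-V model to a smaller candidate ROI
# (here an arbitrary crop) to reduce the computational cost.
roi = image[100:300, 150:350]

# Chan-Vese level set segmentation on the ROI only.
segmentation = chan_vese(roi, mu=0.25, lambda1=1.0, lambda2=1.0,
                         tol=1e-3, dt=0.5, init_level_set="checkerboard")
print(segmentation.shape, segmentation.dtype)  # boolean mask of the ROI
```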

  16. Writing and reading: connections between language by hand and language by eye.

    PubMed

    Berninger, Virginia W; Abbott, Robert D; Abbott, Sylvia P; Graham, Steve; Richards, Todd

    2002-01-01

    Four approaches to the investigation of connections between language by hand and language by eye are described and illustrated with studies from a decade-long research program. In the first approach, multigroup structural equation modeling is applied to reading and writing measures given to typically developing writers to examine unidirectional and bidirectional relationships between specific components of the reading and writing systems. In the second approach, structural equation modeling is applied to a multivariate set of language measures given to children and adults with reading and writing disabilities to examine how the same set of language processes is orchestrated differently to accomplish specific reading or writing goals, and correlations between factors are evaluated to examine the level at which the language-by-hand system and the language-by-eye system communicate most easily. In the third approach, mode of instruction and mode of response are systematically varied in evaluating effectiveness of treating reading disability with and without a writing component. In the fourth approach, functional brain imaging is used to investigate residual spelling problems in students whose problems with word decoding have been remediated. The four approaches support a model in which language by hand and language by eye are separate systems that interact in predictable ways.

  17. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method that utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches pixel-level accuracy at an image resolution of 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract features necessary to improve the MLSPC accuracy to pixel level.
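
    The corner-detection step can be illustrated with the standard (non-adaptive) Harris operator from scikit-image; the test image, thresholds, and window sizes below are placeholders rather than the adaptive variant described in the paper.

```python
from skimage import data, img_as_float
from skimage.feature import corner_harris, corner_peaks

# Stand-in for an MLSPC ortho-image patch or the corresponding aerial patch.
patch = img_as_float(data.checkerboard())

# Harris corner response followed by non-maximum suppression; the paper uses
# an adaptive variant tuned to road-marking vertices.
response = corner_harris(patch, k=0.05, sigma=1.0)
corners = corner_peaks(response, min_distance=5, threshold_rel=0.02)
print(corners[:5])  # (row, col) coordinates of detected corner points
```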

  18. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  19. Using Experience-based Co-design with older patients, their families and staff to improve palliative care experiences in the Emergency Department: A reflective critique on the process and outcomes.

    PubMed

    Blackwell, Rebecca Wright Née; Lowton, Karen; Robert, Glenn; Grudzen, Corita; Grocott, Patricia

    2017-03-01

    Increasing use of emergency departments among older patients with palliative needs has led to the development of several service-level interventions intended to improve care quality. There is little evidence of patient and family involvement in developmental processes, and little is known about the experiences of - and preferences for - palliative care delivery in this setting. Participatory action research seeking to enable collaborative working between patients and staff should enhance the impact of local quality improvement work but has not been widely implemented in such a complex setting. The aim was to critique the feasibility of this methodology as a quality improvement intervention in complex healthcare settings, laying a foundation for future work. The setting was an Emergency Department in a large teaching hospital in the United Kingdom. The study used Experience-based Co-design incorporating: 150 h of nonparticipant observation; semi-structured interviews with 15 staff members about their experiences of palliative care delivery; 5 focus groups with 64 staff members to explore challenges in delivering palliative care; 10 filmed semi-structured interviews with palliative care patients or their family members; and a co-design event involving staff, patients and family members. The study successfully identified quality improvement priorities leading to changes in Emergency Department palliative care processes. Further outputs were the creation of a patient-family-staff experience training DVD to encourage reflective discussion and the identification and application of generic design principles for improving palliative care in the Emergency Department. There were benefits and challenges associated with using Experience-based Co-design in this setting. Benefits included the flexibility of the approach, the high levels of engagement and responsiveness of patients, families and staff, and the impact of using filmed narrative interviews to enhance the 'voice' of seldom heard patients and families. Challenges included high levels of staff turnover during the 19 month project, significant time constraints in the Emergency Department and the ability of older patients and their families to fully participate in the co-design process. Experience-based Co-design is a useful approach for encouraging collaborative working between vulnerable patients, family and staff in complex healthcare environments. The flexibility of the approach allows the specific needs of participants to be accounted for, enabling fuller engagement with those who typically may not be invited to contribute to quality improvement work. Recommendations for future studies in this and similar settings include testing the 'accelerated' form of the approach and experimenting with alternative ways of increasing involvement of patients/families in the co-design phase.

  20. String Scale Gauge Coupling Unification with Vector-Like Exotics and Noncanonical U(1)Y Normalization

    NASA Astrophysics Data System (ADS)

    Barger, V.; Jiang, Jing; Langacker, Paul; Li, Tianjun

    We use a new approach to study string scale gauge coupling unification systematically, allowing both the possibility of noncanonical U(1)Y normalization and the existence of vector-like particles whose quantum numbers are the same as those of the Standard Model (SM) fermions and their Hermitian conjugates and the SM adjoint particles. We first give all the independent sets (Yi) of particles that can be employed to achieve SU(3)C and SU(2)L string scale gauge coupling unification and calculate their masses. Second, for a noncanonical U(1)Y normalization, we obtain string scale SU(3)C ×SU(2)L ×U(1)Y gauge coupling unification by choosing suitable U(1)Y normalizations for each of the Yi sets. Alternatively, for the canonical U(1)Y normalization, we achieve string scale gauge coupling unification by considering suitable combinations of the Yi sets or by introducing additional independent sets (Zi), that do not affect the SU(3)C ×SU(2)L unification at tree level, and then choosing suitable combinations, one from the Yi sets and one from the Zi sets. We also briefly discuss string scale gauge coupling unification in models with higher Kac-Moody levels for SU(2)L or SU(3)C.

  1. Setting the question for inquiry: The effects of whole class vs small group on student achievement in elementary science

    NASA Astrophysics Data System (ADS)

    Cavagnetto, Andy Roy

    This study was conducted to determine the effects of two different student-centered approaches to setting the question for inquiry. The first approach (whole class) consisted of students setting a single question for inquiry, after which students worked in small groups during an investigation phase of the activity, with all groups exploring the same question. The second approach (small group) consisted of each group of students setting a question, resulting in numerous questions being explored per class. A mixed method quasi-experimental design was utilized. Two grade five teachers from a small rural school district in the Midwestern United States participated, each teaching two sections of science (approximately 25 students per section). Results indicate three major findings. Instructional approach (whole class vs. small group) did not affect student achievement in science or language arts. Observational data indicated the actions and skills teachers utilized to implement the approaches were similar. Specifically, the pedagogical skills of dialogical interaction (which was found to be influenced by teacher level of control of learning and teacher content knowledge) and effective rather than efficient use of time were identified as key factors in teachers' progression toward a student-centered, teacher-managed instructional approach. Unit exams along with qualitative and quantitative teacher observation data indicated that these factors do have an impact on student achievement. Specifically, increased dialogical interaction in the form of greater student voice, and increased cognitive demands placed on students by embedding and emphasizing science argument within the student inquiry, corresponded to positive gains in student achievement. Additionally, teachers' perceptions of student abilities were also found to influence professional growth. Finally, allowing students to set the questions for inquiry and design the experiments impacted the classroom environment, as teacher talk changed from giving directions toward scaffolding student thought. These results have implications for professional development and teacher education as they suggest that more time should be spent on challenging teachers to align their pedagogy with how students learn rather than simply providing strategies and lesson plans for teachers to use in the classrooms.

  2. A coupled PFEM-Eulerian approach for the solution of porous FSI problems

    NASA Astrophysics Data System (ADS)

    Larese, A.; Rossi, R.; Oñate, E.; Idelsohn, S. R.

    2012-12-01

    This paper aims to present a coupled solution strategy for the problem of seepage through a rockfill dam taking into account the free-surface flow within the solid as well as in its vicinity. A combination of a Lagrangian model for the structural behavior and an Eulerian approach for the fluid is used. The particle finite element method is adopted for the evaluation of the structural response, whereas an Eulerian fixed-mesh approach is employed for the fluid. The free surface is tracked by the use of a level set technique. The numerical results are validated with experiments on scale models of rockfill dams.

  3. Diagnosis and Management of Acute Concussion.

    PubMed

    McCrea, Michael A; Nelson, Lindsay D; Guskiewicz, Kevin

    2017-05-01

    Over the past 2 decades, there have been major advances in the basic and clinical science of concussion and mild traumatic brain injury. These advances now provide a more evidence-informed approach to the definition, diagnosis, assessment, and management of acute concussion. Standardized clinical tools have been developed and validated for assessment of acute concussion across injury settings (eg, civilian, sport, military). Consensus guidelines now provide guidance regarding injury management and approaches to ensure safe return to activity after acute concussion. This article provides a brief, high-level overview of approaches to best practice in diagnosis, assessment, and management of acute concussion.

  4. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region growing decision process. We present here three new versions of HSeg that incorporate local edge information into the region growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.

  5. Innovative visualization and segmentation approaches for telemedicine

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2014-09-01

    In health care applications, high-quality, large-volume image data are obtained, managed, stored, and communicated through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It is composed of three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi diagram, and boundary-line refinement using the level set model.

  6. Unsupervised hierarchical partitioning of hyperspectral images: application to marine algae identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.

    2015-10-01

    In this paper, a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over the large areas that can be covered by airborne or satellite images. The developed classification approach allows i) a successive partitioning of data into several levels or partitions in which the main classes are first identified, ii) an automatic estimation of the number of classes at each level without any end-user help, iii) a nonsystematic subdivision of all classes of a partition Pj to form a partition Pj+1, and iv) a stable partitioning result of the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
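
    The idea of estimating the number of classes automatically at each level of a top-down partition can be sketched as follows, using k-means and the silhouette score as generic stand-ins for the paper's own partitioning and estimation criteria; the actual method also decides non-systematically which classes of a partition to subdivide, which the toy example below does not reproduce.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_num_classes(X, k_max=8, seed=0):
    """Pick the number of classes automatically via the silhouette score."""
    scores = {}
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get)

# Level 1: identify the main classes of the (toy) pixel spectra.
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(m, 0.3, size=(80, 4)) for m in (0.0, 2.0, 4.0)])
k1 = estimate_num_classes(pixels)
level1 = KMeans(n_clusters=k1, n_init=10, random_state=0).fit_predict(pixels)

# Level 2: subdivide one class of partition P1 to form partition P2.
subset = pixels[level1 == 0]
k2 = estimate_num_classes(subset)
print(k1, k2)
```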

  7. Analyzing the requirements for a robust security criteria and management of multi-level security in the clouds

    NASA Astrophysics Data System (ADS)

    Farroha, Bassam S.; Farroha, Deborah L.

    2011-06-01

    The new corporate approach to efficient processing and storage is migrating from in-house service-center services to the newly coined approach of Cloud Computing. This approach advocates thin clients and providing services by the service provider over time-shared resources. The concept is not new; however, the implementation approach presents a strategic shift in the way organizations provision and manage their IT resources. The requirements on some of the data sets targeted to be run on the cloud vary depending on the data type, originator, user, and confidentiality level. Additionally, the systems that fuse such data would have to deal with classifying the product and clearing the computing resources prior to allowing a new application to be executed. This indicates that we could end up with a multi-level security system that must follow specific rules and send its output only to protected networks and systems in order to avoid data spills or contaminated resources. The paper discusses these requirements and their potential impact on the cloud architecture. Additionally, the paper discusses the unexpected advantages of the cloud framework providing a sophisticated environment for information sharing and data mining.

  8. Sentence-Level Attachment Prediction

    NASA Astrophysics Data System (ADS)

    Albakour, M.-Dyaa; Kruschwitz, Udo; Lucas, Simon

    Attachment prediction is the task of automatically identifying email messages that should contain an attachment. This can be useful to tackle the problem of sending out emails but forgetting to include the relevant attachment (something that happens all too often). A common Information Retrieval (IR) approach in analyzing documents such as emails is to treat the entire document as a bag of words. Here we propose a finer-grained analysis to address the problem. We aim at identifying individual sentences within an email that refer to an attachment. If we detect any such sentence, we predict that the email should have an attachment. Using part of the Enron corpus for evaluation we find that our finer-grained approach outperforms previously reported document-level attachment prediction in similar evaluation settings.
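
    A minimal sketch of the sentence-level idea, assuming a generic bag-of-words sentence classifier rather than the authors' exact features: an email is predicted to need an attachment if any of its sentences is classified as attachment-referring. The toy sentences and the choice of model below are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: sentences labelled 1 if they refer to an attachment.
sentences = [
    "Please find the report attached.",
    "I have attached the slides for tomorrow.",
    "See the enclosed spreadsheet for details.",
    "Let's meet at 3pm to discuss the budget.",
    "Thanks for your help with the review.",
    "The weather has been great this week.",
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vectorizer.fit_transform(sentences), labels)

def email_needs_attachment(email_sentences):
    """Predict an attachment if ANY sentence is classified as attachment-referring."""
    preds = clf.predict(vectorizer.transform(email_sentences))
    return bool(preds.any())

print(email_needs_attachment(["Hi team,", "the agenda is attached below."]))
```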

  9. A methodological approach to identify agro-biodiversity hotspots for priority in situ conservation of plant genetic resources

    PubMed Central

    Pacicco, Luca; Bodesmo, Mara; Torricelli, Renzo

    2018-01-01

    Agro-biodiversity is seriously threatened worldwide and strategies to preserve it are urgently required. We propose here a methodological approach aimed at identifying areas with a high level of agro-biodiversity in which to establish or enhance in situ conservation of plant genetic resources. These areas are identified using three criteria: presence of landrace diversity, presence of wild species, and agro-ecosystem ecological diversity. A Restrictive and an Additive prioritization strategy were applied to the entire Italian territory, resulting in 53 and 197 agro-biodiversity hotspots nationwide, respectively. At present the strategies can easily be applied at a European level and can help develop conservation strategies elsewhere. PMID:29856765

  10. Extrapolation of earth-based solar irradiance measurements to exoatmospheric levels for broad-band and selected absorption-band observations

    NASA Technical Reports Server (NTRS)

    Reagan, John A.; Pilewskie, Peter A.; Scott-Fleming, Ian C.; Herman, Benjamin M.; Ben-David, Avishai

    1987-01-01

    Techniques for extrapolating earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.
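
    A minimal numerical sketch of the extrapolation idea with synthetic data: a standard Langley plot regresses ln(V) against airmass m and takes the intercept at m = 0 as the equivalent exoatmospheric signal, while the modified variant for a water-vapor band regresses against sqrt(m) under the square-root law. The numbers are synthetic, and the second fit below only illustrates the procedure rather than the HALOE/SAGE-2 calibration itself.

```python
import numpy as np

# Synthetic direct-sun observations over a range of airmasses m.
m = np.linspace(1.5, 6.0, 12)
V0_true, tau = 1000.0, 0.25
V = V0_true * np.exp(-tau * m) * (1 + 0.01 * np.random.default_rng(0).normal(size=m.size))

# Standard Langley plot: ln(V) is linear in m; the intercept at m = 0 gives ln(V0),
# the equivalent exoatmospheric signal level.
slope, intercept = np.polyfit(m, np.log(V), 1)
print("estimated V0:", np.exp(intercept))

# Modified Langley plot for a water-vapor absorption band: assuming a square-root
# law, ln(V) is regressed against sqrt(m) instead and extrapolated to m = 0.
slope_w, intercept_w = np.polyfit(np.sqrt(m), np.log(V), 1)
print("square-root-law estimate of V0:", np.exp(intercept_w))
```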

  11. Extrapolation of Earth-based solar irradiance measurements to exoatmospheric levels for broad-band and selected absorption-band observations

    NASA Technical Reports Server (NTRS)

    Reagan, J. A.; Pilewskie, P. A.; Scott-Fleming, I. C.; Hermann, B. M.

    1986-01-01

    Techniques for extrapolating Earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor system being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.

  12. Bayesian Dose-Response Modeling in Sparse Data

    NASA Astrophysics Data System (ADS)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics which reflects the perspective of trial participants. The second level is population-level ethics which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as a possibility. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.

  13. A Positive Behavioral Approach for Aggression in Forensic Psychiatric Settings.

    PubMed

    Tolisano, Peter; Sondik, Tracey M; Dike, Charles C

    2017-03-01

    Aggression toward self and others by complex patients admitted to forensic psychiatric settings is a relatively common yet extremely difficult behavior to treat. Traditional interventions in forensic inpatient settings have historically emphasized control and management over treatment. Research over the past several years has demonstrated the value of behavioral and psychosocial treatment interventions to reduce aggression and to increase prosocial skill development in inpatient forensic populations. Positive behavioral support (PBS) offers a comprehensive approach that incorporates the science of applied behavioral analysis (ABA) in support of patients with challenging behaviors, including aggression and violence. In this article, we describe a PBS model to treat aggression in forensic settings. PBS includes a comprehensive functional assessment, along with four basic elements: ecological strategies, positive programming, focused support strategies, and reactive strategies. Other key components are described, including data collection, staff training, fidelity checks to ensure correct implementation of the plan, and ongoing monitoring and revision of PBS strategies, according to treatment outcomes. Finally, a behavioral consultation team approach within the inpatient forensic setting is recommended, led by an assigned doctoral-level psychologist with specialized knowledge and training in behavioral methods. The behavioral consultation team works directly with the unit treatment team and the identified patient to develop, implement, and track a plan that may extend over several weeks to several months including transition into the community. PBS can offer a positive systemic impact in forensic inpatient settings, such as providing a nonpharmacologic means to address aggression, reducing the incidences of restraint and seclusion, enhancing staff proficiency in managing challenging patient presentations, and reducing recidivism when used as part of the bridge to community re-entry.

  14. Effects of training set selection on pain recognition via facial expressions

    NASA Astrophysics Data System (ADS)

    Shier, Warren A.; Yanushkevich, Svetlana N.

    2016-07-01

    This paper presents an approach to pain expression classification based on Gabor energy filters with Support Vector Machines (SVMs), followed by analyzing the effects of training set variations on the system's classification rate. This approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images, hand labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the subjects' pain intensity level has been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters with SVMs provide results comparable to or better than previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, or changing the number of images per subject, shows that both completely removing and over-representing poor-quality subjects in the training set have little effect on the overall accuracy of the system. This result suggests that poor-quality subjects could be removed from the training set to save offline training time and that SVM is robust not only to outliers in training data, but also to significant amounts of poor-quality data mixed into the training sets.
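
    A rough sketch of a Gabor-energy-plus-SVM pipeline, assuming mean Gabor energy per frequency/orientation pair as the feature vector; the toy images, labels, and parameter values below are stand-ins for the UNBC-McMaster data and the paper's actual filter bank.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gabor
from skimage.transform import resize
from sklearn.svm import SVC

def gabor_energy_features(image, frequencies=(0.1, 0.3), n_orientations=4):
    """Mean Gabor energy for each (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(image, frequency=f, theta=theta)
            feats.append(np.sqrt(real**2 + imag**2).mean())
    return np.array(feats)

# Toy stand-ins for face images from the three pain-intensity groups.
rng = np.random.default_rng(0)
faces = [resize(img_as_float(data.camera()), (64, 64)) + rng.normal(0, s, (64, 64))
         for s in (0.0, 0.1, 0.2) for _ in range(5)]
labels = [g for g in (0, 1, 2) for _ in range(5)]  # no pain / weak / strong

X = np.array([gabor_energy_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```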

  15. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
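
    The regression-tool idea, fitting a model that maps the five observable inputs to the recent transmission proportion on a simulated derivation set, can be sketched as below; the synthetic data, coefficients, and plain linear model are illustrative only and do not reproduce the authors' tool or their individual-based TB model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500  # simulated epidemics in a synthetic "derivation set"

# Five inputs described in the abstract (values here are purely synthetic).
incidence   = rng.uniform(50, 500, n)     # per 100,000 per year
coverage    = rng.uniform(0.3, 1.0, n)    # sampling coverage
duration    = rng.uniform(1, 10, n)       # study duration, years
clustered   = rng.uniform(0.1, 0.8, n)    # clustered proportion of observed cases
clusters_in = rng.uniform(0.2, 1.0, n)    # proportion of observed clusters in sample

X = np.column_stack([incidence, coverage, duration, clustered, clusters_in])
# Synthetic "true" recent transmission proportion, for illustration only.
y = np.clip(0.1 + 0.5 * clustered + 0.1 * coverage + 0.02 * duration
            + rng.normal(0, 0.02, n), 0, 1)

model = LinearRegression().fit(X, y)
print(model.predict([[200, 0.7, 5, 0.4, 0.8]]))
```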

  16. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection

    PubMed Central

    Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499

  17. Nourishing networks: an interprofessional learning model and its application to the Australian rural health workforce.

    PubMed

    Little, F; Brown, L; Grotowski, M; Harris, D

    2012-01-01

    Access to continuing professional development for rural health clinicians requires strategies to overcome barriers associated with finances, travel and a lack of resources. Approaches to providing professional development need to transcend conventional educational methods and consider interprofessional educational opportunities to meet the diverse needs of the rural health workforce. Rural clinicians often work in professional isolation and frequently work collaboratively with clinicians from a range of other health disciplines. Interprofessional learning and practice are therefore important in rural areas, as clinicians working in these settings are often more reliant on each other and require an understanding of each other's roles to provide effective health care. In addition, specialist services are limited in rural areas, with health professionals increasingly required to perform extended roles at an advanced-practice level. A model for interprofessional learning has been developed to attempt to address the barriers related to the delivery of interprofessional education in the rural health setting in Australia. This model demonstrates a flexible approach to interprofessional learning which meets different educational needs across a number of health disciplines, and is tailored to varying levels of expertise. It incorporates three learning approaches: traditional learning, flexible learning and advanced practice. Each of these components of the model is described, and the Nourishing Networks program is provided as an example of the application of the model in a rural setting, utilising 'eating disorders' as the educational topic. Interprofessional learning can be delivered effectively in a rural setting by utilising technology to help bridge the isolation experienced in rural practice. Challenges in delivering the interprofessional learning program included: engaging rural general practitioners, utilising technology and maintaining participant engagement. The use of technology is essential to access a broad group of rural clinicians; however, there are limitations in its use that must be acknowledged. The pilot of the Stepped Interprofessional Rural Learning Model and its application to eating disorders has scope for use in delivering education for other health topics.

  18. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    NASA Astrophysics Data System (ADS)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure that is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
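
    A compact sketch of the byte-level n-gram idea: each author is represented by a simplified profile of the most frequent byte n-grams in their undisputed samples, and a disputed program is attributed to the author whose profile shares the most n-grams with it (an intersection-based similarity). The profile size, value of n, and sample programs below are arbitrary illustrations, not the paper's evaluation setup.

```python
from collections import Counter

def byte_ngram_profile(source_code: bytes, n=3, profile_size=500):
    """Simplified profile: the profile_size most frequent byte-level n-grams."""
    grams = Counter(source_code[i:i + n] for i in range(len(source_code) - n + 1))
    return {g for g, _ in grams.most_common(profile_size)}

def simplified_profile_intersection(profile_a, profile_b):
    """Similarity = size of the intersection of the two simplified profiles."""
    return len(profile_a & profile_b)

def attribute(disputed: bytes, author_samples: dict, n=3):
    """Assign the disputed program to the author with the most similar profile."""
    disputed_profile = byte_ngram_profile(disputed, n)
    scores = {author: simplified_profile_intersection(
                  disputed_profile, byte_ngram_profile(b"".join(samples), n))
              for author, samples in author_samples.items()}
    return max(scores, key=scores.get), scores

samples = {
    "alice": [b"for (int i = 0; i < n; ++i) { sum += a[i]; }\n"],
    "bob":   [b"while(x>0){x--;printf(\"%d\",x);}\n"],
}
disputed = b"for (int j = 0; j < m; ++j) { total += b[j]; }\n"
print(attribute(disputed, samples))
```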

  19. Good on paper: the gap between programme theory and real-world context in Pakistan's Community Midwife programme

    PubMed Central

    Mumtaz, Z; Levay, A; Bhatti, A; Salway, S

    2015-01-01

    Objective To understand why skilled birth attendance—an acknowledged strategy for reducing maternal deaths—has been effective in some settings but is failing in Pakistan and to demonstrate the value of a theory-driven approach to evaluating implementation of maternal healthcare interventions. Design Implementation research was conducted using an institutional ethnographic approach. Setting and population National programme and local community levels in Pakistan. Methods Observations, focus group discussions, and in-depth interviews were conducted with 38 Community Midwives (CMWs), 20 policymakers, 45 healthcare providers and 136 community members. A critical policy document review was conducted. National and local level data were brought together. Main outcomes Alignment of programme theory with real-world practice. Results Data revealed gaps between programme theory, assumptions and reality on the ground. The design of the programme failed to take into account: (1) the incongruity between the role of a midwife and dominant class and gendered norms that devalue such a role; (2) market and consumer behaviour that prevented CMWs from establishing private practices; (3) the complexity of public–private sector cooperation. Uniform deployment policies failed to consider existing provider density and geography. Conclusions Greater attention to programme theory and the ‘real-world’ setting during design of maternal health strategies is needed to achieve consistent results in different contexts. PMID:25315837

  20. Comparison of display enhancement with intelligent decision-aiding

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex; Markert, Wendy J.; Kossack, Merrick

    1992-01-01

    Currently, two main approaches exist for improving the human-machine interface component of a system in order to improve overall system performance: display enhancement and intelligent decision aiding. Each of these two approaches has its own set of advantages and disadvantages, and each introduces its own set of additional performance problems. These characteristics should help identify which types of problem situations and domains are better aided by which type of strategy. The characteristic issues of these two decision-aiding strategies are described. Then differences in expert and novice decision making are described in order to help determine whether a particular strategy may be better for a particular type of user. Finally, research is outlined to compare and contrast the two technologies, as well as to examine the interaction effects introduced by the different skill levels and the different methods for training operators.

  1. Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery

    NASA Astrophysics Data System (ADS)

    Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.

    2017-05-01

    In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
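
    The Poisson-Gaussian noise mixture used to degrade the simulated turbulence imagery can be sketched as below, assuming a clean image scaled to [0, 1]; the photon count and read-noise level are placeholder values rather than the paper's simulation parameters.

```python
import numpy as np

def add_poisson_gaussian_noise(image, photons=500.0, read_noise_sigma=2.0, seed=0):
    """Apply a Poisson-Gaussian noise mixture to a clean image in [0, 1]:
    signal-dependent shot noise plus signal-independent Gaussian read noise."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(image * photons)                     # signal-dependent component
    read = rng.normal(0.0, read_noise_sigma, image.shape)   # additive Gaussian component
    return (shot + read) / photons                          # back to the [0, 1] scale

clean = np.linspace(0, 1, 64 * 64).reshape(64, 64)
noisy = add_poisson_gaussian_noise(clean)
print(noisy.mean(), noisy.std())
```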

  2. Juvenile justice, delinquency, and psychiatry.

    PubMed

    Steiner, H; Cauffman, E

    1998-07-01

    Juvenile delinquency is a serious problem in the United States, and is likely to remain so for many years to come. Because delinquency often is accompanied by high rates of coincidental and causal comorbidities, effective treatment programs for CD youths must use multimodal approaches tailored to each youth's particular set of psychopathologies. Child psychiatrists are uniquely qualified to provide assistance and leadership in the treatment of delinquent youths, as their problems are best captured by a developmental psychopathology model. Knowledge regarding the epidemiology of CD, and of the risk factors associated with it, is excellent, although a thorough understanding of resilience and treatment is still developing. Successful involvement in the treatment of delinquent youth requires that psychiatrists maintain a diverse set of skills, a high level of flexibility in treatment approaches, and a special awareness of the legal parameters governing the rehabilitation of these youngsters.

  3. Using clinical simulation centers to test design interventions: a pilot study of lighting and color modifications.

    PubMed

    Gray, Whitney Austin; Kesten, Karen S; Hurst, Stephen; Day, Tama Duffy; Anderko, Laura

    2012-01-01

    The aim of this pilot study was to test design interventions such as lighting, color, and spatial color patterning on nurses' stress, alertness, and satisfaction, and to provide an example of how clinical simulation centers can be used to conduct research. The application of evidence-based design research in healthcare settings requires a transdisciplinary approach. Integrating approaches from multiple fields in real-life settings often proves time consuming and experimentally difficult. However, forums for collaboration such as clinical simulation centers may offer a solution. In these settings, identical operating and patient rooms are used to deliver simulated patient care scenarios using automated mannequins. Two identical rooms were modified in the clinical simulation center. Nurses spent 30 minutes in each room performing simulated cardiac resuscitation. Subjective measures of nurses' stress, alertness, and satisfaction were collected and compared between settings and across time using matched-pair t-test analysis. Nurses reported feeling less stressed after exposure to the experimental room than nurses who were exposed to the control room (2.22, p = .03). Scores post-session indicated a significant reduction in stress and an increase in alertness after exposure to the experimental room as compared to the control room, with significance levels below .10. (Change in stress scores: 3.44, p = .069); (change in alertness scores: 3.6, p = .071). This study reinforces the use of validated survey tools to measure stress, alertness, and satisfaction. Results support human-centered design approaches by evaluating the effect on nurses in an experimental setting.

  4. A New Structure-Activity Relationship (SAR) Model for Predicting Drug-Induced Liver Injury, Based on Statistical and Expert-Based Structural Alerts

    PubMed Central

    Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio

    2016-01-01

    The prompt identification of chemical molecules with potential effects on liver may help in drug discovery and in raising the levels of protection for human health. Besides in vitro approaches, computational methods in toxicology are drawing attention. We built a structure-activity relationship (SAR) model for evaluating hepatotoxicity. After compiling a data set of 950 compounds using data from the literature, we randomly split it into training (80%) and test sets (20%). We also compiled an external validation set (101 compounds) for evaluating the performance of the model. To extract structural alerts (SAs) related to hepatotoxicity and non-hepatotoxicity we used SARpy, a statistical application that automatically identifies and extracts chemical fragments related to a specific activity. We also applied the chemical grouping approach for manually identifying other SAs. We calculated accuracy, specificity, sensitivity and Matthews correlation coefficient (MCC) on the training, test and external validation sets. Considering the complexity of the endpoint, the model performed well. In the training, test and external validation sets the accuracy was respectively 81, 63, and 68%, specificity 89, 33, and 33%, sensitivity 93, 88, and 80% and MCC 0.63, 0.27, and 0.13. Since it is preferable to overestimate hepatotoxicity rather than not to recognize unsafe compounds, the model's architecture followed a conservative approach. As it was built using human data, it might be applied without any need for extrapolation from other species. This model will be freely available in the VEGA platform. PMID:27920722
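
    For reference, the reported performance measures (accuracy, sensitivity, specificity, and MCC) can be computed from a binary confusion matrix as in this small sketch; the labels below are toy values, not the model's actual predictions.

```python
from sklearn.metrics import confusion_matrix, matthews_corrcoef

# Toy predictions: 1 = hepatotoxic, 0 = non-hepatotoxic.
y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1, 0, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # fraction of hepatotoxic compounds that are caught
specificity = tn / (tn + fp)   # fraction of safe compounds recognized as safe
mcc         = matthews_corrcoef(y_true, y_pred)
print(accuracy, sensitivity, specificity, mcc)
```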

  5. PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets

    PubMed Central

    Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.

    2016-01-01

    Summary Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
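
    The attenuation problem motivating PCAN can be reproduced with a short simulation, assuming a simple Poisson-lognormal toy model: the Pearson correlation of low-count observations is much weaker than the correlation of the underlying natural parameters. The parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Natural parameters (log-means) of two features, strongly correlated.
rho = 0.9
cov = [[1.0, rho], [rho, 1.0]]
log_mu = rng.multivariate_normal([-1.5, -1.5], cov, size=n)  # low counts on average

# Observed counts generated from the Poisson model.
counts = rng.poisson(np.exp(log_mu))

print("correlation of natural parameters:", np.corrcoef(log_mu.T)[0, 1])
print("Pearson correlation of raw counts:", np.corrcoef(counts.T)[0, 1])
# The raw-count correlation is attenuated toward zero, which is the problem
# PCAN addresses by estimating correlation at the natural-parameter level.
```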

  6. PCAN: Probabilistic correlation analysis of two non-normal data sets.

    PubMed

    Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J

    2016-12-01

    Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches.

  7. The Performance Effects of an Ability-Based Approach to Goal Assignment

    ERIC Educational Resources Information Center

    Jeffrey, Scott A.; Schulz, Axel; Webb, Alan

    2012-01-01

    Some organizations have begun to target their goal-setting method more closely to the ability levels of their employees. In this article, we report the results of a laboratory study of 138 undergraduate students, which shows that these "ability-based" goals are more effective at improving performance than a "one goal for all"…

  8. Managing Highway Maintenance: Standards for Maintenance Work, Part 2, Unit 8, Level 2.

    ERIC Educational Resources Information Center

    Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.

    Part of the series "Managing Highway Maintenance," the unit describes the ways maintenance standards are developed and some of the factors which are considered in setting standards; the preceding unit on standards (part 1) should be completed before reading this unit. The format is a programed, self-instruction approach in which…

  9. Toward a System of Total Quality Management: Applying the Deming Approach to the Education Setting.

    ERIC Educational Resources Information Center

    McLeod, Willis B.; And Others

    1992-01-01

    Recently, the Petersburg (Virginia) Public Schools have moved away from a highly centralized organizational structure to a Total Quality Management system featuring shared decision making and school-based management practices. The district was guided by Deming's philosophy that all stakeholders should be involved in defining the level of products…

  10. Translational Genomics Research Institute: Quantified Cancer Cell Line Encyclopedia (CCLE) RNA-seq Data | Office of Cancer Genomics

    Cancer.gov

    Many applications analyze quantified transcript-level abundances to make inferences. Having completed this computation across the large sample set, the CTD2 Center at the Translational Genomics Research Institute presents the quantified data in a straightforward, consolidated form for these types of analyses.

  11. SyRoTek--Distance Teaching of Mobile Robotics

    ERIC Educational Resources Information Center

    Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.

    2013-01-01

    E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…

  12. Early Language Learning for All: Examination of a Prekindergarten French Program in an Inclusion Setting

    ERIC Educational Resources Information Center

    Regalla, Michele; Peker, Hilal

    2015-01-01

    This preliminary study examined a prekindergarten multimodal French program conducted for students in an inclusion charter school. Due to the age and varied ability levels of the students, media such as video and songs combined with kinesthetic activities served as the primary instructional approach. Data on children's ability to understand and…

  13. Assessing the economic approaches to climate-forest policies: a critical survey

    Treesearch

    Grace Y. Wong; R. Janaki R.

    2002-01-01

    The linkage between global climate change and forests has assumed political prominence as forest sinks are now acknowledged as a means for offsetting carbon dioxide (CO2) emissions under the Kyoto Protocol targets. As such, policies to stimulate forest carbon sequestration in an open economy will require varying levels of economic information...

  14. NREL to Research Revolutionary Battery Storage Approaches in Support of

    Science.gov Websites

    adoption by dramatically improving driving range and reliability, and by providing low-cost carbon have the potential to meet the demanding safety, cost and performance levels for EVs set by ARPA-E, but materials to develop a new low-cost battery that operates similar to a flow battery, where chemical energy

  15. Escaping from Sunday School: Teaching "The Middle East" in the Setting of Religion

    ERIC Educational Resources Information Center

    Marten, Michael

    2011-01-01

    The article argues that, in the teaching of religion at undergraduate level, many students approach understanding the historical or contemporary Middle East in ways that are coloured by what they think is biblical knowledge or basic Christian beliefs. This is less noticeable for students in disciplines such as history or politics. Many history or…

  16. An Investigation of Data Privacy and Utility Using Machine Learning as a Gauge

    ERIC Educational Resources Information Center

    Mivule, Kato

    2014-01-01

    The purpose of this investigation is to study and pursue a user-defined approach in preserving data privacy while maintaining an acceptable level of data utility using machine learning classification techniques as a gauge in the generation of synthetic data sets. This dissertation will deal with data privacy, data utility, machine learning…

  17. Resources for Responding to Doomsday 2012: An Annotated Guide

    ERIC Educational Resources Information Center

    Fraknoi, Andrew

    2012-01-01

    Educators at all levels and in all settings are getting questions these days about the approaching "end of the world" catastrophes supposedly coming in December 2012. This resource guide provides a selection of useful resources for responding to student and public questions in this arena. The latest internet myth to gain traction is the notion…

  18. A Conceptual Physical Education Course and College Freshmen's Health-Related Fitness

    ERIC Educational Resources Information Center

    Liu, Jingwen; Shangguan, Rulan; Keating, Xiaofen D.; Leitner, Jessica; Wu, Yigang

    2017-01-01

    Purpose: Conceptual physical education (CPE) classes have been widely offered to promote a healthy lifestyle in higher education settings. The purpose of this paper is to examine the effects of a CPE course on health-related fitness (HRF) levels among college freshmen. Design/methodology/approach: A pre- and post-test research design was used. In…

  19. An Evaluation of the Olweus Bullying Prevention Program's Effectiveness in a High School Setting

    ERIC Educational Resources Information Center

    Losey, Raymond Alan

    2009-01-01

    An ecological approach to bullying prevention is critical for the reduction of bullying and victimization. Any intervention implemented in a school to reduce bullying should include a variety of targets on all levels of the ecology and these interventions need to be sustainable by the school following introduction of the intervention. Schools are…

  20. "Active Science": Integrating Physical Activity and Science Learning into the Afterschool Environment

    ERIC Educational Resources Information Center

    Finn, Kevin E.; Yan, Zi; McInnis, Kyle J.

    2015-01-01

    Background: Afterschool programs offer significant opportunities to increase physical activity levels and improve academic performance of children. Purpose: This study assessed an innovative approach to embed physical activity into science lessons in an afterschool community setting. Methods: Participants were 47 boys and girls (age = 10.8 ± 0.7…

  1. Critical Thinking Skills and Academic Maturity: Emerging Results from a Five-Year Quality Enhancement Plan (QEP) Study

    ERIC Educational Resources Information Center

    Toppin, Ian N.; Chitsonga, Shadreck

    2016-01-01

    The QEP that was implemented in this study focused on enhancing students' critical thinking skills. A pretest/posttest approach was used to assess students' critical thinking progress in freshman level core English and Math courses. An intervention was performed involving intensive instruction and assignments relating to a set of reasoning…

  2. Connecting Science to Everyday Experiences in Preschool Settings

    ERIC Educational Resources Information Center

    Roychoudhury, Anita

    2014-01-01

    In this paper I discuss the challenges of teaching science concepts and discourse in preschool in light of the study conducted by Kristina Andersson and Annica Gullberg. I then suggest a complementary approach to teaching science at this level from the perspective of social construction of knowledge based on Vygotsky's theory (1934/1987). In…

  3. Understanding the Diffusion of Non-Evidence-Based Health Interventions: The Role of Experiential Evidence

    ERIC Educational Resources Information Center

    Evans, Rhiannon; Murphy, Simon; Scourfield, Jonathan; Turley, Ruth

    2017-01-01

    Objective: The utilisation of evidence-based health interventions remains a challenge in educational settings. Although driving forward the scientific evidence-base may contribute to the diffusion of such approaches, abstract notions of population-level impact may not be seen as priorities in local. This paper considers the alternative forms of…

  4. Occupancy in community-level studies

    USGS Publications Warehouse

    MacKenzie, Darryl I.; Nichols, James; Royle, Andy; Pollock, Kenneth H.; Bailey, Larissa L.; Hines, James

    2018-01-01

    Another type of multi-species study is one focused on community-level metrics such as species richness. In this chapter we detail how some of the single-species occupancy models described in earlier chapters have been applied, or extended, for use in such studies, while accounting for imperfect detection. We highlight how Bayesian methods using MCMC are particularly useful in such settings to easily calculate relevant community-level summaries based on presence/absence data. These modeling approaches can be used to assess richness at a single point in time, or to investigate changes in the species pool over time.
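
    As a concrete illustration of deriving community-level summaries from presence/absence output, the sketch below computes site-level and regional species richness from hypothetical posterior draws of a latent occupancy matrix. The array shapes, occupancy probability, and variable names are assumptions for illustration; a real analysis would use the MCMC draws of the latent occupancy states produced by the fitted model.

```python
import numpy as np

# Hypothetical posterior draws of the latent occupancy matrix from an MCMC fit:
# z[draw, species, site] = 1 if the species is present at that site in that draw.
rng = np.random.default_rng(7)
z = rng.binomial(1, 0.05, size=(2000, 30, 50))        # 2000 draws, 30 species, 50 sites

# Site-level richness: number of species present at each site, per posterior draw.
site_richness = z.sum(axis=1)                         # shape (draws, sites)

# Regional richness: species present at one or more sites, per posterior draw.
regional_richness = (z.sum(axis=2) > 0).sum(axis=1)   # shape (draws,)

print("posterior mean richness at site 0:", site_richness[:, 0].mean())
print("95% credible interval for regional richness:",
      np.percentile(regional_richness, [2.5, 97.5]))
```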

  5. Results of acoustic testing of the JT8D-109 refan engines

    NASA Technical Reports Server (NTRS)

    Burdsall, E. A.; Brochu, F. P.; Scaramella, V. M.

    1975-01-01

    A JT8D engine was modified to reduce jet noise levels by 6-8 PNdB at takeoff power without increasing fan generated noise levels. Designated the JT8D-109, the modified engines featured a larger single stage fan, and acoustic treatment in the fan discharge ducts. Noise levels were measured on an outdoor test facility for eight engine/acoustic treatment configurations. Compared to the baseline JT8D, the fully treated JT8D-109 showed reductions of 6 PNdB at takeoff, and 11 PNdB at a typical approach power setting.

  6. Using Design Capability Indices to Satisfy Ranged Sets of Design Requirements

    NASA Technical Reports Server (NTRS)

    Chen, Wei; Allen, Janet K.; Simpson, Timothy W.; Mistree, Farrokh

    1996-01-01

    For robust design it is desirable to allow the design requirements to vary within a certain range rather than setting point targets. This is particularly important during the early stages of design when little is known about the system and its requirements. Toward this end, design capability indices are developed in this paper to assess the capability of a family of designs, represented by a range of top-level design specifications, to satisfy a ranged set of design requirements. Design capability indices are based on process capability indices from statistical process control and provide a single objective, alternate approach to the use of Taguchi's signal-to-noise ratio which is often used for robust design. Successful implementation of design capability indices ensures that a family of designs conforms to a given ranged set of design requirements. To demonstrate an application and the usefulness of design capability indices, the design of a solar powered irrigation system is presented. Our focus in this paper is on the development and implementation of design capability indices as an alternate approach to the use of the signal-to-noise ratio and not on the results of the example problem, per se.
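
    The abstract's link to statistical process control can be made concrete with a small sketch: a Cpk-style index measuring how comfortably a family of designs sits inside a ranged requirement. This is an illustrative stand-in rather than the paper's exact design capability index formulation, and the performance values and limits below are invented for the example.

```python
import numpy as np

def capability_index(responses, lower, upper):
    """Cpk-style index for a ranged requirement [lower, upper].

    `responses` are performance values sampled across a family of designs
    (a range of top-level specifications). Values >= 1 suggest the family
    comfortably satisfies the ranged requirement; this only mirrors process
    capability indices and is not the paper's exact formulation.
    """
    responses = np.asarray(responses, dtype=float)
    mu, sigma = responses.mean(), responses.std(ddof=1)
    return min(upper - mu, mu - lower) / (3.0 * sigma)

# Hypothetical example: pump efficiency across a candidate design family,
# with the requirement that efficiency stay between 0.60 and 0.80.
family = np.random.default_rng(1).normal(0.72, 0.02, size=200)
print(f"capability index: {capability_index(family, 0.60, 0.80):.2f}")
```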

  7. LUXSim: A component-centric approach to low-background simulations

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2012-02-13

    Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
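
    LUXSim itself is a Geant4-based C++ framework, but the component-centric idea — sources and record levels attached to individual parts at run time, with one run covering the whole collection — can be sketched in a few lines of Python. All class and field names here are hypothetical and are not the LUXSim API.

```python
import numpy as np

class Component:
    """A detector part that carries its own radioactive sources and record level."""
    def __init__(self, name, record_level=0):
        self.name = name
        self.record_level = record_level      # how much detail to record for this part
        self.sources = []                     # (isotope, activity in Bq), set at run time

    def add_source(self, isotope, activity_bq):
        self.sources.append((isotope, activity_bq))

def generate_primaries(components, livetime_s, rng=None):
    """Draw primaries component by component, so every part's internal
    radioactivity is handled within a single simulated run."""
    rng = rng or np.random.default_rng(0)
    primaries = []
    for comp in components:
        for isotope, activity_bq in comp.sources:
            n_decays = rng.poisson(activity_bq * livetime_s)
            primaries.extend((comp.name, isotope, comp.record_level)
                             for _ in range(n_decays))
    return primaries

# Hypothetical geometry: sources belong to parts, not to one external beam.
pmt = Component("PMT_window", record_level=2)
pmt.add_source("U238", 5e-4)
cryostat = Component("Cryostat", record_level=1)
cryostat.add_source("Co60", 1e-4)
print(len(generate_primaries([pmt, cryostat], livetime_s=1e5)))
```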

  8. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
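
    The multiple-testing piece of the abstract — correcting for correlated gene-set statistics via an effective number of tests — can be illustrated with a standard eigenvalue-based estimator. The Galwey-style estimator and Sidak-style adjustment below are common choices used here purely for illustration; the paper derives its own simple estimators, and the correlation matrix is made up.

```python
import numpy as np

def effective_number_of_tests(corr):
    """Galwey-style estimate of the effective number of independent tests
    from the eigenvalues of the correlation matrix among gene-set statistics.
    (One common estimator; not necessarily the one derived in the paper.)"""
    lam = np.clip(np.linalg.eigvalsh(np.asarray(corr)), 0.0, None)
    return (np.sqrt(lam).sum() ** 2) / lam.sum()

def fwer_threshold(alpha, corr):
    """Sidak-style per-test threshold using the effective number of tests."""
    m_eff = effective_number_of_tests(corr)
    return 1.0 - (1.0 - alpha) ** (1.0 / m_eff)

# Hypothetical correlation between three overlapping pathway statistics
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
print(f"effective number of tests: {effective_number_of_tests(corr):.2f}")
print(f"per-set alpha for FWER 0.05: {fwer_threshold(0.05, corr):.4f}")
```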

  9. Space-time latent component modeling of geo-referenced health data.

    PubMed

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  10. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706
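
    A stripped-down sketch of the layered command structure is given below: a higher-level controller issues setpoints that a lower-level controller turns into plant commands. It deliberately omits the paper's main machinery (networks trained on optimal associations generated with forward models and cost minimization) and uses hand-tuned gains and a toy point-mass plant as assumptions.

```python
import numpy as np

def low_level_controller(state, setpoint, kp=2.0, kd=1.0):
    """Simple PD 'plant driver': turns a position setpoint into a force."""
    pos, vel = state
    return kp * (setpoint - pos) - kd * vel

def high_level_controller(state, goal):
    """Issues intermediate setpoints to the low-level controller rather than
    commanding the plant directly (here: step part-way toward the goal)."""
    pos, _ = state
    return pos + np.clip(goal - pos, -0.5, 0.5)

# Hypothetical plant: a unit point mass, integrated with Euler steps.
state, goal, dt = np.array([0.0, 0.0]), 5.0, 0.05
for _ in range(400):
    setpoint = high_level_controller(state, goal)        # upper level
    force = low_level_controller(state, setpoint)        # lower level
    state = state + dt * np.array([state[1], force])     # plant dynamics
print(f"final position = {state[0]:.2f} (goal {goal})")
```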

  11. A System of Systems Approach to Integrating Global Sea Level Change Application Programs

    NASA Astrophysics Data System (ADS)

    Bambachus, M. J.; Foster, R. S.; Powell, C.; Cole, M.

    2005-12-01

    The global sea level change application community has numerous disparate models used to make predictions over various regional and temporal scales. These models have typically been focused on limited sets of data and optimized for specific areas or questions of interest. Increasingly, decision makers at the national, international, and local/regional levels require access to these application data models and want to be able to integrate large disparate data sets, with new ubiquitous sensor data, and use these data across models from multiple sources. These requirements will force the Global Sea Level Change application community to take a new system-of-systems approach to their programs. We present a new technical architecture approach to the global sea level change program that provides external access to the vast stores of global sea level change data, provides a collaboration forum for the discussion and visualization of data, and provides a simulation environment to evaluate decisions. This architectural approach will provide the tools to support multi-disciplinary decision making. A conceptual system of systems approach is needed to address questions around the multiple approaches to tracking and predicting Sea Level Change. A system of systems approach would include (1) a forum of data providers, modelers, and users, (2) a service oriented architecture including interoperable web services with a backbone of Grid computing capability, and (3) discovery and access functionality to the information developed through this structure. Each of these three areas would be clearly designed to maximize communication, data use for decision making and flexibility and extensibility for evolution of technology and requirements. In contemplating a system-of-systems approach, it is important to highlight common understanding and coordination as foundational to success across the multiple systems. The workflow of science in different applications is often conceptually similar but different in the details. These differences can discourage the potential for collaboration. Resources that are not inherently shared (or do not spring from a common authority) must be explicitly coordinated to avoid disrupting the collaborative research workflow. This includes tools which make the interaction of systems (and users with systems, and administrators of systems) more conceptual and higher-level than is typically done today. Such tools all appear under the heading of Grid, within a larger idea of metacomputing. We present an approach for successful collaboration and shared use of distributed research resources. The real advances in research throughput that are occurring through the use of large computers are occurring less as a function of progress in a given discrete algorithm and much more as a function of model and data coupling. Complexity normally reduces the ability of the human mind to understand and work with this kind of coupling. Intuitive Grid-based computational resources simultaneously reduce the effect of this complexity on the scientist/decision maker, and increase the ability to rationalize complexity. Research progress can even be achieved before full understanding of complexity has been reached, by modeling and experimenting and providing more data to think about. Analytic engines provided via the Grid can help digest this data and make it tractable through visualization and exploration tools.
We present a rationale for increasing research throughput by leveraging more complex model and data interaction.

  12. Learning to teach optics through experiments and demonstrations

    NASA Astrophysics Data System (ADS)

    Lancis, Jesús; Fernández-Alonso, Mercedes; Martínez-León, Lluis; Tajahuerce-Romera, Enrique; Mínguez-Vega, Gladis

    2014-07-01

    We have applied an active methodology to pre-service teacher training courses and to active teacher workshops on Optics. As a practical resource, a set of demonstrations has been used to learn how to perform classroom demonstrations. The set includes experiments about polarization and birefringence, optical information transmission, diffraction, fluorescence or scattering. It had been prepared for Science popularization activities and has been employed in several settings with a variety of audiences. In the teacher training sessions, simple but clarifying experiments have been performed by all the participants. Moreover, in these workshops, devices or basic set-ups, like the ones included in our demonstration set, have been built. The practical approach has allowed the enthusiastic sharing of teaching and learning experiences among the workshop participants. We believe that such an active orientation in teacher training courses promotes the active and collaborative teaching and learning of Optics in different levels of Education.

  13. Calculating orthologs in bacteria and Archaea: a divide and conquer approach.

    PubMed

    Halachev, Mihail R; Loman, Nicholas J; Pallen, Mark J

    2011-01-01

    Among proteins, orthologs are defined as those that are derived by vertical descent from a single progenitor in the last common ancestor of their host organisms. Our goal is to compute a complete set of protein orthologs derived from all currently available complete bacterial and archaeal genomes. Traditional approaches typically rely on all-against-all BLAST searching which is prohibitively expensive in terms of hardware requirements or computational time (requiring an estimated 18 months or more on a typical server). Here, we present xBASE-Orth, a system for ongoing ortholog annotation, which applies a "divide and conquer" approach and adopts a pragmatic scheme that trades accuracy for speed. Starting at species level, xBASE-Orth carefully constructs and uses pan-genomes as proxies for the full collections of coding sequences at each level as it progressively climbs the taxonomic tree using the previously computed data. This leads to a significant decrease in the number of alignments that need to be performed, which translates into faster computation, making ortholog computation possible on a global scale. Using xBASE-Orth, we analyzed an NCBI collection of 1,288 bacterial and 94 archaeal complete genomes with more than 4 million coding sequences in 5 weeks and predicted more than 700 million ortholog pairs, clustered in 175,531 orthologous groups. We have also identified sets of highly conserved bacterial and archaeal orthologs and in so doing have highlighted anomalies in genome annotation and in the proposed composition of the minimal bacterial genome. In summary, our approach allows for scalable and efficient computation of the bacterial and archaeal ortholog annotations. In addition, due to its hierarchical nature, it is suitable for incorporating novel complete genomes and alternative genome annotations. The computed ortholog data and a continuously evolving set of applications based on it are integrated in the xBASE database, available at http://www.xbase.ac.uk/.
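
    The divide-and-conquer idea — building a pan-genome at each taxonomic node from its children's pan-genomes and using it as a proxy for the full set of coding sequences — can be sketched as a small recursion. The toy clustering step below just deduplicates identical strings and every name is invented; a real pipeline such as xBASE-Orth would cluster by sequence similarity.

```python
def build_pan_genome(node, genomes, children, cluster):
    """Divide and conquer: a node's pan-genome is built only from its
    children's pan-genomes (representative sequences), never from every
    genome at once, which keeps the number of alignments manageable."""
    if node in genomes:                       # leaf: one genome's coding sequences
        members = list(genomes[node])
    else:                                     # internal node: merge child pan-genomes
        members = []
        for child in children[node]:
            members.extend(build_pan_genome(child, genomes, children, cluster))
    return cluster(members)                   # keep one representative per cluster

# Toy stand-ins: 'cluster' here only removes exact duplicates; a real pipeline
# would use sequence-similarity clustering of the representatives.
genomes = {"strain_A": ["atg1", "atg2"], "strain_B": ["atg1", "atg3"]}
children = {"species_X": ["strain_A", "strain_B"]}
print(build_pan_genome("species_X", genomes, children, cluster=lambda s: sorted(set(s))))
```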

  14. Automatic identification of resting state networks: an extended version of multiple template-matching

    NASA Astrophysics Data System (ADS)

    Guaje, Javier; Molina, Juan; Rudas, Jorge; Demertzi, Athena; Heine, Lizette; Tshibanda, Luaba; Soddu, Andrea; Laureys, Steven; Gómez, Francisco

    2015-12-01

    Functional magnetic resonance imaging in resting state (fMRI-RS) constitutes an informative protocol to investigate several pathological and pharmacological conditions. A common approach to study this data source is through the analysis of changes in the so-called resting state networks (RSNs). These networks correspond to well-defined functional entities that have been associated with different low- and high-order brain functions. RSNs may be characterized by using Independent Component Analysis (ICA). ICA provides a decomposition of the fMRI-RS signal into sources of brain activity, but it lacks information about the nature of the signal, i.e., whether the source is artifactual or not. Recently, a multiple template-matching (MTM) approach was proposed to automatically recognize RSNs in a set of Independent Components (ICs). This method provides valuable information to assess subjects at the individual level. Nevertheless, it lacks a mechanism to quantify how much certainty there is about the existence/absence of each network. This information may be important for the assessment of patients with severely damaged brains, in which RSNs may be greatly affected as a result of the pathological condition. In this work we propose a set of changes to the original MTM that improve the RSN recognition task and also extend the functionality of the method. The key points of this improvement are a standardization strategy and a modification of the method's constraints that adds flexibility to the approach. Additionally, we introduce an analysis of the trustworthiness measurement of each RSN obtained using the template-matching approach. This analysis consists of a thresholding strategy applied over the computed Goodness-of-Fit (GOF) between the set of templates and the ICs. The proposed method was validated on two independent studies (Baltimore, 23 healthy subjects and Liege, 27 healthy subjects) with different configurations of MTM. Results suggest that the method will provide complementary information for the characterization of RSNs at the individual level.
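
    A minimal sketch of the template-matching and thresholding ingredients is given below: one widely used goodness-of-fit definition (mean absolute IC value inside a binary template minus the mean outside) followed by a cut-off that turns the score into a crude presence call. The exact GOF variant, the standardization step, and the threshold used by the authors may differ; the 1-D arrays stand in for 3-D voxel maps.

```python
import numpy as np

def goodness_of_fit(ic_map, template):
    """GOF between an IC spatial map and a binary RSN template: average
    (absolute) IC value inside the template minus the average outside.
    This is one common template-matching definition, not necessarily the
    paper's exact variant."""
    ic_map = np.abs(np.asarray(ic_map, dtype=float).ravel())
    mask = np.asarray(template, dtype=bool).ravel()
    return ic_map[mask].mean() - ic_map[~mask].mean()

def classify_rsn(ic_map, template, threshold=0.5):
    """Thresholding the GOF gives a crude 'present / uncertain' call per network."""
    gof = goodness_of_fit(ic_map, template)
    return gof, ("present" if gof >= threshold else "uncertain")

# Hypothetical 1-D stand-ins for voxel maps (real data would be 3-D volumes)
template = np.array([1, 1, 1, 0, 0, 0, 0, 0])
ic_map = np.array([2.1, 1.8, 2.4, 0.2, -0.1, 0.3, 0.0, 0.1])
print(classify_rsn(ic_map, template))
```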

  15. Manual cleaning of hospital mattresses: an observational study comparing high- and low-resource settings.

    PubMed

    Hopman, J; Hakizimana, B; Meintjes, W A J; Nillessen, M; de Both, E; Voss, A; Mehtar, S

    2016-01-01

    Hospital-associated infections (HAIs) are more frequently encountered in low- than in high-resource settings. There is a need to identify and implement feasible and sustainable approaches to strengthen HAI prevention in low-resource settings. To evaluate the biological contamination of routinely cleaned mattresses in both high- and low-resource settings. In this two-stage observational study, routine manual bed cleaning was evaluated at two university hospitals using adenosine triphosphate (ATP). Standardized training of cleaning personnel was achieved in both high- and low-resource settings. Qualitative analysis of the cleaning process was performed to identify predictors of cleaning outcome in low-resource settings. Mattresses in low-resource settings were highly contaminated prior to cleaning. Cleaning significantly reduced biological contamination of mattresses in low-resource settings (P < 0.0001). After training, the contamination observed after cleaning in both the high- and low-resource settings seemed comparable. Cleaning with appropriate type of cleaning materials reduced the contamination of mattresses adequately. Predictors for mattresses that remained contaminated in a low-resource setting included: type of product used, type of ward, training, and the level of contamination prior to cleaning. In low-resource settings mattresses were highly contaminated as noted by ATP levels. Routine manual cleaning by trained staff can be as effective in a low-resource setting as in a high-resource setting. We recommend a multi-modal cleaning strategy that consists of training of domestic services staff, availability of adequate time to clean beds between patients, and application of the correct type of cleaning products. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  16. Using Registered Dental Hygienists to Promote a School-Based Approach to Dental Public Health

    PubMed Central

    Wellever, Anthony; Kelly, Patricia

    2017-01-01

    We examine a strategy for improving oral health in the United States by focusing on low-income children in school-based settings. Vulnerable children often experience cultural, social, economic, structural, and geographic barriers when trying to access dental services in traditional dental office settings. These disparities have been discussed for more than a decade in multiple US Department of Health and Human Services publications. One solution is to revise dental practice acts to allow registered dental hygienists increased scope of services, expanded public health delivery opportunities, and decreased dentist supervision. We provide examples of how federally qualified health centers have implemented successful school-based dental models within the parameters of two state policies that allow registered dental hygienists varying levels of dentist supervision. Changes to dental practice acts at the state level allowing registered dental hygienists to practice with limited supervision in community settings, such as schools, may provide vulnerable populations greater access to screening and preventive services. We derive our recommendations from expert opinion. PMID:28661808

  17. DOE Waste Treatability Group Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkpatrick, T.D.

    1995-01-01

    This guidance presents a method and definitions for aggregating U.S. Department of Energy (DOE) waste into streams and treatability groups based on characteristic parameters that influence waste management technology needs. Adaptable to all DOE waste types (i.e., radioactive waste, hazardous waste, mixed waste, sanitary waste), the guidance establishes categories and definitions that reflect variations within the radiological, matrix (e.g., bulk physical/chemical form), and regulated contaminant characteristics of DOE waste. Beginning at the waste container level, the guidance presents a logical approach to implementing the characteristic parameter categories as part of the basis for defining waste streams and as the sole basis for assigning streams to treatability groups. Implementation of this guidance at each DOE site will facilitate the development of technically defined, site-specific waste stream data sets to support waste management planning and reporting activities. Consistent implementation at all of the sites will enable aggregation of the site-specific waste stream data sets into comparable national data sets to support these activities at a DOE complex-wide level.
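
    The aggregation logic — container-level records rolled up into streams and treatability groups keyed by radiological, matrix, and regulated-contaminant categories — amounts to a grouping operation. The sketch below assumes hypothetical field names and category values; it is only meant to show the shape of the roll-up, not the guidance's actual category definitions.

```python
from collections import defaultdict

def assign_treatability_groups(containers):
    """Aggregate container-level records into groups keyed by the three
    characteristic-parameter categories the guidance describes (radiological,
    matrix, regulated contaminant); field names here are hypothetical."""
    groups = defaultdict(list)
    for c in containers:
        key = (c["radiological"], c["matrix"], c["contaminant"])
        groups[key].append(c["container_id"])
    return dict(groups)

containers = [
    {"container_id": "DRM-001", "radiological": "TRU", "matrix": "debris", "contaminant": "Hg"},
    {"container_id": "DRM-002", "radiological": "TRU", "matrix": "debris", "contaminant": "Hg"},
    {"container_id": "DRM-003", "radiological": "LLW", "matrix": "soil",   "contaminant": "none"},
]
for key, ids in assign_treatability_groups(containers).items():
    print(key, "->", ids)
```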

  18. Aligning corporate greenhouse-gas emissions targets with climate goals

    NASA Astrophysics Data System (ADS)

    Krabbe, Oskar; Linthorst, Giel; Blok, Kornelis; Crijns-Graus, Wina; van Vuuren, Detlef P.; Höhne, Niklas; Faria, Pedro; Aden, Nate; Pineda, Alberto Carrillo

    2015-12-01

    Corporate climate action is increasingly considered important in driving the transition towards a low-carbon economy. For this, it is critical to ensure translation of global goals to greenhouse-gas (GHG) emissions reduction targets at company level. At the moment, however, there is a lack of clear methods to derive consistent corporate target setting that keeps cumulative corporate GHG emissions within a specific carbon budget (for example, 550-1,300 GtCO2 between 2011 and 2050 for the 2 °C target). Here we propose a method for corporate emissions target setting that derives carbon intensity pathways for companies based on sectoral pathways from existing mitigation scenarios: the Sectoral Decarbonization Approach (SDA). These company targets take activity growth and initial performance into account. Next to target setting on company level, the SDA can be used by companies, policymakers, investors or other stakeholders as a benchmark for tracking corporate climate performance and actions, providing a mechanism for corporate accountability.
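
    The core convergence idea can be sketched numerically: a company's intensity gap to the sector's end-year intensity shrinks in step with the sector's own decarbonization. The published SDA additionally corrects for activity growth and market share, which the sketch below omits, and all intensity figures are invented for illustration.

```python
def company_intensity_pathway(company_base, sector_pathway):
    """Simplified convergence logic: the company's intensity gap to the sector's
    end-year intensity shrinks at the same pace as the sector decarbonizes.
    The published SDA also adjusts for activity growth and market share,
    which is omitted here; numbers below are purely illustrative."""
    years = sorted(sector_pathway)
    base_year, end_year = years[0], years[-1]
    s0, s_end = sector_pathway[base_year], sector_pathway[end_year]
    pathway = {}
    for year in years:
        d = (sector_pathway[year] - s_end) / (s0 - s_end)   # sector decarbonization index, 1 -> 0
        pathway[year] = s_end + (company_base - s_end) * d
    return pathway

# Hypothetical sector pathway (tCO2 per tonne of product) from a 2C mitigation scenario
sector = {2015: 1.80, 2030: 1.10, 2050: 0.30}
print(company_intensity_pathway(company_base=2.40, sector_pathway=sector))
```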

  19. Office management of gait disorders in the elderly

    PubMed Central

    Lam, Robert

    2011-01-01

    Abstract Objective To provide family physicians with an approach to office management of gait disorders in the elderly. Sources of information Ovid MEDLINE was searched from 1950 to July 2010 using subject headings for gait or neurologic gait disorders combined with physical examination. Articles specific to family practice or family physicians were selected. Relevant review articles and original research were used when appropriate and applicable to the elderly. Main message Gait and balance disorders in the elderly are difficult to recognize and diagnose in the family practice setting because they initially present with subtle undifferentiated manifestations, and because causes are usually multifactorial, with multiple diseases developing simultaneously. To further complicate the issue, these manifestations can be camouflaged in elderly patients by the physiologic changes associated with normal aging. A classification of gait disorders based on sensorimotor levels can be useful in the approach to management of this problem. Gait disorders in patients presenting to family physicians in the primary care setting are often related to joint and skeletal problems (lowest-level disturbances), as opposed to patients referred to neurology specialty clinics with sensory ataxia, myelopathy, multiple strokes, and parkinsonism (lowest-, middle-, and highest-level disturbances). The difficulty in diagnosing gait disorders stems from the challenge of addressing early undifferentiated disease caused by multiple disease processes involving all sensorimotor levels. Patients might present with a nonspecific “cautious” gait that is simply an adaptation of the body to disease limitations. This cautious gait has a mildly flexed posture with reduced arm swing and a broadening of the base of support. This article reviews the focused history (including medication review), practical physical examination, investigations, and treatments that are key to office management of gait disorders. Conclusion Family physicians will find it helpful to classify gait disorders based on sensorimotor level as part of their approach to office management of elderly patients. Managing gait disorders at early stages can help prevent further deconditioning and mobility impairment. PMID:21753097

  20. Capacity of English NHS hospitals to monitor quality in infection prevention and control using a new European framework: a multilevel qualitative analysis

    PubMed Central

    Iwami, Michiyo; Ahmad, Raheelah; Castro-Sánchez, Enrique; Birgand, Gabriel; Johnson, Alan P; Holmes, Alison

    2017-01-01

    Objective (1) To assess the extent to which current English national regulations/policies/guidelines and local hospital practices align with indicators suggested by a European review of effective strategies for infection prevention and control (IPC); (2) to examine the capacity of local hospitals to report on the indicators and current use of data to inform IPC management and practice. Design A national and local-level analysis of the 27 indicators was conducted. At the national level, documentary review of regulations/policies/guidelines was conducted. At the local level data collection comprised: (a) review of documentary sources from 14 hospitals, to determine the capacity to report performance against these indicators; (b) qualitative interviews with 3 senior managers from 5 hospitals and direct observation of hospital wards to find out if these indicators are used to improve IPC management and practice. Setting 2 acute English National Health Service (NHS) trusts and 1 NHS foundation trust (14 hospitals). Participants 3 senior managers from 5 hospitals for qualitative interviews. Primary and secondary outcome measures As primary outcome measures, a ‘Red-Amber-Green’ (RAG) rating was developed reflecting how well the indicators were included in national documents or their availability at the local organisational level. The current use of the indicators to inform IPC management and practice was also assessed. The main secondary outcome measure is any inconsistency between national and local RAG rating results. Results National regulations/policies/guidelines largely cover the suggested European indicators. The ability of individual hospitals to report some of the indicators at ward level varies across staff groups, which may mask required improvements. A reactive use of staffing-related indicators was observed rather than the suggested prospective strategic approach for IPC management. Conclusions For effective patient safety and infection prevention in English hospitals, routine and proactive approaches need to be developed. Our approach to evaluation can be extended to other country settings. PMID:28115331
