Sample records for commonly applied methods

  1. Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating

    ERIC Educational Resources Information Center

    He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei

    2013-01-01

    Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
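
    The record above is truncated, but the general idea of a regression-based outlier screen for common items can be sketched: regress the new-form item parameter estimates on the old-form estimates and flag items with large standardized residuals. The following minimal Python sketch is hypothetical and illustrative, not the authors' exact procedure.

    ```python
    import numpy as np

    def flag_outlier_items(b_old, b_new, z_thresh=2.0):
        """Flag common items whose new-form difficulty estimates look
        inconsistent with the old form, via a simple linear regression.
        Illustrative sketch only; not the paper's exact procedure."""
        b_old, b_new = np.asarray(b_old, float), np.asarray(b_new, float)
        # Fit b_new = slope * b_old + intercept by least squares.
        slope, intercept = np.polyfit(b_old, b_new, deg=1)
        residuals = b_new - (slope * b_old + intercept)
        z = (residuals - residuals.mean()) / residuals.std(ddof=1)
        return np.where(np.abs(z) > z_thresh)[0]  # indices of suspect items

    # Hypothetical example: item 3 drifted between administrations.
    b_old = [-1.2, -0.5, 0.0, 0.4, 1.1, 1.8]
    b_new = [-1.1, -0.4, 0.1, 1.5, 1.2, 1.9]
    print(flag_outlier_items(b_old, b_new))
    ```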

  2. Fitting Residual Error Structures for Growth Models in SAS PROC MCMC

    ERIC Educational Resources Information Center

    McNeish, Daniel

    2017-01-01

    In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…

  3. Solid Phase Extraction (SPE) for Biodiesel Processing and Analysis

    DTIC Science & Technology

    2017-12-13

1 METHODS ...sources. There are several methods that can be applied to the development of separation techniques that may replace necessary water wash steps in...biodiesel refinement. Unfortunately, the most common methods are poorly suited or face high costs when applied to diesel purification. Distillation is

  4. Generalised Pareto distribution: impact of rounding on parameter estimation

    NASA Astrophysics Data System (ADS)

    Pasarić, Z.; Cindrić, K.

    2018-05-01

Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing the real, dry spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx), combined with (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least regarding the dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at the threshold is applied to rounded data—which is actually essential when dealing with a GP distribution—then classical methods applied in a standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add an appropriate uniform noise to the rounded data (so-called jittering). This, in a sense, reverses the process of rounding; thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≲ 1), then none of the above recipes works, and specific methods for rounded data should be applied.
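
    The jittering remedy described above is easy to illustrate: add uniform noise over one rounding increment to "un-round" the data, then apply the usual maximum-likelihood fit. A minimal sketch with scipy follows; all values are illustrative, and truncation-style rounding is assumed so the jittered data stay non-negative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulate dry-spell-like data from a generalised Pareto distribution,
    # then round it down to an increment dx (shape, scale, dx are illustrative).
    shape, scale, dx = 0.1, 8.0, 1.0
    raw = stats.genpareto.rvs(shape, loc=0.0, scale=scale, size=5000, random_state=rng)
    rounded = np.floor(raw / dx) * dx          # truncation-style rounding

    # Naive maximum-likelihood fit on the rounded data (location fixed at 0).
    c_naive, _, s_naive = stats.genpareto.fit(rounded, floc=0.0)

    # "Jittering": add uniform noise over one increment to reverse the rounding,
    # then apply the common ML method as usual.
    jittered = rounded + rng.uniform(0.0, dx, size=rounded.size)
    c_jit, _, s_jit = stats.genpareto.fit(jittered, floc=0.0)

    print(f"true (shape, scale) = ({shape}, {scale})")
    print(f"naive fit           = ({c_naive:.3f}, {s_naive:.3f})")
    print(f"jittered fit        = ({c_jit:.3f}, {s_jit:.3f})")
    ```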

  5. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  6. A blind search for a common signal in gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Creswell, James; von Hausegger, Sebastian; Jackson, Andrew D.; Naselsky, Pavel

    2018-02-01

    We propose a blind, template-free method for the extraction of a common signal between the Hanford and Livingston detectors and apply it especially to the GW150914 event. We construct a log-likelihood method that maximizes the cross-correlation between each detector and the common signal and minimizes the cross-correlation between the residuals. The reliability of this method is tested using simulations with an injected common signal. Finally, our method is used to assess the quality of theoretical gravitational wave templates for GW150914.
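
    The cross-correlation ingredient of such a method can be sketched in a few lines: slide one detector's strain series against the other and pick the lag with the largest normalized correlation. The toy data and the simple statistic below are illustrative only; the paper's full log-likelihood construction, which also minimizes residual cross-correlations, is not reproduced.

    ```python
    import numpy as np

    def best_lag(h1, l1, max_lag):
        """Return the lag (in samples) that maximizes the normalized
        cross-correlation between two detector strain series."""
        h1 = (h1 - h1.mean()) / h1.std()
        l1 = (l1 - l1.mean()) / l1.std()
        lags = np.arange(-max_lag, max_lag + 1)

        def corr(lag):
            if lag >= 0:
                a, b = h1[lag:], l1[:len(l1) - lag]
            else:
                a, b = h1[:len(h1) + lag], l1[-lag:]
            return np.mean(a * b)

        scores = np.array([corr(k) for k in lags])
        return lags[np.argmax(scores)], scores.max()

    # Toy example: a common chirp-like signal buried in independent noise,
    # with a 7-sample offset between the two detectors.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 4096)
    signal = np.sin(2 * np.pi * (30 + 100 * t) * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
    h1 = np.roll(signal, 7) + 0.5 * rng.standard_normal(t.size)
    l1 = signal + 0.5 * rng.standard_normal(t.size)
    print(best_lag(h1, l1, max_lag=20))
    ```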

  7. Estimating and Interpreting Latent Variable Interactions: A Tutorial for Applying the Latent Moderated Structural Equations Method

    ERIC Educational Resources Information Center

    Maslowsky, Julie; Jager, Justin; Hemken, Douglas

    2015-01-01

    Latent variables are common in psychological research. Research questions involving the interaction of two variables are likewise quite common. Methods for estimating and interpreting interactions between latent variables within a structural equation modeling framework have recently become available. The latent moderated structural equations (LMS)…

  8. Methodological flaws introduce strong bias into molecular analysis of microbial populations.

    PubMed

    Krakat, N; Anjum, R; Demirel, B; Schröder, P

    2017-02-01

    In this study, we report how different cell disruption methods, PCR primers and in silico analyses can seriously bias results from microbial population studies, with consequences for the credibility and reproducibility of the findings. Our results emphasize the pitfalls of commonly used experimental methods that can seriously weaken the interpretation of results. Four different cell lysis methods, three commonly used primer pairs and various computer-based analyses were applied to investigate the microbial diversity of a fermentation sample composed of chicken dung. The fault-prone, but still frequently used, amplified rRNA gene restriction analysis was chosen to identify common weaknesses. In contrast to other studies, we focused on the complete analytical process, from cell disruption to in silico analysis, and identified potential error rates. This identified a wide disagreement of results between applied experimental approaches leading to very different community structures depending on the chosen approach. The interpretation of microbial diversity data remains a challenge. In order to accurately investigate the taxonomic diversity and structure of prokaryotic communities, we suggest a multi-level approach combining DNA-based and DNA-independent techniques. The identified weaknesses of commonly used methods to study microbial diversity can be overcome by a multi-level approach, which produces more reliable data about the fate and behaviour of microbial communities of engineered habitats such as biogas plants, so that the best performance can be ensured. © 2016 The Society for Applied Microbiology.

  9. Identifying hidden common causes from bivariate time series: a method using recurrence plots.

    PubMed

    Hirata, Yoshito; Aihara, Kazuyuki

    2010-01-01

    We propose a method for inferring the existence of hidden common causes from observations of bivariate time series. We detect related time series by excessive simultaneous recurrences in the corresponding recurrence plots. We also use a noncoverage property of a recurrence plot by the other to deny the existence of a directional coupling. We apply the proposed method to real wind data.
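
    A recurrence plot and a simple count of simultaneous recurrences can be sketched directly; the threshold, the toy data, and the comparison against an independence baseline below are illustrative assumptions, not the authors' exact statistic.

    ```python
    import numpy as np

    def recurrence_plot(x, eps):
        """Boolean recurrence matrix R[i, j] = (|x_i - x_j| < eps) for a scalar series."""
        d = np.abs(x[:, None] - x[None, :])
        return d < eps

    # Count simultaneous recurrences between two series as a rough indicator
    # of a shared (possibly hidden) common driver.
    rng = np.random.default_rng(2)
    driver = np.sin(np.linspace(0, 20, 500))              # hidden common cause
    x = driver + 0.3 * rng.standard_normal(500)
    y = 0.8 * driver + 0.3 * rng.standard_normal(500)

    rx, ry = recurrence_plot(x, 0.2), recurrence_plot(y, 0.2)
    joint = np.logical_and(rx, ry).mean()
    expected = rx.mean() * ry.mean()                      # what independence would predict
    print(f"joint recurrence rate {joint:.3f} vs independent expectation {expected:.3f}")
    ```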

  10. A network approach for identifying and delimiting biogeographical regions.

    PubMed

    Vilhena, Daril A; Antonelli, Alexandre

    2015-04-24

    Biogeographical regions (geographically distinct assemblages of species and communities) constitute a cornerstone for ecology, biogeography, evolution and conservation biology. Species turnover measures are often used to quantify spatial biodiversity patterns, but algorithms based on similarity can be sensitive to common sampling biases in species distribution data. Here we apply a community detection approach from network theory that incorporates complex, higher-order presence-absence patterns. We demonstrate the performance of the method by applying it to all amphibian species in the world (c. 6,100 species), all vascular plant species of the USA (c. 17,600) and a hypothetical data set containing a zone of biotic transition. In comparison with current methods, our approach tackles the challenges posed by transition zones and succeeds in retrieving a larger number of commonly recognized biogeographical regions. This method can be applied to generate objective, data-derived identification and delimitation of the world's biogeographical regions.

  11. Electronic-projecting Moire method applying CBR-technology

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions for forming Moire fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. Subsystem execution includes CBR technology, based on applying a case base. An approach that analyses and forms a decision for each separate local area, with subsequent formation of a common topology map, is applied.

  12. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has significant area improvement compared with previous designs.
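
    The core idea of extracting common factors from XOR sum-of-product expressions can be illustrated with a toy greedy pass; this is a generic CSE illustration under simplified assumptions, not the authors' algorithm.

    ```python
    from collections import Counter
    from itertools import combinations

    def greedy_cse(exprs):
        """One greedy pass of common-subexpression elimination for XOR outputs:
        find the variable pair shared by the most expressions and factor it
        into a new intermediate signal. Toy illustration only."""
        pair_counts = Counter()
        for terms in exprs.values():
            for pair in combinations(sorted(terms), 2):
                pair_counts[pair] += 1
        (a, b), count = pair_counts.most_common(1)[0]
        if count < 2:
            return exprs, None                     # nothing worth sharing
        new = f"t_{a}_{b}"                         # new intermediate signal a ^ b
        rewritten = {}
        for name, terms in exprs.items():
            if a in terms and b in terms:
                rewritten[name] = (terms - {a, b}) | {new}
            else:
                rewritten[name] = set(terms)
        return rewritten, (new, {a, b})

    # Two XOR outputs sharing the subexpression x0 ^ x1.
    outputs = {"y0": {"x0", "x1", "x2"}, "y1": {"x0", "x1", "x3"}}
    print(greedy_cse(outputs))
    ```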

  13. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which is settled between methods for the convex feasibility problem and convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.

  14. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  15. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhaoyuan Liu; Kord Smith; Benoit Forget

    2016-05-01

A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite medium hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.

  16. Performance Characterization of an Instrument.

    ERIC Educational Resources Information Center

    Salin, Eric D.

    1984-01-01

    Describes an experiment designed to teach students to apply the same statistical awareness to instrumentation they commonly apply to classical techniques. Uses propagation of error techniques to pinpoint instrumental limitations and breakdowns and to demonstrate capabilities and limitations of volumetric and gravimetric methods. Provides lists of…

  17. Common cause evaluations in applied risk analysis of nuclear power plants. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, T.; Ligon, D.; Stamatelatos, M.

    1983-04-01

Qualitative and quantitative approaches were developed for the evaluation of common cause failures (CCFs) in nuclear power plants and were applied to the analysis of the auxiliary feedwater systems of several pressurized water reactors (PWRs). Key CCF variables were identified through a survey of experts in the field and a review of failure experience in operating PWRs. These variables were classified into categories of high, medium, and low defense against a CCF. Based on the results, a checklist was developed for analyzing CCFs of systems. Several known techniques for quantifying CCFs were also reviewed. The information provided valuable insights in the development of a new model for estimating CCF probabilities, which is an extension of and improvement over the Beta Factor method. As applied to the analysis of the PWR auxiliary feedwater systems, the method yielded much more realistic values than the original Beta Factor method for a one-out-of-three system.

  18. DISCO-SCA and Properly Applied GSVD as Swinging Methods to Find Common and Distinctive Processes

    PubMed Central

    Van Deun, Katrijn; Van Mechelen, Iven; Thorrez, Lieven; Schouteden, Martijn; De Moor, Bart; van der Werf, Mariët J.; De Lathauwer, Lieven; Smilde, Age K.; Kiers, Henk A. L.

    2012-01-01

    Background In systems biology it is common to obtain for the same set of biological entities information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. A major challenge is to find the important biological processes underlying the data and to disentangle therein processes common to all data sources and processes distinctive for a specific source. Recently, two promising simultaneous data integration methods have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). Results Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by applying Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by one of the two platforms only; therefore, platforms for microbial metabolomics should be tailored to the biological question. Conclusions Both DISCO-SCA and properly applied GSVD are promising integrative methods for finding common and distinctive processes in multisource data. Open source code for both methods is provided. PMID:22693578

  19. A Teacher's Guide to Memory Techniques.

    ERIC Educational Resources Information Center

    Hodges, Daniel L.

    1982-01-01

    To aid instructors in teaching their students to use effective methods of memorization, this article outlines major memory methods, provides examples of their use, evaluates the methods, and discusses ways students can be taught to apply them. First, common, but less effective, memory methods are presented, including reading and re-reading…

  20. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines.

    PubMed

    Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea

    2017-07-18

The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, the use of these methods within trials is often not reported as an adaptive design, making it more difficult to capture their emerging use. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or have to be inferred, and whether these methods are applied prospectively or concurrently. Three databases (Embase, Ovid and PubMed) were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified, of which 54 were eligible after screening. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some sort of stopping criterion. Only two papers explicitly stated the term 'adaptive design'; for most of the papers, it had to be inferred that adaptive methods were applied. Sixty-five applications of adaptive design methods were identified, of which the most common was an adaptation using group sequential methods. This review indicates that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and provide an essential aid to those involved with clinical trials.

  1. A Comparison of Various MRA Methods Applied to Longitudinal Evaluation Studies in Vocational Education.

    ERIC Educational Resources Information Center

    Kapes, Jerome T.; And Others

Three models of multiple regression analysis (MRA), namely single equation, commonality analysis, and path analysis, were applied to longitudinal data from the Pennsylvania Vocational Development Study. Variables influencing weekly income of vocational education students one year after high school graduation were examined: grade point averages (grades…

  2. Application of augmented reality for inferior alveolar nerve block anesthesia: A technical note

    PubMed Central

    2017-01-01

    Efforts to apply augmented reality (AR) technology in the medical field include the introduction of AR techniques into dental practice. The present report introduces a simple method of applying AR during an inferior alveolar nerve block, a procedure commonly performed in dental clinics. PMID:28879340

  3. Application of augmented reality for inferior alveolar nerve block anesthesia: A technical note.

    PubMed

    Won, Yu-Jin; Kang, Sang-Hoon

    2017-06-01

    Efforts to apply augmented reality (AR) technology in the medical field include the introduction of AR techniques into dental practice. The present report introduces a simple method of applying AR during an inferior alveolar nerve block, a procedure commonly performed in dental clinics.

  4. Non-invasive imaging methods applied to neo- and paleontological cephalopod research

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2013-11-01

Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of the time necessary to acquire the data, amount of data, accuracy/resolution, minimum-maximum size of objects that can be studied, the degree of post-processing needed and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells in order to improve our understanding of diversity and disparity, functional morphology and biology of extinct and extant cephalopods.

  5. Objective comparison of particle tracking methods.

    PubMed

    Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R; Godinez, William J; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P; Dan, Han-Wei; Tsai, Yuh-Show; Ortiz de Solórzano, Carlos; Olivo-Marin, Jean-Christophe; Meijering, Erik

    2014-03-01

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.

  6. From picture to porosity of river bed material using Structure-from-Motion with Multi-View-Stereo

    NASA Astrophysics Data System (ADS)

    Seitz, Lydia; Haas, Christian; Noack, Markus; Wieprecht, Silke

    2018-04-01

    Common methods for in-situ determination of porosity of river bed material are time- and effort-consuming. Although mathematical predictors can be used for estimation, they do not adequately represent porosities. The objective of this study was to assess a new approach for the determination of porosity of frozen sediment samples. The method is based on volume determination by applying Structure-from-Motion with Multi View Stereo (SfM-MVS) to estimate a 3D volumetric model based on overlapping imagery. The method was applied on artificial sediment mixtures as well as field samples. In addition, the commonly used water replacement method was applied to determine porosities in comparison with the SfM-MVS method. We examined a range of porosities from 0.16 to 0.46 that are representative of the wide range of porosities found in rivers. SfM-MVS performed well in determining volumes of the sediment samples. A very good correlation (r = 0.998, p < 0.0001) was observed between the SfM-MVS and the water replacement method. Results further show that the water replacement method underestimated total sample volumes. A comparison with several mathematical predictors showed that for non-uniform samples the calculated porosity based on the standard deviation performed better than porosities based on the median grain size. None of the predictors were effective at estimating the porosity of the field samples.
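
    Once the SfM-MVS (or water replacement) volume of a sample is known, porosity follows from simple arithmetic: porosity = 1 − V_solids / V_total, with the solid volume inferred from dry mass and an assumed particle density. A hypothetical sketch follows; the 2.65 g/cm³ quartz density and the sample numbers are assumptions, not values from the study.

    ```python
    def porosity_from_volume(total_volume_cm3, dry_mass_g, particle_density_g_cm3=2.65):
        """Porosity = 1 - V_solids / V_total.

        V_total would come from the SfM-MVS (or water replacement) volume of the
        frozen sample; V_solids is inferred from dry mass and an assumed particle
        density (2.65 g/cm^3 is a common value for quartz-dominated sediment)."""
        solid_volume = dry_mass_g / particle_density_g_cm3
        return 1.0 - solid_volume / total_volume_cm3

    # Hypothetical sample: 850 cm^3 scanned volume, 1.6 kg of dry sediment.
    print(f"porosity = {porosity_from_volume(850.0, 1600.0):.2f}")
    ```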

  7. Genotyping the factor VIII intron 22 inversion locus using fluorescent in situ hybridization.

    PubMed

    Sheen, Campbell R; McDonald, Margaret A; George, Peter M; Smith, Mark P; Morris, Christine M

    2011-02-15

    The factor VIII intron 22 inversion is the most common cause of hemophilia A, accounting for approximately 40% of all severe cases of the disease. Southern hybridization and multiplex long distance PCR are the most commonly used techniques to detect the inversion in a diagnostic setting, although both have significant limitations. Here we describe our experience establishing a multicolor fluorescent in situ hybridization (FISH) based assay as an alternative to existing methods for genetic diagnosis of the inversion. Our assay was designed to apply three differentially labelled BAC DNA probes that when hybridized to interphase nuclei would exhibit signal patterns that are consistent with the normal or the inversion locus. When the FISH assay was applied to five normal and five inversion male samples, the correct genotype was assignable with p<0.001 for all samples. When applied to carrier female samples the assay could not assign a genotype to all female samples, probably due to a lower proportion of informative nuclei in female samples caused by the added complexity of a second X chromosome. Despite this complication, these pilot findings show that the assay performs favourably compared to the commonly used methods. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Degradation and adsorption of carbonated dimethyl disulfide in soils with grape production in california

    USDA-ARS?s Scientific Manuscript database

    The common method to apply pre-plant soil fumigants is through pressurizing the pesticides with compressed nitrogen gas. However, it is believed that fumigants with relatively low vapor pressure, such as dimethyl disulfide or DMDS, can be better dispersed in soil when applied using CO2 gas. A labor...

  9. A review of population data utilization in beef cattle research.

    PubMed

    Jones, R; Langemeier, M

    2010-04-01

    Controlled experimentation has been the most common source of research data in most biological sciences. However, many research questions lend themselves to the use of population data, or combinations of population data and data resulting from controlled experimentation. Studies of important economic outcomes, such as efficiency, profits, and costs, lend themselves particularly well to this type of analysis. Analytical methods that have been most commonly applied to population data in studies related to livestock production and management include statistical regression and mathematical programming. In social sciences, such as applied economics, it has become common to utilize more than one method in the same study to provide answers to the various questions at hand. Of course, care must be taken to ensure that the methods of analysis are appropriately applied; however, a wide variety of beef industry research questions are being addressed using population data. Issues related to data sources, aggregation levels, and consistency of collection often surface when using population data. These issues are addressed by careful consideration of the questions being addressed and the costs of data collection. Previous research across a variety of cattle production and marketing issues provides a broad foundation upon which to build future research. There is tremendous opportunity for increased use of population data and increased collaboration across disciplines to address issues of importance to the cattle industry.

  10. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?

    PubMed

    Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P

    2017-04-20

    The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means to evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. Increased use of the methods in social epidemiology will help to build an evidence base to support policy making in the area of health inequalities.
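
    The interaction-term approach described above can be sketched with a difference-in-differences regression on invented data, in the spirit of the paper's fictitious numerical example: the three-way interaction between policy exposure, period, and socioeconomic position quantifies the policy's effect on the health gap. Variable names and effect sizes below are assumptions made for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Fictitious data: a difference-in-differences set-up with a binary
    # socioeconomic position (SEP) indicator.
    rng = np.random.default_rng(3)
    n = 4000
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),   # exposed to the policy
        "post": rng.integers(0, 2, n),      # after policy introduction
        "low_sep": rng.integers(0, 2, n),   # low socioeconomic position
    })
    effect = 1.0 + 0.8 * df["low_sep"]      # policy helps the low-SEP group more
    df["health"] = (
        50 - 3 * df["low_sep"] + effect * df["treated"] * df["post"]
        + rng.normal(0, 2, n)
    )

    # Difference-in-differences with a three-way interaction: the coefficient on
    # treated:post:low_sep estimates the policy's effect on the SEP gap in health.
    model = smf.ols("health ~ treated * post * low_sep", data=df).fit()
    print(model.params[["treated:post", "treated:post:low_sep"]])
    ```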

  11. Language Practitioners' Reflections on Method-Based and Post-Method Pedagogies

    ERIC Educational Resources Information Center

    Soomro, Abdul Fattah; Almalki, Mansoor S.

    2017-01-01

    Method-based pedagogies are commonly applied in teaching English as a foreign language all over the world. However, in the last quarter of the 20th century, the concept of such pedagogies based on the application of a single best method in EFL started to be viewed with concerns by some scholars. In response to the growing concern against the…

  12. Simultaneous Synthesis of Treatment Effects and Mapping to a Common Scale: An Alternative to Standardisation

    ERIC Educational Resources Information Center

    Ades, A. E.; Lu, Guobing; Dias, Sofia; Mayo-Wilson, Evan; Kounali, Daphne

    2015-01-01

Objective: Trials often may report several similar outcomes measured on different test instruments. We explored a method for synthesising treatment effect information both within and between trials and for reporting treatment effects on a common scale as an alternative to standardisation. Study design: We applied a procedure that simultaneously…

  13. Objective comparison of particle tracking methods

    PubMed Central

    Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F.; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R.; Godinez, William J.; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L.; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P.; Dan, Han-Wei; Tsai, Yuh-Show; de Solórzano, Carlos Ortiz; Olivo-Marin, Jean-Christophe; Meijering, Erik

    2014-01-01

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Since manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized, for the first time, an open competition, in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to important practical conclusions for users and developers. PMID:24441936

  14. The Use of Object-Oriented Analysis Methods in Surety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.

    1999-05-01

Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.

  15. EVALUATION OF IODINE BASED IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF HG USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources; however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...

  16. Midstory hardwood species respond differently to chainsaw girdle method and herbicide treatment

    Treesearch

    Ronald A. Rathfon; Michael R. Saunders

    2013-01-01

    Foresters in the Central Hardwoods Region commonly fell or girdle interfering trees and apply herbicide to the cut surface when performing intermediate silvicultural treatments. The objective of this study was to compare the use of single and double chainsaw girdle methods in combination with a herbicide treatment and, within the double girdle method, compare herbicide...

  17. Information Fusion - Methods and Aggregation Operators

    NASA Astrophysics Data System (ADS)

    Torra, Vicenç

Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we will give an overview of such applications considering their three main uses. That is, we consider fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e. particular fusion methods) and their properties are briefly described as well.
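
    As a concrete example of a particular aggregation operator, the Ordered Weighted Averaging (OWA) operator sorts the inputs and applies a fixed weight vector to the sorted values; different weight vectors recover the mean, the maximum, or the minimum. A minimal sketch follows; the example scores are arbitrary.

    ```python
    import numpy as np

    def owa(values, weights):
        """Ordered Weighted Averaging (OWA) operator: sort the inputs in
        descending order, then take a weighted sum with the given weights."""
        values = np.sort(np.asarray(values, float))[::-1]
        weights = np.asarray(weights, float)
        assert len(weights) == len(values) and np.isclose(weights.sum(), 1.0)
        return float(values @ weights)

    scores = [0.9, 0.4, 0.7, 0.2]
    print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # arithmetic mean
    print(owa(scores, [1.0, 0.0, 0.0, 0.0]))      # maximum
    print(owa(scores, [0.0, 0.0, 0.0, 1.0]))      # minimum
    ```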

  18. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
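
    The characteristic-function idea can be sketched by comparing empirical characteristic functions on a grid of t values; the simple distance below is an illustrative statistic under assumed toy data, not the authors' estimator or test.

    ```python
    import numpy as np

    def empirical_cf(x, t):
        """Empirical characteristic function of sample x evaluated on a grid t."""
        x = np.asarray(x, float)
        return np.exp(1j * np.outer(t, x)).mean(axis=1)

    def cf_distance(x, y, t):
        """Mean squared distance between two empirical characteristic functions
        over the grid t (illustrative comparison statistic only)."""
        return float((np.abs(empirical_cf(x, t) - empirical_cf(y, t)) ** 2).mean())

    rng = np.random.default_rng(4)
    t = np.linspace(-5, 5, 201)
    gaussian = rng.standard_normal(2000)
    heavy = rng.standard_t(df=2, size=2000)     # heavier tails than Gaussian
    print(cf_distance(gaussian, rng.standard_normal(2000), t))  # small
    print(cf_distance(gaussian, heavy, t))                      # larger
    ```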

  19. Adapting and applying common methods used in pharmacovigilance to the environment: A possible starting point for the implementation  of eco-pharmacovigilance.

    PubMed

    Wang, Jun; Zhang, Mengya; Li, Shulan; He, Bingshu

    2018-07-01

The occurrence of pharmaceuticals in the natural environment has been frequently reported around the world. As biologically active compounds specifically designed to be effective even at very low concentrations, pharmaceuticals in the environment could have adverse impacts on the health of human beings or other non-target organisms through long-term exposure. To minimize pharmaceutical pollution from the perspective of drug administration, a new concept called eco-pharmacovigilance (EPV) has been proposed as a kind of pharmacovigilance (PV) for the environment. However, as a new and comprehensive science, EPV does not yet have sophisticated methods in practice or a formalized implementation model. Since EPV is a special kind of PV, it is feasible to draw on the experience of PV as a possible and reasonable starting point for EPV. In this paper, we discuss the common methods and activities used in PV, including spontaneous reporting, intensive monitoring and database studies, and their potential applicability to the environment. We conclude that these common methods in PV can be adapted and applied to EPV, but there is still a need for organizational, technical and financial support of the EPV system. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. [Acoustic detection of absorption of millimeter-band electromagnetic waves in biological objects].

    PubMed

    Polnikov, I G; Putvinskiĭ, A V

    1988-01-01

Principles of photoacoustic spectroscopy were applied to develop a new method for monitoring the absorption of millimeter-band electromagnetic waves in biological objects. The method was used in investigations of the frequency dependence of millimeter-wave power absorption in vitro and in vivo in commonly used experimental irradiation systems.

  1. NMR analysis of biodiesel

    USDA-ARS?s Scientific Manuscript database

    Biodiesel is usually analyzed by the various methods called for in standards such as ASTM D6751 and EN 14214. Nuclear magnetic resonance (NMR) is not one of these methods. However, NMR, with 1H-NMR commonly applied, can be useful in a variety of applications related to biodiesel. These include monit...

  2. Applying high-resolution melting (HRM) technology to identify five commonly used Artemisia species.

    PubMed

    Song, Ming; Li, Jingjian; Xiong, Chao; Liu, Hexia; Liang, Junsong

    2016-10-04

Many members of the genus Artemisia are important for medicinal purposes with multiple pharmacological properties. Often, these herbal plants are sold on the markets in processed forms, so they are difficult to authenticate. Routine testing and identification of these herbal materials should be performed to ensure that the raw materials used in pharmaceutical products are suitable for their intended use. In this study, five commonly used Artemisia species, Artemisia argyi, Artemisia annua, Artemisia lavandulaefolia, Artemisia indica, and Artemisia atrovirens, were analyzed using high resolution melting (HRM) analysis based on the internal transcribed spacer 2 (ITS2) sequences. The melting profiles of the ITS2 amplicons of the five closely related herbal species are clearly separated, so they can be differentiated by the HRM method. The method was further applied to authenticate commercial products in powdered form. HRM curves of all the commercial samples tested are similar to the botanical species as labeled. These congeneric medicinal products were also clearly separated using the neighbor-joining (NJ) tree. Therefore, the HRM method could provide an efficient and reliable authentication system to distinguish these commonly used Artemisia herbal products on the markets and offer a technical reference for quality control of medicines in the drug supply chain.

  3. Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2014-05-01

    Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum/maximum size of objects that can be studied, the degree of post-processing needed and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular we present a method for precise buoyancy calculation. Therefore, cephalopod shells were scanned together with different reference bodies, an approach developed in medical sciences. It is necessary to know the volume of the reference bodies, which should have similar absorption properties like the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object different computed tomography techniques were applied.

  4. Chemical Differentiation of Dendrobium officinale and Dendrobium devonianum by Using HPLC Fingerprints, HPLC-ESI-MS, and HPTLC Analyses

    PubMed Central

    Ye, Zi; Dai, Jia-Rong; Zhang, Cheng-Gang; Lu, Ye; Wu, Lei-Lei; Gong, Amy G. W.; Wang, Zheng-Tao

    2017-01-01

The stems of Dendrobium officinale Kimura et Migo (Dendrobii Officinalis Caulis) have a high medicinal value as a traditional Chinese medicine (TCM). Because of the limited supply, D. officinale is a high-priced TCM, and therefore adulterants are commonly found in the herbal market. The dried stems of a closely related Dendrobium species, Dendrobium devonianum Paxt., are commonly used as the substitute; however, there is no effective method to distinguish the two Dendrobium species. Here, a high performance liquid chromatography (HPLC) method was successfully developed and applied to differentiate D. officinale and D. devonianum by comparing the chromatograms according to the characteristic peaks. An HPLC method coupled with electrospray ionization multistage mass spectrometry (HPLC-ESI-MS) was further applied for structural elucidation of 15 flavonoids, 5 phenolic acids, and 1 lignan in D. officinale. Among these flavonoids, 4 flavonoid C-glycosides were reported for the first time in D. officinale, and violanthin and isoviolanthin were identified to be specific for D. officinale compared with D. devonianum. Then, two representative components were used as chemical markers. A rapid and reliable high performance thin layer chromatography (HPTLC) method was applied in distinguishing D. officinale from D. devonianum. The results of this work have demonstrated that these developed analytical methods can be used to discriminate D. officinale and D. devonianum effectively and conveniently. PMID:28769988

  5. A Comparison of Methods of Vertical Equating.

    ERIC Educational Resources Information Center

    Loyd, Brenda H.; Hoover, H. D.

    Rasch model vertical equating procedures were applied to three mathematics computation tests for grades six, seven, and eight. Each level of the test was composed of 45 items in three sets of 15 items, arranged in such a way that tests for adjacent grades had two sets (30 items) in common, and the sixth and eighth grades had 15 items in common. In…

  6. A Simple and Useful Method to Apply Exogenous NO Gas to Plant Systems: Bell Pepper Fruits as a Model.

    PubMed

    Palma, José M; Ruiz, Carmelo; Corpas, Francisco J

    2018-01-01

Nitric oxide (NO) is involved in many physiological plant processes, including germination, growth and development of roots, flower setting and development, senescence, and fruit ripening. In the latter process, NO has been reported to play a role opposite to that of ethylene. Thus, treatment of fruits with NO may delay ripening, independently of whether they are climacteric or nonclimacteric. Different methods have been reported for applying NO to plant systems, involving sodium nitroprusside, NONOates, DETANO, or GSNO, to investigate the physiological and molecular consequences. In this chapter a method to treat plant materials with NO is provided, using bell pepper fruits as a model. This method is cheap, free of side effects, and easy to apply, since it only requires common chemicals and tools available in any biology laboratory.

  7. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
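
    The series-of-one-dimensional-transformations idea can be sketched with plain linear interpolation applied one axis at a time. This is a conceptual illustration, not the ARM implementation; all grids and field values below are invented.

    ```python
    import numpy as np

    def regrid_1d(values, src_coord, dst_coord, axis):
        """Linearly interpolate `values` from src_coord to dst_coord along one axis.
        Applying this repeatedly, one axis at a time, merges data onto a common
        grid without a full multidimensional interpolator."""
        values = np.moveaxis(values, axis, -1)
        out = np.apply_along_axis(lambda v: np.interp(dst_coord, src_coord, v), -1, values)
        return np.moveaxis(out, -1, axis)

    # Example: a time x height field measured on its own grid, consolidated onto
    # a common (coarser) time grid and then a common height grid.
    src_time, src_height = np.linspace(0, 10, 61), np.linspace(0, 2000, 41)
    field = np.sin(src_time)[:, None] * np.exp(-src_height / 800.0)[None, :]

    common_time, common_height = np.linspace(0, 10, 21), np.linspace(0, 2000, 11)
    step1 = regrid_1d(field, src_time, common_time, axis=0)
    step2 = regrid_1d(step1, src_height, common_height, axis=1)
    print(step2.shape)   # (21, 11)
    ```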

  8. The Effect of Schooling and Ability on Achievement Test Scores. NBER Working Paper Series.

    ERIC Educational Resources Information Center

    Hansen, Karsten; Heckman, James J.; Mullen, Kathleen J.

    This study developed two methods for estimating the effect of schooling on achievement test scores that control for the endogeneity of schooling by postulating that both schooling and test scores are generated by a common unobserved latent ability. The methods were applied to data on schooling and test scores. Estimates from the two methods are in…

  9. Non-Adiabatic Molecular Dynamics Methods for Materials Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Parker, Shane M.; Muuronen, Mikko J.

    2017-04-04

The flow of radiative energy in light-driven materials such as photosensitizer dyes or photocatalysts is governed by non-adiabatic transitions between electronic states and cannot be described within the Born-Oppenheimer approximation commonly used in electronic structure theory. The non-adiabatic molecular dynamics (NAMD) methods based on Tully surface hopping and time-dependent density functional theory developed in this project have greatly extended the range of molecular materials that can be tackled by NAMD simulations. New algorithms to compute molecular excited state and response properties efficiently were developed. Fundamental limitations of common non-linear response methods were discovered and characterized. Methods for accurate computations of vibronic spectra of materials such as black absorbers were developed and applied. It was shown that open-shell TDDFT methods capture bond breaking in NAMD simulations, a longstanding challenge for single-reference molecular dynamics simulations. The methods developed in this project were applied to study the photodissociation of acetaldehyde and revealed that non-adiabatic effects are experimentally observable in fragment kinetic energy distributions. Finally, the project enabled the first detailed NAMD simulations of photocatalytic water oxidation by titania nanoclusters, uncovering the mechanism of this fundamentally important reaction for fuel generation and storage.

  10. Process safety improvement--quality and target zero.

    PubMed

    Van Scyoc, Karl

    2008-11-15

Process safety practitioners have adopted quality management principles in the design of process safety management systems with positive effect, yet achieving safety objectives sometimes remains a distant target. Companies regularly apply tools and methods which have roots in quality and productivity improvement. The "plan, do, check, act" improvement loop, statistical analysis of incidents (non-conformities), and performance trending popularized by Dr. Deming are now commonly used in the context of process safety. Significant advancements in HSE performance are reported after applying methods viewed as fundamental for quality management. In pursuit of continual process safety improvement, the paper examines various quality improvement methods, and explores how methods intended for product quality can be additionally applied to continual improvement of process safety. Methods such as Kaizen, Poka-yoke, and TRIZ, while long established for quality improvement, are quite unfamiliar in the process safety arena. These methods are discussed for application in improving both process safety leadership and field work team performance. Practical ways to advance process safety, based on the methods, are given.

  11. MDD diagnosis based on partial-brain functional connection network

    NASA Astrophysics Data System (ADS)

    Yan, Gaoliang; Hu, Hailong; Zhao, Xiang; Zhang, Lin; Qu, Zehui; Li, Yantao

    2018-04-01

Artificial intelligence (AI) is currently a hotspot in computer science research, and applying AI technology across industries has become a major direction for researchers. Major depressive disorder (MDD) is a common and serious mental disorder. The World Health Organization (WHO) reports that MDD is projected to become the second most common cause of death and disability by 2020. At present, the means of diagnosing MDD are limited. Applying AI technology to MDD diagnosis and pathophysiological research will accelerate MDD research and improve the efficiency of diagnosis. In this study, we select the brain network functional connections with higher degree using statistical methods. Our experiments show that the average accuracy of a Logistic Regression (LR) classifier using this feature filtering reaches 88.48%. Compared with other classification methods, both the efficiency and accuracy of this method are improved, which can substantially improve the process of MDD diagnosis. In these experiments, we also identify the brain regions associated with MDD, which plays a vital role in MDD pathophysiological research.
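
    The feature-filtering-plus-logistic-regression pipeline described above can be sketched with scikit-learn on synthetic stand-in data. The connectivity features, labels, and the univariate F-test filter below are assumptions made for illustration; the reported 88.48% accuracy comes from the authors' real data, not this toy example.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for functional-connectivity features: 100 subjects,
    # 500 pairwise connection strengths, a handful of which carry signal.
    rng = np.random.default_rng(5)
    X = rng.standard_normal((100, 500))
    y = rng.integers(0, 2, 100)            # 0 = control, 1 = MDD (labels are synthetic)
    X[:, :10] += y[:, None] * 0.8          # inject group differences into 10 features

    # Statistical filtering of connections (univariate F-test) followed by
    # logistic regression, evaluated with cross-validation.
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=20),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean CV accuracy: {scores.mean():.2f}")
    ```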

  12. Measuring missing heritability: Inferring the contribution of common variants

    PubMed Central

    Golan, David; Lander, Eric S.; Rosset, Saharon

    2014-01-01

    Genome-wide association studies (GWASs), also called common variant association studies (CVASs), have uncovered thousands of genetic variants associated with hundreds of diseases. However, the variants that reach statistical significance typically explain only a small fraction of the heritability. One explanation for the “missing heritability” is that there are many additional disease-associated common variants whose effects are too small to detect with current sample sizes. It therefore is useful to have methods to quantify the heritability due to common variation, without having to identify all causal variants. Recent studies applied restricted maximum likelihood (REML) estimation to case–control studies for diseases. Here, we show that REML considerably underestimates the fraction of heritability due to common variation in this setting. The degree of underestimation increases with the rarity of disease, the heritability of the disease, and the size of the sample. Instead, we develop a general framework for heritability estimation, called phenotype correlation–genotype correlation (PCGC) regression, which generalizes the well-known Haseman–Elston regression method. We show that PCGC regression yields unbiased estimates. Applying PCGC regression to six diseases, we estimate the proportion of the phenotypic variance due to common variants to range from 25% to 56% and the proportion of heritability due to common variants from 41% to 68% (mean 60%). These results suggest that common variants may explain at least half the heritability for many diseases. PCGC regression also is readily applicable to other settings, including analyzing extreme-phenotype studies and adjusting for covariates such as sex, age, and population structure. PMID:25422463
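
    The flavor of this family of estimators can be sketched with classical Haseman-Elston cross-product regression: regress the product of standardized phenotypes for each pair of individuals on their genetic relatedness and read the slope as the variance explained by the genotyped variants. PCGC regression generalizes this to ascertained case-control data; that correction is not shown here, and all simulated quantities below are illustrative.

    ```python
    import numpy as np

    def he_regression(pheno, grm):
        """Classical Haseman-Elston cross-product regression: the slope of
        z_i * z_j on the genetic relatedness G_ij estimates the variance
        explained by the genotyped variants (quantitative trait, no ascertainment)."""
        z = (pheno - pheno.mean()) / pheno.std()
        iu = np.triu_indices(len(z), k=1)          # all pairs i < j
        products = np.outer(z, z)[iu]
        relatedness = grm[iu]
        return np.polyfit(relatedness, products, 1)[0]

    # Toy simulation: genotypes, a genetic relationship matrix, and a phenotype
    # with heritability h2 = 0.5 due to the genotyped (common) variants.
    rng = np.random.default_rng(7)
    n, m, h2 = 500, 1000, 0.5
    geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
    geno = (geno - geno.mean(0)) / geno.std(0)     # standardize each variant
    grm = geno @ geno.T / m
    g = geno @ rng.normal(0, np.sqrt(h2 / m), m)
    pheno = g + rng.normal(0, np.sqrt(1 - h2), n)
    print(f"estimated h2 ≈ {he_regression(pheno, grm):.2f}")
    ```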

  13. Constructing probability boxes and Dempster-Shafer structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Kreinovich, Vladik; Grinzburg, Lev

    This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
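
    As a small illustration of the objects involved, the sketch below builds the bounding cumulative distribution functions of a probability box from a finite Dempster-Shafer structure whose focal elements are intervals with masses; the intervals and masses are made-up assumptions, and none of the report's aggregation methods are reproduced.

    # Sketch: lower and upper CDF bounds of a probability box obtained from a
    # Dempster-Shafer structure of interval focal elements with basic masses.
    intervals = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0)]   # focal elements (assumed)
    masses    = [0.5, 0.3, 0.2]                        # basic mass assignment (assumed)

    def pbox_bounds(x):
        """Return (lower CDF, upper CDF) of the p-box at x."""
        upper = sum(m for (lo, _), m in zip(intervals, masses) if lo <= x)
        lower = sum(m for (_, hi), m in zip(intervals, masses) if hi <= x)
        return lower, upper

    for x in (0.5, 1.5, 2.75, 4.0):
        lo, up = pbox_bounds(x)
        print(f"x = {x}:  {lo:.2f} <= F(x) <= {up:.2f}")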

  14. VALIDATION OF MICROSATELLITE MARKERS FOR USE IN GENOTYPING POLYCLONAL PLASMODIUM FALCIPARUM INFECTIONS

    PubMed Central

    GREENHOUSE, BRYAN; MYRICK, ALISSA; DOKOMAJILAR, CHRISTIAN; WOO, JONATHAN M.; CARLSON, ELAINE J.; ROSENTHAL, PHILIP J.; DORSEY, GRANT

    2006-01-01

    Genotyping methods for Plasmodium falciparum drug efficacy trials have not been standardized and may fail to accurately distinguish recrudescence from new infection, especially in high transmission areas where polyclonal infections are common. We developed a simple method for genotyping using previously identified microsatellites and capillary electrophoresis, validated this method using mixtures of laboratory clones, and applied the method to field samples. Two microsatellite markers produced accurate results for single-clone but not polyclonal samples. Four other microsatellite markers were as sensitive as, and more specific than, commonly used genotyping techniques based on merozoite surface proteins 1 and 2. When applied to samples from 15 patients in Burkina Faso with recurrent parasitemia after treatment with sulphadoxine-pyrimethamine, the addition of these four microsatellite markers to msp1 and msp2 genotyping resulted in a reclassification of outcomes that strengthened the association between dhfr 59R, an anti-folate resistance mutation, and recrudescence (P = 0.31 versus P = 0.03). Four microsatellite markers performed well on polyclonal samples and may provide a valuable addition to genotyping for clinical drug efficacy studies in high transmission areas. PMID:17123974

  15. Force Enhancement Packages for Countering Nuclear Threats in the 2022-2027 Time Frame

    DTIC Science & Technology

    2015-09-01

    characterization methods. • Apply proper radioisotope identification techniques. c. A one-week CNT operations exercise at Fort Belvoir, Virginia. Team members...on experiments to seek better methods, holding active teaching until later. The team expects that better methods would involve collection using...conduct more effective wide-area searches than those commonly employed by civil law enforcement agencies. The IDA team suggests that better methods

  16. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    PubMed

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
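
    A rough sketch of the ingredients: the partial F statistic expressed through the coefficients of determination of nested models, with R² naively averaged across completed (imputed) data sets. The simulated data and the simple averaging are assumptions for illustration only; the paper's actual pooling rule and reference distribution are not reproduced here.

    # Sketch: a partial F statistic computed from R^2 of nested linear models,
    # with R^2 naively averaged over multiply imputed (here: simulated) data sets.
    import numpy as np

    def r2(X, y):
        """R^2 of an ordinary least squares fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(2)
    n, m = 200, 5                                  # sample size, number of imputations
    r2_full, r2_reduced = [], []
    for _ in range(m):                             # stand-ins for m completed data sets
        X = rng.normal(size=(n, 3))
        y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
        r2_full.append(r2(X, y))
        r2_reduced.append(r2(X[:, :1], y))         # drop predictors 2 and 3 (the tested set)

    rf, rr, q, p = np.mean(r2_full), np.mean(r2_reduced), 2, 3
    F = ((rf - rr) / q) / ((1.0 - rf) / (n - p - 1))
    print(f"partial F for the 2 tested predictors: {F:.2f}")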

  17. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
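
    To make the scaling-factor idea concrete, the sketch below shows a univariate random-effects meta-analysis in which the conventional standard error of the pooled effect is multiplied by a data-driven scale and inference uses a t distribution. This is a Hartung-Knapp-type adjustment given for illustration, not necessarily the authors' exact refinement, and the effect estimates are made up.

    # Sketch: random-effects meta-analysis with a rescaled standard error for
    # the pooled effect and a t-based confidence interval.
    import numpy as np
    from scipy import stats

    y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect estimates (made up)
    v = np.array([0.04, 0.03, 0.06, 0.05, 0.02])   # within-study variances (made up)
    k = len(y)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    w_fixed = 1.0 / v
    q = np.sum(w_fixed * (y - np.sum(w_fixed * y) / w_fixed.sum()) ** 2)
    tau2 = max(0.0, (q - (k - 1)) / (w_fixed.sum() - np.sum(w_fixed**2) / w_fixed.sum()))

    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / w.sum()

    # Scaling factor applied to the conventional standard error, then a t-based CI.
    se_conventional = np.sqrt(1.0 / w.sum())
    scale = np.sqrt(np.sum(w * (y - mu) ** 2) / (k - 1))
    se = se_conventional * scale
    ci = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se
    print(f"pooled effect {mu:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")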

  18. REGIONAL RESEARCH, METHODS, AND SUPPORT

    EPA Science Inventory

    The Human Exposure and Atmospheric Sciences Division (HEASD) has several collaborations with regional partners through the Regional Science Program (RSP) managed by ORD's Office of Science Policy (OSP). These projects resulted from common interests outlined in the Regional Appli...

  19. Medulloblastoma | Office of Cancer Genomics

    Cancer.gov

    The Medulloblastoma Project was developed to apply newly emerging genomic methods towards the discovery of novel genetic alterations in medulloblastoma (MB). MB is the most common malignant brain tumor in children, accounting for approximately 20% of all pediatric brain tumors.

  20. Detection of genetically modified organisms (GMOs) using isothermal amplification of target DNA sequences.

    PubMed

    Lee, David; La Mura, Maurizio; Allnutt, Theo R; Powell, Wayne

    2009-02-02

    The most common method of GMO detection is based upon the amplification of GMO-specific DNA amplicons using the polymerase chain reaction (PCR). Here we have applied the loop-mediated isothermal amplification (LAMP) method to amplify GMO-related DNA sequences, 'internal' commonly-used motifs for controlling transgene expression and event-specific (plant-transgene) junctions. We have tested the specificity and sensitivity of the technique for use in GMO studies. Results show that detection of 0.01% GMO in equivalent background DNA was possible and dilutions of template suggest that detection from single copies of the template may be possible using LAMP. This work shows that GMO detection can be carried out using LAMP for routine screening as well as for specific events detection. Moreover, the sensitivity and ability to amplify targets, even with a high background of DNA, here demonstrated, highlights the advantages of this isothermal amplification when applied for GMO detection.

  1. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % up to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences of single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
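
    For orientation, a minimal sketch comparing the three clustering approaches on simulated data with elongated clusters; the simulated structure, sample sizes and the adjusted Rand index scoring are illustrative assumptions, not the study's simulation design.

    # Sketch: Gaussian mixture model clustering versus k-means and Ward's method
    # on three simulated, elongated clusters of equal size.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(3)
    means = np.array([[0, 0], [4, 0], [2, 4]])
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])          # shared elongated cluster shape
    X = np.vstack([rng.multivariate_normal(m, cov, 200) for m in means])
    truth = np.repeat([0, 1, 2], 200)

    models = {
        "GMM":     GaussianMixture(n_components=3, covariance_type="full", random_state=0),
        "k-means": KMeans(n_clusters=3, n_init=10, random_state=0),
        "Ward":    AgglomerativeClustering(n_clusters=3, linkage="ward"),
    }
    for name, model in models.items():
        labels = model.fit_predict(X)
        print(f"{name:8s} adjusted Rand index vs. truth: {adjusted_rand_score(truth, labels):.2f}")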

  2. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subject to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.
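
    A minimal sketch of the first, conventional PCA step on a voxel-by-time matrix, showing how a task-related component can emerge without using the stimulus timing; the simulated block design, noise level and proportion of active voxels are assumptions, and the CSF variant is not implemented.

    # Sketch: PCA of a voxel-by-time fMRI-like matrix; one component should track
    # the (unseen) on/off paradigm, and its voxel loadings give the activation map.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    n_voxels, n_scans = 2000, 120
    paradigm = np.tile(np.r_[np.zeros(10), np.ones(10)], 6)   # on/off block design
    active = rng.random(n_voxels) < 0.05                      # 5% "activated" voxels

    data = rng.normal(scale=1.0, size=(n_voxels, n_scans))
    data[active] += 2.0 * paradigm                            # add task-related signal

    pca = PCA(n_components=5)
    loadings = pca.fit_transform(data)                        # voxel loadings per component
    corr = [abs(np.corrcoef(c, paradigm)[0, 1]) for c in pca.components_]
    best = int(np.argmax(corr))
    print(f"component {best} correlates |r| = {corr[best]:.2f} with the paradigm")
    print("voxels with the largest loadings on that component form the activation map")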

  3. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.

  4. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE PAGES

    Shippert, Tim; Gaustad, Krista

    2016-12-16

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
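
    A minimal sketch of the core idea, merging two datastreams onto a chosen common grid through a one-dimensional transformation along the time axis (here plain linear interpolation); the grids and variables are made-up assumptions, and the ARM framework's full transformation logic is not reproduced.

    # Sketch: putting two datastreams with different sampling onto one time grid
    # by applying a one-dimensional transform along the time dimension.
    import numpy as np

    t_a = np.linspace(0.0, 60.0, 61)           # instrument A: 1-minute samples
    t_b = np.linspace(0.0, 60.0, 13)           # instrument B: 5-minute samples
    temp_a = 20.0 + 0.05 * t_a                 # fake temperature measurements
    rh_b = 50.0 + np.sin(t_b / 10.0)           # fake relative-humidity measurements

    t_common = np.arange(0.0, 60.1, 2.0)       # chosen common grid (2-minute)

    def to_common_grid(t_src, values, t_dst):
        """One-dimensional transform of a datastream onto a target coordinate grid."""
        return np.interp(t_dst, t_src, values)

    merged = np.column_stack([
        to_common_grid(t_a, temp_a, t_common),
        to_common_grid(t_b, rh_b, t_common),
    ])
    print(merged.shape)   # (31, 2): both variables now share one coordinate grid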

  5. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    PubMed

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not highlight the most influential variables for each separation, whereas the ICA Loadings highlighted the same variables as did CCA. This study shows the potential of CCA for the extraction of pertinent information from a data matrix, using a procedure based on an original optimisation criterion, to produce results that are complementary, and in some cases may be superior, to those of PCA and ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
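
    As a small aside on one comparison point above, the sketch below shows how raw, Pareto-scaled and autoscaled variables change what PCA picks up; the random matrix stands in for the volatolomics data, and CCA/ComDim and ICA are not implemented here.

    # Sketch: why variable scaling changes PCA results when one variable is on a
    # much larger scale than the others.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    X = rng.normal(size=(40, 30))           # 40 samples x 30 variables (assumed sizes)
    X[:, 0] *= 100.0                        # one variable dominates the raw variance

    def autoscale(M):                       # centre and scale each variable to unit variance
        return (M - M.mean(0)) / M.std(0)

    def pareto(M):                          # centre and divide by the square root of the SD
        return (M - M.mean(0)) / np.sqrt(M.std(0))

    for name, data in [("raw", X), ("Pareto", pareto(X)), ("autoscaled", autoscale(X))]:
        pca = PCA(n_components=2).fit(data)
        dominant = int(np.argmax(np.abs(pca.components_[0])))
        print(f"{name:10s}  PC1 variance share {pca.explained_variance_ratio_[0]:.2f}, "
              f"dominant variable {dominant}")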

  6. Prospective performance evaluation of selected common virtual screening tools. Case study: Cyclooxygenase (COX) 1 and 2.

    PubMed

    Kaserer, Teresa; Temml, Veronika; Kutil, Zsofia; Vanek, Tomas; Landa, Premysl; Schuster, Daniela

    2015-01-01

    Computational methods can be applied in drug development for the identification of novel lead candidates, but also for the prediction of pharmacokinetic properties and potential adverse effects, thereby helping to prioritize and identify the most promising compounds. In principle, several techniques are available for this purpose; however, which one is most suitable for a specific research objective still requires further investigation. Within this study, the performance of several programs, representing common virtual screening methods, was compared in a prospective manner. First, we selected top-ranked virtual screening hits from the three methods: pharmacophore modeling, shape-based modeling, and docking. For comparison, these hits were then additionally predicted by external pharmacophore- and 2D similarity-based bioactivity profiling tools. Subsequently, the biological activities of the selected hits were assessed in vitro, which allowed for evaluating and comparing the prospective performance of the applied tools. Although all methods performed well, considerable differences were observed concerning hit rates, true positive and true negative hits, and hitlist composition. Our results suggest that a rational selection of the applied method represents a powerful strategy to maximize the success of a research project, tightly linked to its aims. We employed cyclooxygenase as an application example; however, the focus of this study lay in highlighting the differences in the virtual screening tool performances and not in the identification of novel COX-inhibitors. Copyright © 2015 The Authors. Published by Elsevier Masson SAS. All rights reserved.

  7. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
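
    A minimal sketch of the efficiency-weighted Cq calculation, kept in the log scale and normalized to a reference gene; the Cq values, efficiencies and the simple unpaired comparison are illustrative assumptions rather than the authors' full blocking analysis.

    # Sketch: efficiency-weighted Cq values, log10(E) * Cq, used to compare a
    # target gene between two groups relative to a reference gene.
    import numpy as np
    from scipy import stats

    # Hypothetical per-well data: Cq and amplification efficiency E for a target
    # gene and one reference gene, in control and treated samples.
    cq_target = {"control": np.array([24.1, 24.3, 23.9]), "treated": np.array([22.0, 22.4, 21.8])}
    cq_ref    = {"control": np.array([18.0, 18.2, 17.9]), "treated": np.array([18.1, 18.0, 18.3])}
    E_target, E_ref = 1.95, 1.98                     # assumed reaction efficiencies

    def weighted(cq, E):
        return np.log10(E) * cq                      # efficiency-weighted Cq

    # log10 of the target quantity relative to the reference gene, per sample:
    # log10(Q_target / Q_ref) = w_ref - w_target
    rel = {g: weighted(cq_ref[g], E_ref) - weighted(cq_target[g], E_target)
           for g in ("control", "treated")}

    diff = rel["treated"].mean() - rel["control"].mean()       # stays in the log scale
    t, p = stats.ttest_ind(rel["treated"], rel["control"])
    print(f"log10 fold change {diff:.2f} (fold change {10**diff:.1f}), p = {p:.3f}")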

  8. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  9. A comparison of automated crater detection methods

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Barreira, C.; Pina, P.; Saraiva, J.

    2008-09-01

    This work presents early results of a comparison between some common methodologies for automated crater detection. The three procedures considered were applied to images of the surface of Mars, thus illustrating some pros and cons of their use. We aim to establish the clear advantages of using this type of method in the study of planetary surfaces.

  10. An analysis of methods for the selection of trees from wild stands

    Treesearch

    F. Thomas Ledig

    1976-01-01

    The commonly applied comparison-tree method of selection is analyzed as a form of within-family selection. If environmental variation among comparison- and select-tree groups, c², is a relatively small proportion (17 percent or less with 5 comparison trees) of the total variation, comparison-tree selection will result in less...

  11. Freeze-thaw method improves the detection of volatile compounds in insects using Headspace Solid-Phase Microextraction (HS-SPME)

    USDA-ARS?s Scientific Manuscript database

    Headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography–mass spectrometry (GC-MS) is commonly used in analyzing insect volatiles. In order to improve the detection of volatiles in insects, a freeze-thaw method was applied to insect samples before the HS-SPME-GC-MS analysis. ...

  12. IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF GASEOUS MERCURY SPECIES USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring Hg from coal combustion sources. EPA Method 29 is most commonly applied. The ASTM Ontario Hydro Draft Method for measuring oxidized, elemental, particulate-bound and total Hg is now und...

  13. A Model for Minimizing Numeric Function Generator Complexity and Delay

    DTIC Science & Technology

    2007-12-01

    allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods

  14. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and its accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
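
    A minimal sketch of the learning-curve element of the approach, fitting an inverse power law err(n) = a·n^(-b) + c to cross-validation errors and extrapolating to a larger sample size; the error values are made up, and the bias-correction step itself is not reproduced.

    # Sketch: fitting a learning curve with an inverse power law and extrapolating
    # the expected error to a larger sample size.
    import numpy as np
    from scipy.optimize import curve_fit

    n_obs = np.array([20, 30, 40, 50, 60])                 # training-set sizes (assumed)
    cv_err = np.array([0.34, 0.29, 0.26, 0.245, 0.235])    # cross-validation errors (assumed)

    def ipl(n, a, b, c):
        return a * n ** (-b) + c

    params, _ = curve_fit(ipl, n_obs, cv_err, p0=(1.0, 0.5, 0.2))
    a, b, c = params
    print(f"fitted a = {a:.2f}, b = {b:.2f}, asymptotic error c = {c:.3f}")
    print(f"predicted error with n = 120 samples: {ipl(120, *params):.3f}")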

  15. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    PubMed Central

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background Statistical analysis is essential for obtaining objective reliability in medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention of improving the statistical quality of the journal. Methods All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process to improve the value of the article. PMID:20552071

  16. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  17. [Application of case-based method in genetics and eugenics teaching].

    PubMed

    Li, Ya-Xuan; Zhao, Xin; Zhang, Fei-Xiong; Hu, Ying-Kao; Yan, Yue-Ming; Cai, Min-Hua; Li, Xiao-Hui

    2012-05-01

    Genetics and Eugenics is a cross-discipline between genetics and eugenics. It is a common curriculum in many Chinese universities. In order to increase learning interest, we introduced the case-based teaching method and achieved a better teaching effect. Based on our teaching practices, we summarized some experiences with this subject. In this article, the main problems of applying the case-based method in Genetics and Eugenics teaching are discussed.

  18. Adult Learning Principles and Presentation Pearls

    PubMed Central

    Palis, Ana G.; Quiros, Peter A.

    2014-01-01

    Although lectures are one of the most common methods of knowledge transfer in medicine, their effectiveness has been questioned. Passive formats, lack of relevance and disconnection from the student's needs are some of the arguments supporting this apparent lack of efficacy. However, many authors have suggested that applying adult learning principles (i.e., relevance, congruence with the student's needs, interactivity, connection to the student's previous knowledge and experience) to this method increases both learning from lectures and their effectiveness. This paper presents recommendations for applying adult learning principles during the planning, creation and development of lectures to make them more effective. PMID:24791101

  19. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  20. A Comprehensive Planning Model

    ERIC Educational Resources Information Center

    Temkin, Sanford

    1972-01-01

    Combines elements of the problem solving approach inherent in methods of applied economics and operations research and the structural-functional analysis common in social science modeling to develop an approach for economic planning and resource allocation for schools and other public sector organizations. (Author)

  1. 47 CFR 51.501 - Scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Pricing of Elements § 51.501 Scope. (a) The rules in this subpart apply to the pricing of network elements, interconnection, and methods of obtaining access to unbundled elements, including physical collocation and virtual...

  2. Convergence studies in meshfree peridynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleson, Pablo; Littlewood, David J.

    2016-04-15

    Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions, when using the proposed methods.
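
    A minimal sketch of the partial-volume idea in one dimension: neighbor cells only partly inside a node's neighborhood contribute only their overlapping fraction. The grid spacing and horizon are illustrative assumptions, not the authors' full algorithm.

    # Sketch: partial volumes of neighbor cells in a 1D meshfree peridynamic grid.
    import numpy as np

    dx = 1.0                        # cell size (assumed)
    horizon = 3.015 * dx            # peridynamic horizon (a common choice, assumed)
    nodes = np.arange(0.0, 20.0 * dx, dx) + 0.5 * dx      # cell-centred nodes

    def partial_volumes(i):
        """Overlap of each neighbour cell with the neighbourhood of node i."""
        xi = nodes[i]
        vols = {}
        for j, xj in enumerate(nodes):
            if j == i:
                continue
            lo, hi = xj - 0.5 * dx, xj + 0.5 * dx          # neighbour cell extent
            overlap = max(0.0, min(hi, xi + horizon) - max(lo, xi - horizon))
            if overlap > 0.0:
                vols[j] = overlap
        return vols

    vols = partial_volumes(10)
    print({j: round(v, 3) for j, v in vols.items()})
    # Cells fully inside the horizon contribute dx; the outermost neighbours
    # contribute only the fraction of their cell inside the neighbourhood.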

  3. Retrospective analysis of two hundred thirty-five pediatric mandibular fracture cases.

    PubMed

    Eskitascioglu, Teoman; Ozyazgan, Irfan; Coruh, Atilla; Gunay, Galip K; Yuksel, Esabil

    2009-11-01

    Maxillofacial fractures are encountered less commonly during the childhood period due to anatomic, social, cultural, and environmental factors. Although the incidence of all maxillofacial fractures is 1% to 15% among pediatric and adolescent patients, this rate drops to less than 1% in children below 5 years of age. Two hundred thirty-five cases (

  4. Fundamentals in Biostatistics for Investigation in Pediatric Dentistry: Part II -Biostatistical Methods.

    PubMed

    Pozos-Guillén, Amaury; Ruiz-Rodríguez, Socorro; Garrocho-Rangel, Arturo

    The main purpose of the second part of this series was to provide the reader with some basic aspects of the most common biostatistical methods employed in health sciences, in order to better understand the validity, significance and reliability of the results from any article on Pediatric Dentistry. Currently, as mentioned in the first paper, Pediatric Dentists need basic biostatistical knowledge to be able to apply it when critically appraising a dental article during the Evidence-based Dentistry (EBD) process, or when participating in the development of a clinical study with dental pediatric patients. The EBD process provides a systematic approach to collecting, reviewing and analyzing current and relevant published evidence about oral health care in order to answer a particular clinical question; this evidence should then be applied in everyday practice. This second report describes the most commonly used statistical methods for analyzing and interpreting collected data, and the methodological criteria to be considered when choosing the most appropriate tests for a specific study. These are available to Pediatric Dentistry practitioners interested in reading or designing original clinical or epidemiological studies.

  5. Merging visible-light photocatalysis and transition-metal catalysis in the copper-catalyzed trifluoromethylation of boronic acids with CF3I.

    PubMed

    Ye, Yingda; Sanford, Melanie S

    2012-06-06

    This communication describes the development of a mild method for the cross-coupling of arylboronic acids with CF(3)I via the merger of photoredox and Cu catalysis. This method has been applied to the trifluoromethylation of electronically diverse aromatic and heteroaromatic substrates and tolerates many common functional groups.

  6. Rapid method for controlling the correct labeling of products containing common octopus (Octopus vulgaris) and main substitute species (Eledone cirrhosa and Dosidicus gigas) by fast real-time PCR.

    PubMed

    Espiñeira, Montserrat; Vieites, Juan M

    2012-12-15

    The TaqMan real-time PCR has the highest potential for automation, therefore representing the currently most suitable method for screening, allowing the detection of fraudulent or unintentional mislabeling of species. This work describes the development of a real-time polymerase chain reaction (RT-PCR) system for the detection and identification of common octopus (Octopus vulgaris) and main substitute species (Eledone cirrhosa and Dosidicus gigas). This technique is notable for the combination of simplicity, speed, sensitivity and specificity in a homogeneous assay. The method can be applied to all kinds of products: fresh, frozen and processed, including those undergoing intensive processes of transformation. This methodology was validated to check how the degree of food processing affects the method and the detection of each species. Moreover, it was applied to 34 commercial samples to evaluate the labeling of products made from them. The methodology herein developed is useful to check the fulfillment of labeling regulations for seafood products and to verify traceability in commercial trade and for fisheries control. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  8. Thermal protection of β-carotene in re-assembled casein micelles during different processing technologies applied in food industry.

    PubMed

    Sáiz-Abajo, María-José; González-Ferrero, Carolina; Moreno-Ruiz, Ana; Romo-Hualde, Ana; González-Navarro, Carlos J

    2013-06-01

    β-Carotene is a carotenoid usually applied in the food industry as a precursor of vitamin A or as a colourant. β-Carotene is a labile compound easily degraded by light, heat and oxygen. Casein micelles were used as nanostructures to encapsulate, stabilise and protect β-carotene from degradation during processing in the food industry. The self-assembly method was applied to re-assemble nanomicelles containing β-carotene. The protective effect of the nanostructures against degradation during the most common industrial treatments (sterilisation, pasteurisation, high hydrostatic pressure and baking) was proven. Casein micelles protected β-carotene from degradation during heat stabilisation, high pressure processing and the processes most commonly used in the food industry, including baking. This opens new possibilities for introducing thermolabile ingredients in bakery products. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Vehicle routing problem and capacitated vehicle routing problem frameworks in fund allocation problem

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2016-11-01

    Two new methods adopted from those commonly used in the field of transportation and logistics are proposed to solve a specific issue in the investment allocation problem. Vehicle routing problem (VRP) and capacitated vehicle routing problem (CVRP) methods are applied to optimize the fund allocation of a portfolio of investment assets. This is done by determining the sequence of the assets. As a result, total investment risk is minimized by this sequence.

  10. Application of an innovative design space optimization strategy to the development of liquid chromatographic methods to combat potentially counterfeit nonsteroidal anti-inflammatory drugs.

    PubMed

    Mbinze, J K; Lebrun, P; Debrus, B; Dispas, A; Kalenda, N; Mavar Tayey Mbay, J; Schofield, T; Boulanger, B; Rozet, E; Hubert, Ph; Marini, R D

    2012-11-09

    In the context of the battle against counterfeit medicines, an innovative methodology has been used to develop rapid and specific high performance liquid chromatographic methods to detect and determine 18 non-steroidal anti-inflammatory drugs, 5 pharmaceutical preservatives, paracetamol, chlorzoxazone, caffeine and salicylic acid. These molecules are commonly encountered alone or in combination on the market. Regrettably, a significant proportion of these consumed medicines are counterfeit or substandard, with a strong negative impact in countries of Central Africa. In this context, an innovative design space optimization strategy was successfully applied to the development of LC screening methods allowing the detection of substandard or counterfeit medicines. Using the results of a unique experimental design, the design spaces of 5 potentially relevant HPLC methods have been developed, and transferred to an ultra high performance liquid chromatographic system to evaluate the robustness of the predicted DS while providing rapid methods of analysis. Moreover, one of the methods has been fully validated using the accuracy profile as a decision tool, and was then used for the quantitative determination of three active ingredients and one impurity in a common and widely used pharmaceutical formulation. The method was applied to 5 pharmaceuticals sold in the Democratic Republic of Congo. None of these pharmaceuticals was found compliant with the European Medicines Agency specifications. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Solid State Kinetic Parameters and Chemical Mechanism of the Dehydration of CoCl2.6H2O.

    ERIC Educational Resources Information Center

    Ribas, Joan; And Others

    1988-01-01

    Presents an experimental example illustrating the most common methods for the determination of kinetic parameters. Discusses the different theories and equations to be applied and the mechanism derived from the kinetic results. (CW)

  12. 48 CFR 2913.301 - Governmentwide commercial purchase card.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... other methods of purchasing. However, the same legal restrictions apply to credit card purchases that.../Agency Purchase/Credit Card Program procedures. A number of the more common restrictions which... purchase card. 2913.301 Section 2913.301 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  13. 48 CFR 2913.301 - Governmentwide commercial purchase card.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... other methods of purchasing. However, the same legal restrictions apply to credit card purchases that.../Agency Purchase/Credit Card Program procedures. A number of the more common restrictions which... purchase card. 2913.301 Section 2913.301 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  14. 48 CFR 2913.301 - Governmentwide commercial purchase card.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... other methods of purchasing. However, the same legal restrictions apply to credit card purchases that.../Agency Purchase/Credit Card Program procedures. A number of the more common restrictions which... purchase card. 2913.301 Section 2913.301 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  15. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  16. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  17. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  18. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  19. Microwave processing heats up

    USDA-ARS?s Scientific Manuscript database

    Microwaves are a common appliance in many households. In the United States microwave heating is the third most popular domestic heating method for foods. Microwave heating is also a commercial food processing technology that has been applied for cooking, drying, and tempering foods. Its use in ...

  20. Use of Pseudophase TLC in Teaching Laboratories.

    ERIC Educational Resources Information Center

    Armstrong, Daniel W.; And Others

    1984-01-01

    Suggests that pseudophase liquid chromatography, which uses aqueous surfactant solutions instead of organic solvents for the mobile phase, can be substituted for thin-layer chromatography in the introductory organic course. Outlines the method as it applies to common separations in the laboratory. (JN)

  1. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  2. SSVEP recognition using common feature analysis in brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) application. Although the CCA method outperforms the traditional power spectral density analysis through multi-channel detection, it requires additionally pre-constructed reference signals of sine-cosine waves. It is likely to encounter overfitting in using a short time window since the reference signals include no features from training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency on a same subject should share some common features that may bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method to exploit the latent common features as natural reference signals in using correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and the MCCA methods for SSVEP recognition in using a short time window (i.e., less than 1s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
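
    For context, a minimal sketch of the standard CCA approach that CFA is compared against: canonical correlations between multichannel EEG and sine-cosine references at each candidate frequency, with the largest correlation giving the recognized frequency. The synthetic EEG, sampling rate and stimulus frequencies are assumptions, and the CFA method itself is not shown.

    # Sketch: CCA-based SSVEP frequency recognition with sine-cosine reference signals.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    fs, duration = 250, 1.0                       # sampling rate (Hz), window length (s)
    t = np.arange(0, duration, 1.0 / fs)
    stim_freqs = [8.0, 10.0, 12.0, 15.0]          # candidate stimulus frequencies
    n_harmonics = 2

    rng = np.random.default_rng(6)
    true_f = 10.0                                  # simulate an 8-channel response at 10 Hz
    amplitudes = 0.5 + rng.random(8)
    eeg = np.outer(np.sin(2 * np.pi * true_f * t), amplitudes) \
          + rng.normal(scale=1.0, size=(len(t), 8))

    def reference(f):
        comps = []
        for h in range(1, n_harmonics + 1):
            comps += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
        return np.column_stack(comps)

    def max_canonical_corr(X, Y):
        u, v = CCA(n_components=1).fit_transform(X, Y)
        return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

    scores = {f: max_canonical_corr(eeg, reference(f)) for f in stim_freqs}
    print(scores, "-> recognized frequency:", max(scores, key=scores.get))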

  3. Network geometry inference using common neighbors

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Fragkiskos; Aldecoa, Rodrigo; Krioukov, Dmitri

    2015-08-01

    We introduce and explore a method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high degree nodes, the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring O(t⁴) running time to map a network of t nodes, versus O(t³) in the link-based method. But we also develop a hybrid method with O(t³) running time, which combines the common-neighbors and link-based approaches, and we explore a heuristic that reduces its running time further to O(t²), without significant reduction in the mapping accuracy. We apply this method to the autonomous systems (ASs) Internet, and we reveal how soft communities of ASs evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASs. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step toward understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
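
    A minimal sketch of the common-neighbors similarity used above, applied here to rank unconnected node pairs as candidate future links; the small scale-free graph is a stand-in assumption, and the geometric (HyperMap-style) embedding is not implemented.

    # Sketch: ranking unconnected node pairs by their number of common neighbours,
    # the similarity notion used for link prediction in the work above.
    import itertools
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=7)      # stand-in for an AS-level topology

    def common_neighbors(G, u, v):
        return len(set(G[u]) & set(G[v]))

    # Score all currently unconnected pairs; the highest-scoring pairs are the most
    # likely "future links" under the common-neighbours heuristic.
    non_edges = ((u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v))
    ranked = sorted(non_edges, key=lambda pair: common_neighbors(G, *pair), reverse=True)
    for u, v in ranked[:5]:
        print(u, v, common_neighbors(G, u, v))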

  4. Probabilistic fracture finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-01-01

    The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.

  5. Probabilistic fracture finite elements

    NASA Astrophysics Data System (ADS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-05-01

    The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.

  6. Population clustering based on copy number variations detected from next generation sequencing data.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping

    2014-08-01

    Copy number variations (CNVs) can be used as significant bio-markers and next generation sequencing (NGS) provides a high resolution detection of these CNVs. But how to extract features from CNVs and further apply them to genomic studies such as population clustering has become a big challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into the source matrix and weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs from each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. population clustering. To validate the approach, we applied it to the analysis of both simulation data and two real data sets from the 1000 Genomes Project. The results on simulation data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data analysis show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results on the second real data analysis show that the proposed method can be applied to the whole genome with a large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
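
    A minimal sketch of the NMF step: a non-negative CNV feature matrix is factorized into a source matrix of shared CNV patterns and a weight matrix, and samples are clustered by their dominant component; the simulated copy-number features and group structure are assumptions.

    # Sketch: non-negative matrix factorization of a CNV feature matrix followed by
    # cluster assignment from the weight matrix.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(8)
    n_per_group, n_features, k = 30, 200, 2

    # Two groups sharing group-specific common CNVs plus individual noise.
    base = rng.random((k, n_features))
    base[0, :50] += 2.0                      # CNVs enriched in group 0
    base[1, 50:100] += 2.0                   # CNVs enriched in group 1
    X = np.vstack([base[g] + 0.3 * rng.random(n_features)
                   for g in (0, 1) for _ in range(n_per_group)])

    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)               # weight matrix: sample-by-component
    H = model.components_                    # source matrix: component-by-feature (common CNVs)

    labels = W.argmax(axis=1)                # assign each sample to its dominant component
    print("cluster sizes:", np.bincount(labels))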

  7. A new method for locating changes in a tree reveals distinct nucleotide polymorphism vs. divergence patterns in mouse mitochondrial control region.

    PubMed

    Galtier, N; Boursot, P

    2000-03-01

    A new, model-based method was devised to locate nucleotide changes in a given phylogenetic tree. For each site, the posterior probability of any possible change in each branch of the tree is computed. This probabilistic method is a valuable alternative to the maximum parsimony method when base composition is skewed (i.e., different from 25% A, 25% C, 25% G, 25% T): computer simulations showed that parsimony misses more rare --> common than common --> rare changes, resulting in biased inferred change matrices, whereas the new method appeared unbiased. The probabilistic method was applied to the analysis of the mutation and substitution processes in the mitochondrial control region of mouse. Distinct change patterns were found at the polymorphism (within species) and divergence (between species) levels, rejecting the hypothesis of a neutral evolution of base composition in mitochondrial DNA.

  8. Archaeology Through Computational Linguistics: Inscription Statistics Predict Excavation Sites of Indus Valley Artifacts.

    PubMed

    Recchia, Gabriel L; Louwerse, Max M

    2016-11-01

    Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.

  9. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351

  10. Language Individuation and Marker Words: Shakespeare and His Maxwell's Demon.

    PubMed

    Marsden, John; Budden, David; Craig, Hugh; Moscato, Pablo

    2013-01-01

    Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era. A text corpus containing 55,055 unique words was generated from 168 plays from the Shakespearean era (16th and 17th centuries) of undisputed authorship. A new score, CM1, is introduced to measure variation patterns based on the frequency of occurrence of each word for the authors John Fletcher, Ben Jonson, Thomas Middleton and William Shakespeare, compared to the rest of the authors in the study (which provides a reference of relative word usage at that time). A total of 50 WEKA methods were applied for Fletcher, Jonson and Middleton, to identify those which were able to produce models yielding over 90% classification accuracy. This ensemble of WEKA methods was then applied to model Shakespearean authorship across all 168 plays, yielding a Matthews' correlation coefficient (MCC) performance of over 90%. Furthermore, the best model yielded an MCC of 99%. Our results suggest that different authors, while adhering to the structural and grammatical bounds of a common language, develop measurably distinct styles by the tendency to over-utilise or avoid particular common words and phrasings. Considering language and the potential of words as an abstract chaotic system with a high entropy, similarities can be drawn to the Maxwell's Demon thought experiment; authors subconsciously favour or filter certain words, modifying the probability profile in ways that could reflect their individuality and style.

  11. Language Individuation and Marker Words: Shakespeare and His Maxwell's Demon

    PubMed Central

    Marsden, John; Budden, David; Craig, Hugh; Moscato, Pablo

    2013-01-01

    Background Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era. Methodology A text corpus containing 55,055 unique words was generated from 168 plays from the Shakespearean era (16th and 17th centuries) of undisputed authorship. A new score, CM1, is introduced to measure variation patterns based on the frequency of occurrence of each word for the authors John Fletcher, Ben Jonson, Thomas Middleton and William Shakespeare, compared to the rest of the authors in the study (which provides a reference of relative word usage at that time). A total of 50 WEKA methods were applied for Fletcher, Jonson and Middleton, to identify those which were able to produce models yielding over 90% classification accuracy. This ensemble of WEKA methods was then applied to model Shakespearean authorship across all 168 plays, yielding a Matthews' correlation coefficient (MCC) performance of over 90%. Furthermore, the best model yielded an MCC of 99%. Conclusions Our results suggest that different authors, while adhering to the structural and grammatical bounds of a common language, develop measurably distinct styles by the tendency to over-utilise or avoid particular common words and phrasings. Considering language and the potential of words as an abstract chaotic system with a high entropy, similarities can be drawn to the Maxwell's Demon thought experiment; authors subconsciously favour or filter certain words, modifying the probability profile in ways that could reflect their individuality and style. PMID:23826143
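
    A toy illustration of the evaluation idea (common-word frequencies as features, the Matthews correlation coefficient as the performance measure) is sketched below; the texts, the two "authors", the function-word list and the logistic-regression classifier are all stand-ins, and the study's CM1 score and WEKA ensemble are not reproduced.

```python
# Classify short texts by author from common-word frequencies and
# report the Matthews correlation coefficient (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict

texts = [
    "thou art the most wanted man of all the shire",
    "what light through yonder window breaks upon us",
    "the king doth wake tonight and takes his rouse",
    "and yet the man would not be moved to speak",
    "my lord the messenger has come with heavy news",
    "so shall the play be ended ere the night",
]
authors = [0, 1, 0, 1, 0, 1]              # two hypothetical authors

# Frequencies of very common function words as stylometric features.
function_words = ["the", "and", "of", "to", "a", "in", "that", "my", "not", "with"]
X = CountVectorizer(vocabulary=function_words).fit_transform(texts)

pred = cross_val_predict(LogisticRegression(max_iter=1000), X, authors, cv=3)
print("MCC:", matthews_corrcoef(authors, pred))
```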

  12. Neurobehavioral Development of Common Marmoset Monkeys

    PubMed Central

    Schultz-Darken, Nancy; Braun, Katarina M.; Emborg, Marina E.

    2016-01-01

    Common marmoset (Callithrix jacchus) monkeys are a resource for biomedical research and their use is predicted to increase due to the suitability of this species for transgenic approaches. Identification of abnormal neurodevelopment due to genetic modification relies upon comparison with validated patterns of normal behavior defined by unbiased methods. As scientists unfamiliar with nonhuman primate development are interested in applying genomic editing techniques in marmosets, it would be beneficial to the field that investigators use validated methods of postnatal evaluation that are age and species appropriate. This review aims to analyze currently available data on marmoset physical and behavioral postnatal development, describe the methods used, and discuss next steps to better understand and evaluate marmoset normal and abnormal postnatal neurodevelopment. PMID:26502294

  13. Is the common cold a clinical entity or a cultural concept?

    PubMed

    Eccles, R

    2013-03-01

    Common cold is the most common infectious disease of mankind and the term is widely used in the clinical literature as though it were a defined clinical syndrome. Clinical studies on this syndrome often use elaborate symptom scoring systems to diagnose a common cold. The symptom scores are based on a study conducted over 50 years ago to retrospectively diagnose experimental colds, and this method cannot be applied to the diagnosis of common cold in the community. Diagnosis of the common cold by virology is not feasible because of the number of viruses and the variability in the disease states caused by the viruses. Because of the familiarity of subjects with common cold and the variability in symptomatology, it seems a more reasonable approach to use self-diagnosis of common cold for clinical research studies and accept that the common cold is a cultural concept and not a clinical entity.

  14. An expandable crosstalk reduction method for inline fiber Fabry-Pérot sensor array based on fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Ma, Lina; Hu, Zhengliang; Hu, Yongming

    2016-07-01

    The inline time division multiplexing (TDM) fiber Fabry-Pérot (FFP) sensor array based on fiber Bragg gratings (FBGs) is attractive for many applications, but the intrinsic multi-reflection (MR) induced crosstalk limits applications, especially those needing high resolution. In this paper we propose an expandable method for MR-induced crosstalk reduction. The method is based on complexing-exponent synthesis using the phase-generated carrier (PGC) scheme and the special common character of the impulse responses. The method improves demodulation stability while reducing MR-induced crosstalk. A polarization-maintaining 3-TDM experimental system with an FBG reflectivity of about 5 % was set up to validate the method. The experimental results showed that crosstalk reductions of 13 dB and 15 dB were achieved for sensor 2 and sensor 3, respectively, when a signal was applied to the first sensor, and a crosstalk reduction of 8 dB was achieved for sensor 3 when a signal was applied to sensor 2. The demodulation stability of the applied signal was also improved: the standard deviations of the amplitude distributions of the demodulated signals were reduced from 0.0046 to 0.0021 for sensor 2 and from 0.0114 to 0.0044 for sensor 3. Because of the convenience of the linear operation of the complexing-exponent and the common character of the impulse response we found, the method can be effectively extended to arrays with more TDM channels once the impulse response of the inline FFP sensor array with more TDM channels is derived. It offers the potential to develop a low-crosstalk inline FFP sensor array using the PGC interrogation technique with relatively high reflectivity FBGs, which can guarantee enough light power received by the photo-detector.

  15. Effects of nitrogen source and rate and method of fertilizer application on yield and fruit size in 'Bluecrop' highbush blueberry

    USDA-ARS?s Scientific Manuscript database

    A study was done to determine the effects of N source and rate and two common methods of fertilizer application on yield and fruit size in a maturing field of highbush blueberry. Plants were fertilized by drip fertigation or with granular fertilizer using urea or ammonium sulfate applied at a rate o...

  16. Exploring Eye Movements of Experienced and Novice Readers of Medical Texts Concerning the Cardiovascular System in Making a Diagnosis

    ERIC Educational Resources Information Center

    Vilppu, Henna; Mikkilä-Erdmann, Mirjamaija; Södervik, Ilona; Österholm-Matikainen, Erika

    2017-01-01

    This study used the eye-tracking method to explore how the level of expertise influences reading, and solving, two written patient cases on cardiac failure and pulmonary embolus. Eye-tracking is a fairly commonly used method in medical education research, but it has been primarily applied to studies analyzing the processing of visualizations, such…

  17. Concepts and methods in neuromodulation and functional electrical stimulation: an introduction.

    PubMed

    Holsheimer, J

    1998-04-01

    This article introduces two clinical fields in which stimulation is applied to the nervous system: neuromodulation and functional electrical stimulation. The concepts underlying these fields and their main clinical applications, as well as the methods and techniques used in each field, are described. Concepts and techniques common in one field that might be beneficial to the other are discussed. 1998 Blackwell Science, Inc.

  18. Oil seal effects and subsynchronous vibrations in high-speed compressors

    NASA Technical Reports Server (NTRS)

    Allaire, P. E.; Kocur, J. A., Jr.

    1985-01-01

    Oil seals are commonly used in high speed multistage compressors. If the oil seal ring becomes locked up against the fixed portion of the seal, high oil film cross-coupled stiffnesses can result. A method of analysis for determining whether the oil seals are locked up is discussed. The method is then applied to an oil seal in a compressor with subsynchronous vibration problems.

  19. Non-destructive and destructive investigation of aged-in-the field carbon FRP-wrapped columns.

    DOT National Transportation Integrated Search

    2011-06-01

    The common practice of applying deicing salts on highway bridges increases the potential of reinforcing steel in these structures to experience extensive corrosion in the decks as well as the substructure. A new rehabilitation method which is believe...

  20. Generating enhanced site topography data to improve permeable pavement performance assessment methods - presentation

    EPA Science Inventory

    Permeable pavement surfaces are infiltration based stormwater control measures (SCM) commonly applied in parking lots to decrease impervious area and reduce runoff volume. Many are not optimally designed however, as little attention is given to draining a large enough contributin...

  1. Systemic and topical drugs for aging skin.

    PubMed

    Kockaert, Michael; Neumann, Martino

    2003-08-01

    The rejuvenation of aging skin is a common desire for our patients, and several options are available. Although there are some systemic methods, the most commonly used treatments for rejuvenation of the skin are applied topically. The most frequently used topical drugs include retinoids, alpha hydroxy acids (AHAs), vitamin C, beta hydroxy acids, anti-oxidants, and tocopherol. Combination therapy is frequently used; particularly common is the combination of retinoids and AHAs. Systemic therapies available include oral retinoids and vitamin C. Other available therapies such as chemical peels, face-lifts, collagen, and botulinum toxin injections are not discussed in this article.

  2. Determination of the authenticity of plastron-derived functional foods based on amino acid profiles analysed by MEKC.

    PubMed

    Li, Lin-Qiu; Baibado, Joewel T; Shen, Qing; Cheung, Hon-Yeung

    2017-12-01

    Plastron is a nutritive and superior functional food. Due to its limited supply yet enormous demand, some functional foods supposed to contain plastron may be forged with other substitutes. This paper reports a novel and simple method for determining the authenticity of plastron-derived functional foods based on comparison of the amino acid (AA) profiles of plastron and its possible substitutes. By applying micellar electrokinetic chromatography (MEKC), 18 common AAs along with another 2 special AAs - hydroxyproline (Hyp) and hydroxylysine (Hyl) - were detected in all plastron samples. Since chicken, egg, fish, milk, pork, nail and hair lack Hyp and Hyl, plastron could be easily distinguished. For substitutes containing collagen, a statistical analysis technique - principal component analysis (PCA) - was adopted and plastron was successfully distinguished. When the proposed method was applied to authenticate turtle shell glue on the market, fake products were commonly found. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Design, implementation and application of distributed order PI control.

    PubMed

    Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan

    2013-05-01

    In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and the frequency criteria, which can be commonly used in many fields of fractional order system, control and signal processing. Besides, an auto-tuning strategy and the genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguished features of the discussed methods in fairways. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Novel Materials through Non-Hydrolytic Sol-Gel Processing: Negative Thermal Expansion Oxides and Beyond

    PubMed Central

    Lind, Cora; Gates, Stacy D.; Pedoussaut, Nathalie M.; Baiz, Tamam I.

    2010-01-01

    Low temperature methods have been applied to the synthesis of many advanced materials. Non-hydrolytic sol-gel (NHSG) processes offer an elegant route to stable and metastable phases at low temperatures. Excellent atomic level homogeneity gives access to polymorphs that are difficult or impossible to obtain by other methods. The NHSG approach is most commonly applied to the preparation of metal oxides, but can be easily extended to metal sulfides. Exploration of experimental variables allows control over product stoichiometry and crystal structure. This paper reviews the application of NHSG chemistry to the synthesis of negative thermal expansion oxides and selected metal sulfides.

  5. Multivariate analysis of longitudinal rates of change.

    PubMed

    Bryan, Matthew; Heagerty, Patrick J

    2016-12-10

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.

    1992-07-01

    Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as or more sensitive to the method used as to which person applies it. Therefore, if who chooses is important, then so too is how a choice is made.
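
    As a concrete illustration of the additive value function idea favored by the participants (with hypothetical plans, criteria, and weights), the sketch below rescales each criterion to a 0-1 value and combines the values with weights obtained, for example, by direct rating.

```python
# Additive value function scoring of alternatives (hypothetical data).
import numpy as np

criteria = ["cost", "reliability", "environmental impact"]
weights = np.array([0.5, 0.3, 0.2])             # e.g. from direct rating

# Raw performance of three water-supply plans (rows) on the criteria (cols).
raw = np.array([
    [120.0, 0.95, 3.0],
    [ 90.0, 0.85, 5.0],
    [150.0, 0.99, 2.0],
])
# Lower is better for cost and environmental impact, higher for reliability.
better_high = np.array([False, True, False])

# Linear single-criterion value functions scaled to [0, 1].
lo, hi = raw.min(axis=0), raw.max(axis=0)
scaled = (raw - lo) / (hi - lo)
values = np.where(better_high, scaled, 1.0 - scaled)

total = values @ weights
for i, v in enumerate(total, start=1):
    print(f"plan {i}: additive value {v:.3f}")
print("preferred plan:", int(np.argmax(total)) + 1)
```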

  7. Method to Predict Tempering of Steels Under Non-isothermal Conditions

    NASA Astrophysics Data System (ADS)

    Poirier, D. R.; Kohli, A.

    2017-05-01

    A common way of representing the tempering responses of steels is with a "tempering parameter" that includes the effect of temperature and time on hardness after hardening. Such functions, usually in graphical form, are available for many steels and have been applied for isothermal tempering. In this article, we demonstrate that the method can be extended to non-isothermal conditions. Controlled heating experiments were done on three grades in order to verify the method.
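
    A hedged sketch of how such an extension can be carried out numerically is shown below. It assumes the classical Hollomon-Jaffe form of the tempering parameter, P = T(C + log10 t) with T in kelvin and t in hours, and marches through a non-isothermal schedule while carrying an equivalent time at a reference temperature; the constant C, the schedule, and the stepwise-equivalence rule are illustrative assumptions rather than the article's own procedure.

```python
# Stepwise equivalent-time accumulation for a non-isothermal tempering
# cycle, assuming the Hollomon-Jaffe parameter P = T * (C + log10 t).
import math

C = 20.0                      # typical Hollomon-Jaffe constant for steels
T_ref = 823.15                # reference tempering temperature, K (550 C)

# (temperature K, duration h) steps approximating a ramp-and-hold cycle.
schedule = [(723.15, 0.25), (773.15, 0.25), (823.15, 1.0), (773.15, 0.25)]

t_eq = 1e-9                   # accumulated equivalent time at T_ref, h
for T, dt in schedule:
    # Time already "spent", re-expressed as if it had occurred at T.
    t_at_T = 10.0 ** (T_ref * (C + math.log10(t_eq)) / T - C)
    t_at_T += dt
    # Convert back to an equivalent time at the reference temperature.
    t_eq = 10.0 ** (T * (C + math.log10(t_at_T)) / T_ref - C)

P = T_ref * (C + math.log10(t_eq))
print(f"equivalent time at {T_ref - 273.15:.0f} C: {t_eq:.2f} h, "
      f"tempering parameter P = {P:.0f}")
```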

  8. Appraisal of an Array TEM Method in Detecting a Mined-Out Area Beneath a Conductive Layer

    NASA Astrophysics Data System (ADS)

    Li, Hai; Xue, Guo-qiang; Zhou, Nan-nan; Chen, Wei-ying

    2015-10-01

    The transient electromagnetic method has been extensively used for the detection of mined-out areas in China for the past few years. In cases where the mined-out area is overlain by a conductive layer, detection of the target layer is difficult with a traditional loop source TEM method. In order to detect the target layer in this condition, this paper presents a newly developed array TEM method, which uses a grounded wire source. The underground current density distribution and the responses of the grounded wire source TEM configuration are modeled to demonstrate that the target layer is detectable in this condition. The 1D OCCAM inversion routine is applied to the synthetic single station data and common middle point gather. The result reveals that the electric source TEM method is capable of recovering the resistive target layer beneath the conductive overburden. By contrast, the conductive target layer cannot be recovered unless the distance between the target layer and the conductive overburden is large. Compared with the inversion result of the single station data, the inversion of the common middle point gather can better recover the resistivity of the target layer. Finally, a case study illustrates that the array TEM method is successfully applied to recover a water-filled mined-out area beneath a conductive overburden.

  9. Box-Cox transformation for QTL mapping.

    PubMed

    Yang, Runqing; Yi, Nengjun; Xu, Shizhong

    2006-01-01

    The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
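
    A minimal sketch of the transformation step (not the QTL mapping itself) is shown below: the transformation factor lambda is estimated by maximum likelihood from a simulated right-skewed phenotype, using scipy's Box-Cox routine.

```python
# Estimate the Box-Cox transformation factor by maximum likelihood and
# transform a simulated skewed phenotype before downstream analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
phenotype = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # right-skewed trait

# scipy estimates lambda by maximizing the Box-Cox log-likelihood.
transformed, lam = stats.boxcox(phenotype)

print(f"estimated transformation factor lambda = {lam:.3f}")
print(f"skewness before: {stats.skew(phenotype):.2f}, "
      f"after: {stats.skew(transformed):.2f}")
```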

  10. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    PubMed

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different results on the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, leading to a change of the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted instead of a logistic regression model showed slightly better performance for balance diagnostics taking into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
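
    The sketch below illustrates one widely used balance diagnostic, the absolute standardized mean difference, computed before and after 1:1 nearest-neighbour matching on a propensity score; the simulated covariates, the treatment model, and the matching-with-replacement shortcut are assumptions for demonstration and are unrelated to the claims data analysed in the study.

```python
# Standardized mean differences before/after propensity score matching
# on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(65, 10, n)
hba1c = rng.normal(7.5, 1.2, n)
X = np.column_stack([age, hba1c])
# Treatment assignment depends on covariates -> baseline imbalance.
p_treat = 1 / (1 + np.exp(-(0.04 * (age - 65) + 0.5 * (hba1c - 7.5))))
treated = rng.random(n) < p_treat

def smd(x, t):
    """Absolute standardized mean difference of covariate x between groups."""
    m1, m0 = x[t].mean(), x[~t].mean()
    s = np.sqrt((x[t].var(ddof=1) + x[~t].var(ddof=1)) / 2)
    return abs(m1 - m0) / s

# Propensity scores and 1:1 nearest-neighbour matching (with replacement).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = np.flatnonzero(~treated)[idx.ravel()]

for name, x in [("age", age), ("HbA1c", hba1c)]:
    before = smd(x, treated)
    after = smd(np.concatenate([x[treated], x[matched_controls]]),
                np.concatenate([np.ones(treated.sum(), bool),
                                np.zeros(treated.sum(), bool)]))
    print(f"{name}: SMD before {before:.3f}, after matching {after:.3f}")
```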

  11. Coincident Detection Significance in Multimessenger Astronomy

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Burns, E.; Dal Canton, T.; Dent, T.; Eggenstein, H.-B.; Nielsen, A. B.; Prix, R.; Was, M.; Zhu, S. J.

    2018-06-01

    We derive a Bayesian criterion for assessing whether signals observed in two separate data sets originate from a common source. The Bayes factor for a common versus unrelated origin of signals includes an overlap integral of the posterior distributions over the common-source parameters. Focusing on multimessenger gravitational-wave astronomy, we apply the method to the spatial and temporal association of independent gravitational-wave and electromagnetic (or neutrino) observations. As an example, we consider the coincidence between the recently discovered gravitational-wave signal GW170817 from a binary neutron star merger and the gamma-ray burst GRB 170817A: we find that the common-source model is enormously favored over a model describing them as unrelated signals.
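
    The sketch below evaluates a one-dimensional toy version of the posterior overlap integral that enters such a Bayes factor: two independent posteriors over a shared source parameter are multiplied, divided by the prior, and integrated numerically. The Gaussian posteriors and the flat prior are illustrative assumptions, not the GW170817/GRB 170817A analysis.

```python
# Numerical posterior overlap integral for a shared 1-D source parameter.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

theta = np.linspace(-10.0, 10.0, 4001)          # common-source parameter grid
prior = np.full_like(theta, 1.0 / 20.0)         # flat prior on [-10, 10]

post_gw = stats.norm.pdf(theta, loc=0.3, scale=1.0)    # e.g. GW localization
post_em = stats.norm.pdf(theta, loc=0.1, scale=0.5)    # e.g. EM localization

# Overlap integral I = integral of p_GW(theta) * p_EM(theta) / prior(theta).
overlap = trapezoid(post_gw * post_em / prior, theta)
print(f"posterior overlap integral: {overlap:.2f} "
      "(>1 favours a common origin, <1 disfavours it)")
```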

  12. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.

  13. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  14. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    PubMed Central

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
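
    A much-simplified sketch of the underlying measurement is given below: coherence in the low-frequency band (<5 Hz) between two composite signals, each the sum of a subset of simulated motor unit spike trains that share a slow common drive. The simulation parameters are arbitrary, and the paper's permutation and model-fitting procedure is not reproduced.

```python
# Low-frequency coherence between two composite spike trains that share
# a common slow drive (simulated data).
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000                                      # Hz
t = np.arange(0, 60, 1 / fs)
common_drive = np.cumsum(rng.normal(0, 1, t.size))    # slow common input
common_drive = (common_drive - common_drive.mean()) / common_drive.std()

def spike_train(gain_common=0.4, base_rate=10.0):
    """Inhomogeneous spiking driven by common plus independent input."""
    rate = np.clip(base_rate * (1 + gain_common * common_drive
                                + 0.6 * rng.normal(0, 1, t.size)), 0, None)
    return (rng.random(t.size) < rate / fs).astype(float)

group_a = sum(spike_train() for _ in range(8))        # composite spike trains
group_b = sum(spike_train() for _ in range(8))

f, coh = coherence(group_a, group_b, fs=fs, nperseg=4 * fs)
low = f < 5.0
print(f"mean coherence below 5 Hz: {coh[low].mean():.2f}")
```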

  15. Comparison of methods for determining volatile compounds in cheese, milk, and whey powder

    USDA-ARS?s Scientific Manuscript database

    Solid phase microextraction (SPME) and gas chromatography-mass spectrometry (GC-MS) are commonly used for qualitative and quantitative analysis of volatile compounds in various dairy products, but selecting the proper procedures presents challenges. Heat is applied to drive volatiles from the samp...

  16. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  17. Apply Pesticides Correctly, A Guide for Commercial Applicators: Agricultural Pest Control -- Animal.

    ERIC Educational Resources Information Center

    Wamsley, Mary Ann, Ed.; Vermeire, Donna M., Ed.

    This guide contains basic information to meet specific standards for pesticide applicators. The text is concerned with the common pests of agricultural animals such as flies, ticks, bots, lice and mites. Methods for controlling these pests and appropriate pesticides are discussed. (CS)

  18. Differentiation of tea varieties using UV-Vis spectra and pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Palacios-Morillo, Ana; Alcázar, Ángela.; de Pablos, Fernando; Jurado, José Marcos

    2013-02-01

    Tea, one of the most consumed beverages all over the world, is of great importance in the economies of a number of countries. Several methods have been developed to classify tea varieties or origins based on pattern recognition techniques applied to chemical data, such as metal profile, amino acids, catechins and volatile compounds. Some of these analytical methods are tedious and expensive to apply in routine work. The use of UV-Vis spectral data as discriminant variables, highly influenced by the chemical composition, can be an alternative to these methods. UV-Vis spectra of methanol-water extracts of tea have been obtained in the interval 250-800 nm. Absorbances have been used as input variables. Principal component analysis was used to reduce the number of variables, and several pattern recognition methods, such as linear discriminant analysis, support vector machines and artificial neural networks, have been applied in order to differentiate the most common tea varieties. A successful classification model was built by combining principal component analysis and multilayer perceptron artificial neural networks, allowing the differentiation between tea varieties. This rapid and simple methodology can be applied to solve classification problems in the food industry, saving economic resources.
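
    A compact sketch of the winning combination (PCA for dimension reduction followed by a multilayer perceptron classifier) is shown below, using synthetic absorbance curves as stand-ins for real UV-Vis spectra of tea extracts.

```python
# PCA + multilayer perceptron classification of synthetic spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
wavelengths = np.linspace(250, 800, 551)

def fake_spectrum(center):
    """Toy absorbance curve: one broad band plus measurement noise."""
    return np.exp(-((wavelengths - center) / 80.0) ** 2) \
        + rng.normal(0, 0.02, wavelengths.size)

labels = rng.integers(0, 3, size=90)                    # three "varieties"
band_centers = np.array([420.0, 450.0, 480.0])[labels]  # variety-dependent band
X = np.array([fake_spectrum(c) for c in band_centers])

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```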

  19. Evaluating the efficiency of spectral resolution of univariate methods manipulating ratio spectra and comparing to multivariate methods: An application to ternary mixture in common cold preparation

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia

    2015-02-01

    Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method (DD-DR); method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding both accuracy and precision.

  20. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
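
    A short sketch of the core restoration step is given below: the noise level is estimated from the image and used to set the weight of a total-variation restoration (Chambolle's algorithm from scikit-image, as a non-adaptive stand-in for the authors' adaptive scheme), applied to a synthetic speckled image rather than real OCT data.

```python
# Total variation restoration with a noise-driven weight on a synthetic
# speckled image.
import numpy as np
from skimage.restoration import denoise_tv_chambolle, estimate_sigma

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 30:100] = 1.0                         # a bright "tissue layer"
# Multiplicative, speckle-like noise plus a small additive component.
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape) \
        + 0.05 * rng.normal(size=clean.shape)

# Estimate the noise level and let it drive the regularization weight.
sigma = estimate_sigma(noisy)
denoised = denoise_tv_chambolle(noisy, weight=1.5 * sigma)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(f"estimated noise sigma: {sigma:.3f}")
print(f"MSE vs. clean image - before: {mse_before:.4f}, after: {mse_after:.4f}")
```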

  1. Topical dissolved oxygen penetrates skin: model and method.

    PubMed

    Roe, David F; Gibbins, Bruce L; Ladizinsky, Daniel A

    2010-03-01

    It has been commonly perceived that skin receives its oxygen supply from the internal circulation. However, recent investigations have shown that a significant amount of oxygen may enter skin from the external overlying surface. A method has been developed for measuring the transcutaneous penetration of human skin by oxygen as described herein. This method was used to determine both the depth and magnitude of penetration of skin by topically applied oxygen. The apparatus consisted of human skin samples interposed between a topical oxygen source and a fluid-filled chamber that registered changes in dissolved oxygen. Viable human skin samples of variable thicknesses with and without epidermis were used to evaluate the depth and magnitude of oxygen penetration from either topical dissolved oxygen (TDO) or topical gaseous oxygen (TGO) devices. This model effectively demonstrates transcutaneous penetration of topically applied oxygen. Topically applied dissolved oxygen penetrates through >700 microm of human skin. Topically applied oxygen penetrates better through dermis than epidermis, and TDO devices deliver oxygen more effectively than TGO devices. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  2. A Survey of Commonly Applied Methods for Software Process Improvement

    DTIC Science & Technology

    1994-02-01

    ...conducted a controlled experimental study of the effectiveness of the method. They compared 10 cleanroom teams with 5 non-cleanroom teams working for six... Robert D. Austin, Doctoral Candidate, Carnegie Mellon University; Daniel J. Paulish, Resident Affiliate, Siemens Corporate Research, Inc. ...the U.S. Department of Defense. Copyright © 1994 by Carnegie Mellon University. Copies of the document are available from Research Access, Inc., 800...

  3. The application of systems thinking concepts, methods, and tools to global health practices: An analysis of case studies.

    PubMed

    Wilkinson, Jessica; Goff, Morgan; Rusoja, Evan; Hanson, Carl; Swanson, Robert Chad

    2018-06-01

    This review of systems thinking (ST) case studies seeks to compile and analyse cases from ST literature and provide practitioners with a reference for ST in health practice. Particular attention was given to (1) reviewing the frequency and use of key ST terms, methods, and tools in the context of health, and (2) extracting and analysing longitudinal themes across cases. A systematic search of databases was conducted, and a total of 36 case studies were identified. A combination of integrative and inductive qualitative approaches to analysis was used. Most cases identified took place in high-income countries and applied ST retrospectively. The most commonly used ST terms were agent/stakeholder/actor (n = 29), interdependent/interconnected (n = 28), emergence (n = 26), and adaptability/adaptation (n = 26). Common ST methods and tools were largely underutilized. Social network analysis was the most commonly used method (n = 4), and innovation or change management history was the most frequently used tool (n = 11). Four overarching themes were identified; the importance of the interdependent and interconnected nature of a health system, characteristics of leaders in a complex adaptive system, the benefits of using ST, and barriers to implementing ST. This review revealed that while much has been written about the potential benefits of applying ST to health, it has yet to completely transition from theory to practice. There is however evidence of the practical use of an ST lens as well as specific methods and tools. With clear examples of ST applications, the global health community will be better equipped to understand and address key health challenges. © 2017 John Wiley & Sons, Ltd.

  4. Evaluation of the impacts of climate change on disease vectors through ecological niche modelling.

    PubMed

    Carvalho, B M; Rangel, E F; Vale, M M

    2017-08-01

    Vector-borne diseases are exceptionally sensitive to climate change. Predicting vector occurrence in specific regions is a challenge that disease control programs must meet in order to plan and execute control interventions and climate change adaptation measures. Recently, an increasing number of scientific articles have applied ecological niche modelling (ENM) to study medically important insects and ticks. With a myriad of available methods, it is challenging to interpret their results. Here we review the future projections of disease vectors produced by ENM, and assess their trends and limitations. Tropical regions are currently occupied by many vector species; but future projections indicate poleward expansions of suitable climates for their occurrence and, therefore, entomological surveillance must be continuously done in areas projected to become suitable. The most commonly applied methods were the maximum entropy algorithm, generalized linear models, the genetic algorithm for rule set prediction, and discriminant analysis. Lack of consideration of the full known current distribution of the target species in models with future projections has led to questionable predictions. We conclude that there is no ideal 'gold standard' method to model vector distributions; researchers are encouraged to test different methods for the same data. Such practice is becoming common in the field of ENM, but still lags behind in studies of disease vectors.

  5. Boosting Sensitivity in Liquid Chromatography–Fourier Transform Ion Cyclotron Resonance–Tandem Mass Spectrometry for Product Ion Analysis of Monoterpene Indole Alkaloids

    PubMed Central

    Nakabayashi, Ryo; Tsugawa, Hiroshi; Kitajima, Mariko; Takayama, Hiromitsu; Saito, Kazuki

    2015-01-01

    In metabolomics, the analysis of product ions in tandem mass spectrometry (MS/MS) is valuable for chemically assigning structural information. However, the development of relevant analytical methods is less advanced. Here, we developed a method to boost sensitivity in liquid chromatography–Fourier transform ion cyclotron resonance–tandem mass spectrometry analysis (MS/MS boost analysis). To verify the MS/MS boost analysis, both quercetin and uniformly labeled 13C quercetin were analyzed, revealing that the product ions originate from the analyzed compounds rather than the instrument, resulting in sensitive product ions. Next, we applied this method to the analysis of monoterpene indole alkaloids (MIAs). The comparative analyses of MIAs having an indole basic skeleton (ajmalicine, catharanthine, hirsuteine, and hirsutine) and an oxindole skeleton (formosanine, isoformosanine, pteropodine, isopteropodine, rhynchophylline, isorhynchophylline, and mitraphylline) identified 86 and 73 common monoisotopic ions, respectively. The comparative analyses of the three pairs of stereoisomers showed more than 170 common monoisotopic ions in each pair. This method was also applied to the targeted analysis of MIAs in Catharanthus roseus and Uncaria rhynchophylla to profile indole and oxindole compounds using the product ions. This analysis is suitable for chemically assigning features of the metabolite groups, which contributes to targeted metabolome analysis. PMID:26734034

  6. Pathway analysis with next-generation sequencing data.

    PubMed

    Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao

    2015-04-01

    Although pathway analysis methods have been developed and successfully applied to association studies of common variants, the statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their lack of ability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic that is based on the smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. By intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with either rare or common or both rare and common variants has the correct type 1 error rates. Also the power of the SFPCA-based statistic and 22 additional existing statistics are evaluated. We found that the SFPCA-based statistic has a much higher power than other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic has much smaller P-values to identify pathway association than other existing methods.

  7. Update 2016: Considerations for Using Agile in DoD Acquisition

    DTIC Science & Technology

    2016-12-01

    What Is Agile? 2.1 Agile Manifesto and Principles—A Brief History; 2.2 A Practical Definition; 2.3 Example Agile Method; 2.4 Example Agile... 5.8 Team Composition; 5.9 Culture; 6 Conclusion; Appendix A: Examples of Agile Methods; Appendix B: Common Objections to Agile... thank all those who have contributed to our knowledge of applying “other than traditional” methods for software system acquisition and management over

  8. Readout electronics for the GEM detector

    NASA Astrophysics Data System (ADS)

    Kasprowicz, G.; Czarski, T.; Chernyshova, M.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Jakubowska, K.; Karpinski, L.; Kierzkowski, K.; Kudla, I. M.; Pozniak, K.; Rzadkiewicz, J.; Salapa, Z.; Scholz, M.; Zabolotny, W.

    2011-10-01

    A novel approach to the Gas Electron Multiplier (GEM) detector readout is presented. Unlike commonly used methods based on discriminators[2],[3] and analogue FIFOs[1], the method developed uses simultaneously sampling high speed ADCs and advanced FPGA-based processing logic to estimate the energy of every single photon. This method is applied to every GEM strip signal. It is especially useful in the case of crystal-based spectrometers for soft X-rays, where higher order reflections need to be identified and rejected[5].

  9. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
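
    As a minimal illustration of the calibration machinery such a framework wraps (not the framework itself), the sketch below treats the hydrologic model as a black-box function of two parameters, scores it against an observed value, and minimizes the error first by uniform Monte Carlo sampling and then by the downhill simplex (Nelder-Mead) method; the stand-in model, the parameter names, and their ranges are hypothetical.

```python
# Black-box calibration by Monte Carlo sampling followed by Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

observed = 3.2                                     # e.g. mean observed flow (assumed)

def run_model(params):
    """Hypothetical stand-in for launching a PRMS run with a parameter set."""
    jh_coef, soil_moist_max = params
    return 5.0 - 2.0 * jh_coef + 0.4 * soil_moist_max

def objective(params):
    return (run_model(params) - observed) ** 2

bounds = np.array([[0.005, 0.06], [2.0, 10.0]])    # assumed parameter ranges

# 1) Uniform Monte Carlo sampling over the parameter ranges.
rng = np.random.default_rng(0)
samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 2))
errors = np.array([objective(s) for s in samples])
best_mc = samples[np.argmin(errors)]

# 2) Downhill simplex refinement starting from the best sample.
result = minimize(objective, best_mc, method="Nelder-Mead")
print("Monte Carlo best:", best_mc, "error", errors.min())
print("Nelder-Mead refined:", result.x, "error", result.fun)
```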

  10. Configurations of Common Childhood Psychosocial Risk Factors

    ERIC Educational Resources Information Center

    Copeland, William; Shanahan, Lilly; Costello, E. Jane; Angold, Adrian

    2009-01-01

    Background: Co-occurrence of psychosocial risk factors is commonplace, but little is known about psychiatrically-predictive configurations of psychosocial risk factors. Methods: Latent class analysis (LCA) was applied to 17 putative psychosocial risk factors in a representative population sample of 920 children ages 9 to 17. The resultant class…

  11. Analyzing Longitudinal Item Response Data via the Pairwise Fitting Method

    ERIC Educational Resources Information Center

    Fu, Zhi-Hui; Tao, Jian; Shi, Ning-Zhong; Zhang, Ming; Lin, Nan

    2011-01-01

    Multidimensional item response theory (MIRT) models can be applied to longitudinal educational surveys where a group of individuals are administered different tests over time with some common items. However, computational problems typically arise as the dimension of the latent variables increases. This is especially true when the latent variable…

  12. 76 FR 12979 - Submission for OMB Review: Comment Request; Questionnaire Cognitive Interviewing and Pretesting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... cognitive interviews, focus groups, Pilot household interviews, and experimental research in laboratory and field settings, both for applied questionnaire evaluation and more basic research on response errors in surveys. The most common evaluation method is the cognitive interview, in which a questionnaire design...

  13. Quantitative Measures of Sustainability in Institutions of Higher Education

    ERIC Educational Resources Information Center

    Klein-Banai, Cynthia

    2010-01-01

    The measurement of sustainability for institutions, businesses, regions, and nations is a complex undertaking. There are many disciplinary approaches but sustainability is innately interdisciplinary and the challenge is to apply these approaches in a way that can best measure progress towards sustainability. The most common methods used by…

  14. Development of gas chromatographic methods for the analyses of organic carbonate-based electrolytes

    NASA Astrophysics Data System (ADS)

    Terborg, Lydia; Weber, Sascha; Passerini, Stefano; Winter, Martin; Karst, Uwe; Nowak, Sascha

    2014-01-01

    In this work, novel methods based on gas chromatography (GC) are presented for the investigation of common organic carbonate-based electrolyte systems used in lithium ion batteries. The methods were developed for flame ionization detection (FID) and mass spectrometric detection (MS). Further, headspace (HS) sampling for the investigation of solid samples like electrodes is reported. Limits of detection are reported for FID. Finally, the developed methods were applied to the electrolyte system of commercially available lithium ion batteries as well as to in-house assembled cells.

  15. Modular thought in the circuit analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    Modular thinking provides a holistic simplification method for solving complex problems, and the study of circuits presents a similar challenge: the intricate connections between components can make the solution of a circuit problem appear more complex than it is, even though those connections follow rules. This article focuses on applying modular thinking to the study of circuits. First, it introduces the definition of a two-terminal network and the concept of equivalent conversion of two-terminal networks; it then summarizes modular approaches for common source-resistance hybrid networks and for networks containing controlled sources, lists common modules, and analyzes typical examples.

  16. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional that is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
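
    A hedged sketch of the plug-in idea is given below: the bandwidth of a Gaussian-kernel estimator is chosen from an analytical (normal-reference) approximation of the roughness functional of the pdf's second derivative, which reduces to the familiar Silverman-type rule, and the density is then estimated once. This is an illustration of the general plug-in principle, not the authors' algorithm.

```python
# Plug-in style bandwidth selection with a normal-reference approximation
# of the second-derivative roughness, followed by a single KDE pass.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
n = x.size

# Normal-reference approximation of the roughness of f'' yields the
# bandwidth h = (4 / (3 n))**(1/5) * sigma for a Gaussian kernel.
sigma = min(x.std(ddof=1), (np.percentile(x, 75) - np.percentile(x, 25)) / 1.349)
h = (4.0 / (3.0 * n)) ** 0.2 * sigma

# scipy's gaussian_kde expects the bandwidth as a factor of the data std.
kde = gaussian_kde(x, bw_method=h / x.std(ddof=1))
grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 400)
density = kde(grid)

print(f"plug-in style bandwidth h = {h:.3f}")
print(f"density integrates to {trapezoid(density, grid):.3f}")
```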

  17. Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Larchev, Gregory V.; Lohn, Jason D.

    2006-01-01

    The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.

  18. Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianwei; Zhang, Yong

    2014-12-07

    We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.

  19. Identification of common coexpression modules based on quantitative network comparison.

    PubMed

    Jo, Yousang; Kim, Sanghyeon; Lee, Doheon

    2018-06-13

    Finding common molecular interactions from different samples is essential to understanding diseases and other biological processes. Coexpression networks and their modules directly reflect sample-specific interactions among genes. Therefore, identification of common coexpression networks or modules may reveal the molecular mechanism of complex disease or the relationship between biological processes. However, there has been no quantitative network comparison method for coexpression networks, and the previous methods we examined for other networks cannot be applied to coexpression networks. Therefore, we aimed to propose quantitative comparison methods for coexpression networks and to find common biological mechanisms between Huntington's disease and brain aging with the new method. We proposed two similarity measures for quantitative comparison of coexpression networks. Then, we performed experiments using known coexpression networks. We showed the validity of the two measures and evaluated threshold values for similar coexpression network pairs from the experiments. Using these similarity measures and thresholds, we quantitatively measured the similarity between disease-specific and aging-related coexpression modules and found similar Huntington's disease-aging coexpression module pairs. We identified similar Huntington's disease-aging coexpression module pairs and found that these modules are related to brain development, cell death, and immune response. This suggests that up-regulated cell signalling related to cell death and immune/inflammation response may be the common molecular mechanism in the pathophysiology of HD and normal brain aging in the frontal cortex.

  20. Collaborative derivation of reference intervals for major clinical laboratory tests in Japan.

    PubMed

    Ichihara, Kiyoshi; Yamamoto, Yoshikazu; Hotta, Taeko; Hosogaya, Shigemi; Miyachi, Hayato; Itoh, Yoshihisa; Ishibashi, Midori; Kang, Dongchon

    2016-05-01

    Three multicentre studies of reference intervals were conducted recently in Japan. The Committee on Common Reference Intervals of the Japan Society of Clinical Chemistry sought to establish common reference intervals for 40 laboratory tests which were measured in common in the three studies and regarded as well harmonized in Japan. The study protocols were comparable with recruitment mostly from hospital workers with body mass index ≤28 and no medications. Age and sex distributions were made equal to obtain a final data size of 6345 individuals. Between-subgroup differences were expressed as the SD ratio (between-subgroup SD divided by SD representing the reference interval). Between-study differences were all within acceptable levels, and thus the three datasets were merged. By adopting SD ratio ≥0.50 as a guide, sex-specific reference intervals were necessary for 12 assays. Age-specific reference intervals for females partitioned at age 45 were required for five analytes. The reference intervals derived by the parametric method resulted in appreciable narrowing of the ranges by applying the latent abnormal values exclusion method in 10 items which were closely associated with prevalent disorders among healthy individuals. Sex- and age-related profiles of reference values, derived from individuals with no abnormal results in major tests, showed peculiar patterns specific to each analyte. Common reference intervals for nationwide use were developed for 40 major tests, based on three multicentre studies by advanced statistical methods. Sex- and age-related profiles of reference values are of great relevance not only for interpreting test results, but for applying clinical decision limits specified in various clinical guidelines. © The Author(s) 2015.

  1. Stakeholder analysis and social-biophysical interdependencies for common pool resource management: La Brava Wetland (Argentina) as a case study.

    PubMed

    Romanelli, Asunción; Massone, Héctor E; Escalante, Alicia H

    2011-09-01

    This article gives an account of the implementation of a stakeholder analysis framework at La Brava Wetland Basin, Argentina, in a common-pool resource (CPR) management context. Firstly, the context in which the stakeholder framework was implemented is described. Secondly, a four-step methodology is applied: (1) stakeholder identification, (2) stakeholder differentiation-categorization, (3) investigation of stakeholders' relationships, and (4) analysis of social-biophysical interdependencies. This methodology classifies stakeholders according to their level of influence on the system and their potential in the conservation of natural resources. The main influential stakeholders are La Brava Village residents and tourism-related entrepreneurs, who are empowered to make the more important decisions within the planning process of the ecosystem. While these key players are seen as facilitators of change, other groups (residents of the inner basin and fishermen) are seen mainly as key blockers. The methodology applied for the stakeholder analysis and the evaluation of social-biophysical interdependencies carried out in this article can serve as an encouraging example for experts in the natural sciences to learn and use these methods developed in the social sciences. Major difficulties in applying this method in practice by non-experts, along with some recommendations, are discussed.

  2. Separation of phytochemicals from Helichrysum italicum: An analysis of different isolation techniques and biological activity of prepared extracts.

    PubMed

    Maksimovic, Svetolik; Tadic, Vanja; Skala, Dejan; Zizovic, Irena

    2017-06-01

    Helichrysum italicum presents a valuable source of natural bioactive compounds. In this work, a literature review of terpenes, phenolic compounds, and other less common phytochemicals from H. italicum with regard to application of different separation methods is presented. Data including extraction/separation methods and experimental conditions applied, obtained yields, number of identified compounds, content of different compound groups, and analytical techniques applied are shown as corresponding tables. Numerous biological activities of both isolates and individual compounds are emphasized. In addition, the data reported are discussed, and the directions for further investigations are proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
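    A minimal sketch of the MOVER construction for a risk difference is given below; it combines Wilson limits for each proportion. In a real RDS analysis the per-group estimates and limits would come from an RDS-specific estimator, so the wilson_ci helper and the example numbers are stand-ins rather than the manuscript's procedure.

        import math

        def wilson_ci(p, n, z=1.96):
            """95% Wilson confidence limits for a single proportion."""
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return centre - half, centre + half

        def mover_risk_difference(p1, n1, p2, n2):
            """MOVER interval for p1 - p2 built from the two single-proportion intervals."""
            l1, u1 = wilson_ci(p1, n1)
            l2, u2 = wilson_ci(p2, n2)
            d = p1 - p2
            lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
            upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
            return d, lower, upper

        # Hypothetical prevalences: 0.32 among 210 RDS recruits vs 0.21 among 180 convenience-sample subjects
        print(mover_risk_difference(0.32, 210, 0.21, 180))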

  4. Lattice Boltzmann computation of creeping fluid flow in roll-coating applications

    NASA Astrophysics Data System (ADS)

    Rajan, Isac; Kesana, Balashanker; Perumal, D. Arumuga

    2018-04-01

    The Lattice Boltzmann Method (LBM) has advanced as a class of Computational Fluid Dynamics (CFD) methods used to solve complex fluid systems and heat transfer problems. It has increasingly attracted the interest of researchers in computational physics for solving challenging problems of industrial and academic importance. In this study, LBM is applied to simulate the creeping fluid flow phenomena commonly encountered in manufacturing technologies. In particular, we apply this method to simulate the fluid flow associated with the "meniscus roll coating" application. This prevalent industrial problem, encountered in polymer processing and thin-film coating applications, is modelled as a standard lid-driven cavity problem to which creeping-flow analysis is applied. This incompressible viscous flow problem is studied for various speed ratios (the ratio of upper to lower lid speed) in two configurations of lid movement: parallel and anti-parallel wall motion. The flow exhibits interesting patterns which will help in the design of roll coaters.

  5. Removing inorganics: Common methods have limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorg, T.J.

    1991-06-01

    When EPA sets a regulation (a maximum contaminant level) for a contaminant, it must also specify the best available technology (BAT) that can be used to remove the contaminant. Because the regulations apply to community water systems, the technologies selected are ones commonly used to treat community-sized water systems. Thus, EPA's R&D program has focused its efforts primarily on evaluating community-applied technologies such as conventional coagulation-filtration, lime softening, ion exchange, adsorption, and membrane processes. When a BAT is identified for a specific contaminant, it is frequently listed with its limitations, because the process is often not effective under all water quality conditions. The same limitations would also apply to POU/POE treatment. The paper discusses EPA's regulations on inorganic contaminants, the best available technologies cited by EPA, and the limitations of the processes. Using arsenic as an example, the impact of contaminant chemistry and water quality on removals is presented.

  6. Comparative study of novel versus conventional two-wavelength spectrophotometric methods for analysis of spectrally overlapping binary mixture.

    PubMed

    Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2015-09-05

    Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first method depends upon advanced absorbance subtraction (AAS), while the other method relies on advanced amplitude modulation (AAM); in addition to the well established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of drugs in their pharmaceutical formulations. No interference was observed from common additives and the validity of the methods was tested. The obtained results have been statistically compared to that of official spectrophotometric methods to give a conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Dithizone-modified graphene oxide nano-sheet as a sorbent for pre-concentration and determination of cadmium and lead ions in food.

    PubMed

    Moghadam Zadeh, Hamid Reza; Ahmadvand, Parvaneh; Behbahani, Ali; Amini, Mostafa M; Sayar, Omid

    2015-01-01

    Graphene oxide nano-sheet was modified with dithizone as a novel sorbent for selective pre-concentration and determination of Cd(II) and Pb(II) in food. The sorbent was characterised by various analytical methods and the effective parameters for Cd(II) and Pb(II) adsorption were optimised during this work. The high adsorption capacity and selectivity of this sorbent make the method capable of fast determinations of the Cd(II) and Pb(II) content in complicated matrices, even at μg l⁻¹ levels, using commonly available instrumentation. The precision of this method was < 1.9% from 10 duplicate determinations, and its accuracy was verified using standard reference materials. Finally, this method was applied to the determination of Cd(II) and Pb(II) ions in common food samples and satisfactory results were obtained.

  8. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
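    The following sketch illustrates the common random number idea on a simple birth-death process (not one of the paper's examples): the nominal and perturbed Gillespie SSA runs share the same random seed, which correlates the two trajectories and sharply reduces the variance of the finite-difference sensitivity estimate.

        import numpy as np

        def ssa_birth_death(k_birth, k_death, x0=0, t_end=10.0, seed=0):
            """Gillespie SSA for births at rate k_birth and deaths at rate k_death * x."""
            rng = np.random.default_rng(seed)        # identical seeds realize the CRN coupling
            t, x = 0.0, x0
            while True:
                a1, a2 = k_birth, k_death * x        # propensities
                a0 = a1 + a2
                t += rng.exponential(1.0 / a0)
                if t > t_end:
                    return x
                x += 1 if rng.random() * a0 < a1 else -1

        k, dk, n_runs = 10.0, 0.1, 2000
        est = np.mean([(ssa_birth_death(k + dk, 1.0, seed=s) -
                        ssa_birth_death(k, 1.0, seed=s)) / dk for s in range(n_runs)])
        print(f"CRN finite-difference estimate of d<X>/dk: {est:.3f} (approaches 1.0 near steady state)")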

  9. A sampling framework for incorporating quantitative mass spectrometry data in protein interaction analysis.

    PubMed

    Tucker, George; Loh, Po-Ru; Berger, Bonnie

    2013-10-04

    Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
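    A hedged sketch of the ensemble idea follows. The mapping from spectral counts to detection probabilities and the binary_ppi_score placeholder are hypothetical stand-ins for the authors' noise model and for whichever binary-data scoring method is being wrapped.

        import numpy as np

        def detection_probability(counts, scale=2.0):
            # Hypothetical saturating map from spectral counts to detection probability.
            return 1.0 - np.exp(-np.asarray(counts, float) / scale)

        def binary_ppi_score(binary_matrix):
            # Placeholder for any existing method that scores binary bait-prey data;
            # returning the matrix itself makes the aggregate an observation frequency.
            return binary_matrix.astype(float)

        def ensemble_scores(spectral_counts, n_draws=500, seed=1):
            rng = np.random.default_rng(seed)
            p = detection_probability(spectral_counts)
            draws = rng.random((n_draws,) + p.shape) < p      # ensemble of alternative binary outcomes
            return np.mean([binary_ppi_score(d) for d in draws], axis=0)

        counts = np.array([[0, 1, 12],
                           [1, 0, 3],
                           [12, 3, 0]])                        # toy bait x prey spectral counts
        print(ensemble_scores(counts).round(2))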

  10. Multivariate Analysis of Longitudinal Rates of Change

    PubMed Central

    Bryan, Matthew; Heagerty, Patrick J.

    2016-01-01

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed by Roy and Lin [1]; Proust-Lima, Letenneur and Jacqmin-Gadda [2]; and Gray and Brookmeyer [3] among others. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, Gray and Brookmeyer [3] introduce an “accelerated time” method which assumes that covariates rescale time in longitudinal models for disease progression. In this manuscript we detail an alternative multivariate model formulation that directly structures longitudinal rates of change, and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. PMID:27417129

  11. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that solid particles are dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on the larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  12. Correcting for batch effects in case-control microbiome studies

    PubMed Central

    Gibbons, Sean M.; Duvallet, Claire

    2018-01-01

    High-throughput data generation platforms, like mass-spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
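    A minimal sketch of the percentile-normalization step, assuming each study provides a case matrix and a control matrix of relative abundances (samples by taxa); the data below are synthetic.

        import numpy as np
        from scipy.stats import percentileofscore

        def percentile_normalize(cases, controls):
            """Convert each case value to its percentile within the controls of the same study."""
            cases, controls = np.asarray(cases, float), np.asarray(controls, float)
            out = np.empty_like(cases)
            for j in range(cases.shape[1]):                    # one taxon at a time
                for i in range(cases.shape[0]):
                    out[i, j] = percentileofscore(controls[:, j], cases[i, j], kind="mean")
            return out

        rng = np.random.default_rng(3)
        controls = rng.lognormal(0.0, 1.0, size=(30, 4))       # control relative abundances
        cases = rng.lognormal(0.5, 1.0, size=(20, 4))          # shifted upward in cases
        print(percentile_normalize(cases, controls)[:3].round(1))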

  13. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    PubMed

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.

  14. Geological mapping by geobotanical and geophysical means: a case study from the Bükk Mountains (NE Hungary)

    NASA Astrophysics Data System (ADS)

    Németh, Norbert; Petho, Gabor

    2009-03-01

    Geological mapping of an unexposed area can be supported by indirect methods. Among these, the use of mushrooms as geobotanical indicators and the shallow-penetration electromagnetic VLF method proved to be useful in the Bükk Mountains. Mushrooms have not been applied to geological mapping before. Common species such as Boletus edulis and Leccinum aurantiacum are correlated with siliciclastic and magmatic formations, while Calocybe gambosa is correlated with limestone. The validity of this correlation, observed in the eastern part of the Bükk Mts., was checked at a site with an unexposed, previously unmapped occurrence of siliciclastic rocks indicated only by the mushrooms. The extent and structure of this occurrence were explored with the VLF survey, and a trial-and-error method was applied for the interpretation. The case study presented here demonstrates the effectiveness of combining these relatively simple and inexpensive methods.

  15. Computer-based objective quantitative assessment of pulmonary parenchyma via x-ray CT

    NASA Astrophysics Data System (ADS)

    Uppaluri, Renuka; McLennan, Geoffrey; Sonka, Milan; Hoffman, Eric A.

    1998-07-01

    This paper is a review of our recent studies using a texture-based tissue characterization method called the Adaptive Multiple Feature Method (AMFM). This computerized method is automated and performs tissue classification based upon training acquired on a set of representative examples. The AMFM has been applied to several different discrimination tasks including normal subjects, subjects with interstitial lung disease, smokers, asbestos-exposed subjects, and subjects with cystic fibrosis. The AMFM has also been applied to data acquired using different scanners and scanning protocols. The AMFM has been shown to be successful, and better than other existing techniques, in discriminating the tissues under consideration. We demonstrate that the AMFM is considerably more sensitive and specific in characterizing the lung, especially in the presence of mixed pathology, as compared to more commonly used methods. Evidence is presented suggesting that the AMFM is highly sensitive to some of the earliest disease processes.

  16. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  17. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  18. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    NASA Technical Reports Server (NTRS)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method. The penalty method requires direct matrix solvers, due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The development of the momentum and continuity equations for natural convection in a fluid, a permeable medium, and in a binary alloy undergoing directional solidification will be presented. Finally, results for natural convection in a pure liquid, natural convection in a medium with a constant permeability, and for directional solidification will be presented.
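    The core of any fractional step (projection) scheme is the decoupled pressure solve followed by a velocity correction. The sketch below shows that split spectrally on a doubly periodic grid; it is a generic Chorin-style illustration under those simplifying assumptions, not the finite-element formulation used in the paper.

        import numpy as np

        def spectral_grid(n, dx):
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            return np.meshgrid(k, k)                           # kx varies along axis 1, ky along axis 0

        def divergence(u, v, dx):
            kx, ky = spectral_grid(u.shape[0], dx)
            return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)))

        def project(u_star, v_star, dx, dt):
            """Make the intermediate velocity divergence-free via a pressure Poisson solve."""
            kx, ky = spectral_grid(u_star.shape[0], dx)
            u_hat, v_hat = np.fft.fft2(u_star), np.fft.fft2(v_star)
            div_hat = 1j * kx * u_hat + 1j * ky * v_hat
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                                     # avoid 0/0 for the mean mode
            p_hat = -div_hat / (dt * k2)                       # solves lap(p) = div(u*) / dt
            p_hat[0, 0] = 0.0
            u = np.real(np.fft.ifft2(u_hat - dt * 1j * kx * p_hat))   # u = u* - dt * dp/dx
            v = np.real(np.fft.ifft2(v_hat - dt * 1j * ky * p_hat))   # v = v* - dt * dp/dy
            return u, v

        rng = np.random.default_rng(0)
        n, dx, dt = 64, 1.0 / 64, 1e-3
        u0, v0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
        u1, v1 = project(u0, v0, dx, dt)
        print(f"max|div| before: {np.abs(divergence(u0, v0, dx)).max():.2f}, "
              f"after: {np.abs(divergence(u1, v1, dx)).max():.2e}")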

  19. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  20. Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models

    ERIC Educational Resources Information Center

    Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao

    2010-01-01

    Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…

  1. Pre-Service Teachers' Mindset Beliefs about Student Ability

    ERIC Educational Resources Information Center

    Gutshall, C. Anne

    2014-01-01

    Introduction: We all have beliefs about our ability or intelligence. The extent to which we believe ability is malleable (growth) or stable (fixed) is commonly referred to as our mindset. This research is designed to explore pre-service teachers' mindset beliefs as well as their beliefs when applied to hypothetical student scenarios. Method:…

  2. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  3. Designing Evaluations. 2012 Revision. Applied Research and Methods. GAO-12-208G

    ERIC Educational Resources Information Center

    US Government Accountability Office, 2012

    2012-01-01

    GAO assists congressional decision makers in their deliberations by furnishing them with analytical information on issues and options. Many diverse methodologies are needed to develop sound and timely answers to the questions the Congress asks. To provide GAO evaluators with basic information about the more commonly used methodologies, GAO's…

  4. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  5. Applying Structural Systems Thinking to Frame Perspectives on Social Work Innovation

    ERIC Educational Resources Information Center

    Stringfellow, Erin J.

    2017-01-01

    Objective: Innovation will be key to the success of the Grand Challenges Initiative in social work. A structural systems framework based in system dynamics could be useful for considering how to advance innovation. Method: Diagrams using system dynamics conventions were developed to link common themes across concept papers written by social work…

  6. Aggression and Tantrums in Children with Autism: A Review of Behavioral Treatments and Maintaining Variables

    ERIC Educational Resources Information Center

    Matson, Johnny

    2009-01-01

    Aggression and tantrums are common co-occurring problems with autism. Fortunately, positive developments in the treatment of these challenging and stigmatizing behaviors have been made recently with psychologically-based interventions. Evidence-based methods employ behavior modification, which is also often described as applied behavior analysis…

  7. Enumeration of sugars and sugar alcohols hydroxyl groups by aqueous-based acetylation and MALDI-TOF mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    A method is described for enumerating hydroxyl groups on analytes in aqueous media, and applied to some common polyalcohols (erythritol, mannitol, and xylitol) and selected carbohydrates. The analytes were derivatized in water with vinyl acetate in the presence of sodium phosphate buffer. ...

  8. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...

  9. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...

  10. TOWARD DEVELOPMENT OF A COMMON SOFTWARE APPLICATION PROGRAMMING INTERFACE (API) FOR UNCERTAINTY, SENSITIVITY, AND PARAMETER ESTIMATION METHODS AND TOOLS

    EPA Science Inventory

    The final session of the workshop considered the subject of software technology and how it might be better constructed to support those who develop, evaluate, and apply multimedia environmental models. Two invited presentations were featured along with an extended open discussio...

  11. Improving Survey Methods with Cognitive Interviews in Small- and Medium-Scale Evaluations

    ERIC Educational Resources Information Center

    Ryan, Katherine; Gannon-Slater, Nora; Culbertson, Michael J.

    2012-01-01

    Findings derived from self-reported, structured survey questionnaires are commonly used in evaluation and applied research to inform policy-making and program decisions. Although there are a variety of issues related to the quality of survey evidence (e.g., sampling precision), the validity of response processes--how respondents process thoughts…

  12. Comparison of Pressures Applied by Digital Tourniquets in the Emergency Department

    PubMed Central

    Lahham, Shadi; Tu, Khoa; Ni, Mickey; Tran, Viet; Lotfipour, Shahram; Anderson, Craig L.; Fox, J Christian

    2011-01-01

    Background: Digital tourniquets used in the emergency department have been scrutinized due to complications associated with their use, including neurovascular injury secondary to excessive tourniquet pressure and digital ischemia caused by a forgotten tourniquet. To minimize these risks, a conspicuous tourniquet that applies the least amount of pressure necessary to maintain hemostasis is recommended. Objective: To evaluate the commonly used tourniquet methods, the Penrose drain, rolled glove, the Tourni-cot and the T-Ring, to determine which applies the lowest pressure while consistently preventing digital perfusion. Methods: We measured the circumference of selected digits of 200 adult males and 200 adult females to determine the adult finger size range. We then measured the pressure applied to four representative finger sizes using a pressure monitor and assessed the ability of each method to prevent digital blood flow with a pulse oximeter. Results: We selected four representative finger sizes: 45mm, 65mm, 70mm, and 85mm to test the different tourniquet methods. All methods consistently prevented digital perfusion. The highest pressure recorded for the Penrose drain was 727 mmHg, the clamped rolled glove 439, the unclamped rolled glove 267, Tourni-cot 246, while the T-Ring had the lowest at 151 mmHg and least variable pressures of all methods. Conclusion: All tested methods provided adequate hemostasis. Only the Tourni-cot and T-Ring provided hemostasis at safe pressures across all digit sizes with the T-Ring having a lower overall average pressure. PMID:21691536

  13. Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research.

    PubMed

    Yalch, Matthew M

    2016-03-01

    Several contemporary researchers have noted the virtues of Bayesian methods of data analysis. Although debates continue about whether conventional or Bayesian statistics is the "better" approach for researchers in general, there are reasons why Bayesian methods may be well suited to the study of psychological trauma in particular. This article describes how Bayesian statistics offers practical solutions to the problems of data non-normality, small sample size, and missing data common in research on psychological trauma. After a discussion of these problems and the effects they have on trauma research, this article explains the basic philosophical and statistical foundations of Bayesian statistics and how it provides solutions to these problems using an applied example. Results of the literature review and the accompanying example indicate the utility of Bayesian statistics in addressing problems common in trauma research. Bayesian statistics provides a set of methodological tools and a broader philosophical framework that is useful for trauma researchers. Methodological resources are also provided so that interested readers can learn more. (c) 2016 APA, all rights reserved.
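    As a hedged, textbook-level illustration of one point above (an informative prior stabilizing a small-sample estimate), the sketch below computes a conjugate normal posterior for a mean with known data SD; the prior settings and the eight observations are hypothetical, not the article's analysis.

        import numpy as np

        def normal_posterior(y, sigma, prior_mean, prior_sd):
            """Posterior for a normal mean with known data SD and a normal prior."""
            n = len(y)
            post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
            post_mean = post_var * (prior_mean / prior_sd**2 + n * np.mean(y) / sigma**2)
            return post_mean, np.sqrt(post_var)

        # Hypothetical small trauma-symptom sample (n = 8) on some standardized scale
        y = np.array([2.1, 3.4, 1.8, 2.9, 4.0, 2.2, 3.1, 2.6])
        m, s = normal_posterior(y, sigma=1.0, prior_mean=2.0, prior_sd=0.5)
        print(f"posterior mean {m:.2f}, posterior SD {s:.2f}, "
              f"95% credible interval [{m - 1.96 * s:.2f}, {m + 1.96 * s:.2f}]")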

  14. Discovery of cancer common and specific driver gene sets

    PubMed Central

    2017-01-01

    Cancer is known as a disease mainly caused by gene alterations. Discovery of mutated driver pathways or gene sets is becoming an important step toward understanding the molecular mechanisms of carcinogenesis. However, systematically investigating the commonalities and specificities of driver gene sets among multiple cancer types remains a great challenge, yet such an investigation will undoubtedly benefit the deciphering of cancers and will be helpful for personalized therapy and precision medicine in cancer treatment. In this study, we propose two optimization models to discover, de novo, common driver gene sets among multiple cancer types (ComMDP) and driver gene sets specific to one or several cancer types relative to other cancers (SpeMDP). We first apply ComMDP and SpeMDP to simulated data to validate their efficiency. Then, we further apply these methods to 12 cancer types from The Cancer Genome Atlas (TCGA) and obtain several biologically meaningful driver pathways. As examples, we construct a common cancer pathway model for BRCA and OV, infer a complex driver pathway model for BRCA carcinogenesis based on common driver gene sets of BRCA with eight cancer types, and investigate specific driver pathways of the liquid cancer lymphoblastic acute myeloid leukemia (LAML) versus other solid cancer types. In these processes more candidate cancer genes are also found. PMID:28168295

  15. Self Calibrated Wireless Distributed Environmental Sensory Networks

    PubMed Central

    Fishbain, Barak; Moreno-Centeno, Erick

    2016-01-01

    Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in a new way. However, the WDESNs' calibration process is a major obstacle to their becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, offering a valuable tool for distributed environmental monitoring data aggregation. Applying the method to an extensive real-life air-pollution dataset showed markedly more accurate results than the common practice and the state-of-the-art.

  16. Testing Common Envelopes on Double White Dwarf Binaries

    NASA Astrophysics Data System (ADS)

    Nandez, Jose L. A.; Ivanova, Natalia; Lombardi, James C., Jr.

    2015-06-01

    The formation of a double white dwarf binary likely involves a common envelope (CE) event between a red giant and a white dwarf (WD) during the most recent episode of Roche lobe overflow mass transfer. We study the role of recombination energy with hydrodynamic simulations of such stellar interactions. We find that the recombination energy helps to expel the common envelope entirely, while if recombination energy is not taken into account, a significant fraction of the common envelope remains bound. We apply our numerical methods to constrain the progenitor system for WD 1101+364, a double WD binary that has a well-measured mass ratio of q=0.87±0.03 and an orbital period of 0.145 days. Our best-fit progenitor for the pre-common envelope donor is a 1.5 M⊙ red giant.

  17. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
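    A minimal sketch of the simultaneous Gaussian-exponential idea: a single decay curve is fit as the sum of a Gaussian (solid-like) and an exponential (fluid-like) component. The published SGE inversion solves for full relaxation-time distributions, so this two-component least-squares fit on synthetic data is only an illustration of the mixed decay model.

        import numpy as np
        from scipy.optimize import curve_fit

        def sge_model(t, a_g, t2g, a_e, t2e):
            """Sum of a Gaussian decay (solid-like) and an exponential decay (fluid-like)."""
            return a_g * np.exp(-(t / t2g) ** 2) + a_e * np.exp(-t / t2e)

        t = np.linspace(0.01, 50.0, 500)                       # hypothetical acquisition times, ms
        rng = np.random.default_rng(7)
        signal = sge_model(t, 0.6, 1.0, 0.4, 10.0) + 0.005 * rng.standard_normal(t.size)
        popt, _ = curve_fit(sge_model, t, signal, p0=[0.5, 0.5, 0.5, 5.0],
                            bounds=(0, [2, 20, 2, 100]))
        print(dict(zip(["A_gauss", "T2_gauss", "A_exp", "T2_exp"], popt.round(3))))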

  18. Two complementary reversed-phase separations for comprehensive coverage of the semipolar and nonpolar metabolome.

    PubMed

    Naser, Fuad J; Mahieu, Nathaniel G; Wang, Lingjue; Spalding, Jonathan L; Johnson, Stephen L; Patti, Gary J

    2018-02-01

    Although it is common in untargeted metabolomics to apply reversed-phase liquid chromatography (RPLC) and hydrophilic interaction liquid chromatography (HILIC) methods that have been systematically optimized for lipids and central carbon metabolites, here we show that these established protocols provide poor coverage of semipolar metabolites because of inadequate retention. Our objective was to develop an RPLC approach that improved detection of these metabolites without sacrificing lipid coverage. We initially evaluated columns recently released by Waters under the CORTECS line by analyzing 47 small-molecule standards that evenly span the nonpolar and semipolar ranges. An RPLC method commonly used in untargeted metabolomics was considered a benchmarking reference. We found that highly nonpolar and semipolar metabolites cannot be reliably profiled with any single method because of retention and solubility limitations of the injection solvent. Instead, we optimized a multiplexed approach using the CORTECS T3 column to analyze semipolar compounds and the CORTECS C8 column to analyze lipids. Strikingly, we determined that combining these methods allowed detection of 41 of the total 47 standards, whereas our reference RPLC method detected only 10 of the 47 standards. We then applied credentialing to compare method performance at the comprehensive scale. The tandem method showed more than a fivefold increase in credentialing coverage relative to our RPLC benchmark. Our results demonstrate that comprehensive coverage of metabolites amenable to reversed-phase separation necessitates two reconstitution solvents and chromatographic methods. Thus, we suggest complementing HILIC methods with a dual T3 and C8 RPLC approach to increase coverage of semipolar metabolites and lipids for untargeted metabolomics. Graphical abstract: Analysis of semipolar and nonpolar metabolites necessitates two reversed-phase chromatography (RPLC) methods, which extend metabolome coverage more than fivefold for untargeted profiling.

  19. An inquiry into computer understanding

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter

    1988-01-01

    The paper examines issues connected with the choice of the best method for representing and reasoning about common sense. McDermott (1978) has shown that a direct translation of common sense reasoning into logical form leads to insurmountable difficulties. It is shown, in the present work, that if Bayesian probability is used instead of logic as the language of such reasoning, none of the technical difficulties found in using logic arise. Bayesian inference is applied to a simple example of linguistic information to illustrate the potential of this type of inference for artificial intelligence.

  20. Determining the prevalence of inv-positive and ail-positive Yersinia enterocolitica in pig tonsils using PCR and culture methods.

    PubMed

    Stachelska, Milena Alicja

    2017-01-01

    Yersiniosis is believed to be the third most common intestinal zoonosis in the European Union, after campylobacteriosis and salmonellosis. Yersinia enterocolitica is the most common species responsible for human infections. Pigs are regarded as the biggest reservoir of pathogenic Y. enterocolitica strains, which are mainly isolated from pig tonsils. The aim of this paper is to examine the prevalence of inv-positive and ail-positive Y. enterocolitica in pigs which were slaughtered in a Polish abattoir. Real-time PCR and culture methods were used to assess the prevalence of pathogenic Y. enterocolitica strains in pig tonsils. Real-time PCR was applied to detect inv-positive and ail-positive Y. enterocolitica. Y. enterocolitica was also isolated by applying direct plating, unselective (tryptic soy broth) and selective (irgasan-ticarcillin-potassium chlorate bouillon) enrichment. A total of 180 pigs were studied, of which 85% and 32% respectively were found to be infected with inv-positive and ail-positive Y. enterocolitica. The 92 inv-positive and ail-positive isolates, from 57 culture-positive tonsils, underwent bio- and serotyping. The most common was bioserotype 4/O:3, which was found in 53 (93%) out of 57 culture-positive tonsils. Strains of bioserotypes 2/O:5, 2/O:9 and 2/O:5.27 occurred in significantly lower numbers. The prevalence of inv-positive and ail-positive Y. enterocolitica was found to be high in the tonsils of slaughtered pigs, using real-time PCR. The real-time PCR method for the detection and identification of pathogenic Y. enterocolitica is sensitive and specific, which has been verified by specificity and sensitivity tests using the pure cultures. Serotypes were distinguished from each other using PCR serotyping. The PCR method was essential in forming our conclusions.

  1. JRmGRN: Joint reconstruction of multiple gene regulatory networks with common hub genes using data from multiple tissues or conditions.

    PubMed

    Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong

    2018-04-30

    Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models or methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new Gaussian graphical model for the joint reconstruction of multiple gene regulatory networks (JRmGRN) that highlights hub genes, using gene expression data from several tissues/conditions. Under the Gaussian graphical model framework, the JRmGRN method constructs the GRNs by maximizing a penalized log likelihood function. We formulated it as a convex optimization problem, and then solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.

  2. Detection of Unknown Crypts under the Floor in the Holy Trinity Church (Dominican Monastery) in Krakow, Poland

    NASA Astrophysics Data System (ADS)

    Strzępowicz, Anna; Łyskowski, Mikołaj; Ziętek, Jerzy; Tomecka-Suchoń, Sylwia

    2018-03-01

    The GPR surveying method belongs to non-invasive and quick geophysical methods, applied also in archaeological prospection. It allows for detecting archaeological artefacts buried under historical layers, and also those which can be found within buildings of historical value. Most commonly, just as in this particular case, it is used in churches, where other non-invasive localisation methods cannot be applied. In a majority of cases, surveys bring about highly positive results, enabling the site and size of a specific object to be indicated. A good example are the results obtained from the measurements carried out in the Basilica of Holy Trinity, belonging to the Dominican Monastery in Krakow. They allowed for confirming the location of the already existing crypts and for indicating so-far unidentified objects.

  3. Microbial sequencing methods for monitoring of anaerobic treatment of antibiotics to optimize performance and prevent system failure.

    PubMed

    Aydin, Sevcan

    2016-06-01

    As a result of developments in molecular technologies and the use of sequencing technologies, analysis of anaerobic microbial communities in biological treatment processes has become increasingly prevalent. This review examines the ways in which microbial sequencing methods can be applied to achieve an extensive understanding of the phylogenetic and functional characteristics of microbial assemblages in anaerobic reactors when the substrate is contaminated by antibiotics, which are among the most important toxic compounds. It discusses some of the advantages and disadvantages associated with the more commonly employed microbial sequencing techniques and assesses how a combination of the existing methods may be applied to develop a more comprehensive understanding of microbial communities and improve the validity and depth of the results, for the enhancement of the stability of anaerobic reactors.

  4. Ground roll attenuation using polarization analysis in the t-f-k domain

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, Y.

    2017-07-01

    S waves travel slower than P waves and have a lower dominant frequency. Therefore, applying common techniques such as time-frequency filtering and f-k filtering to separate S waves from ground roll is difficult, because ground roll is also characterized by slow velocity and low frequency. In this study, we present a polarization filtering method for attenuating ground roll based on the t-f-k transform. We describe the particle motion of the waves by complex vector signals. Each pair of frequency components of the complex signal, whose frequencies have the same absolute value but different signs, indicates an elliptical or linear motion, and the polarization parameters of that motion are explicitly related to the two Fourier coefficients. We then extend these concepts to the t-f-k domain and propose a polarization filtering method for ground roll attenuation based on the t-f-k transform. The proposed approach automatically defines time-varying reject zones on the f-k panel at different times as a function of the reciprocal ellipticity. Four attributes (time, frequency, apparent velocity, and polarization) are used to identify and extract the ground roll simultaneously. Thus, the ground roll and body waves can be separated as long as they are dissimilar in one of these attributes. We compare our method with commonly used filtering techniques by applying the methods to synthetic and real seismic data. The results indicate that our method can attenuate ground roll while preserving body waves more effectively than the other methods.
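    A hedged sketch of the per-trace polarization measure described above: treating a two-component record as the complex signal z = x + i*y, each pair of Fourier coefficients at +f and -f defines an ellipse whose axis ratio distinguishes elliptical ground roll from more linearly polarized arrivals. The full method additionally works in the t-f-k domain with time-varying reject zones, which this fragment does not reproduce.

        import numpy as np

        def ellipticity_spectrum(x, y, dt):
            """Return positive frequencies and the ellipse axis ratio b/a for each +/- frequency pair."""
            z = np.asarray(x, float) + 1j * np.asarray(y, float)
            Z = np.fft.fft(z)
            n = len(z)
            freqs = np.fft.fftfreq(n, d=dt)
            pos = np.arange(1, n // 2)                         # skip DC and Nyquist
            r_plus, r_minus = np.abs(Z[pos]), np.abs(Z[-pos])
            a = r_plus + r_minus                               # semi-major axis
            b = np.abs(r_plus - r_minus)                       # semi-minor axis
            return freqs[pos], b / np.maximum(a, 1e-12)

        dt = 0.002
        t = np.arange(0, 1.0, dt)
        # Hypothetical 10 Hz elliptical "ground roll" plus 40 Hz linearly polarized "body wave"
        x = np.cos(2 * np.pi * 10 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)
        y = 0.7 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)
        f, ratio = ellipticity_spectrum(x, y, dt)
        for target in (10, 40):
            i = np.argmin(np.abs(f - target))
            print(f"{f[i]:.0f} Hz: b/a = {ratio[i]:.2f}")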

  5. Possibility of wax control techniques in Indonesian oil fields

    NASA Astrophysics Data System (ADS)

    Abdurrahman, M.; Ferizal, F. H.; Husna, U. Z.; Pangaribuan, L.

    2018-03-01

    Wax is a common problem that can reduce oil production, especially in reservoirs with high paraffin content. When the temperature of the crude oil drops below the pour point, wax molecules begin to precipitate rapidly. The impacts of this problem include clogging of production equipment, sealing off of pores in the reservoir, and decreased production flow rate. In order to solve the wax problem, several methods have been applied in oil fields around the world, for example chemical methods in the Jiangsu field (China) and the Mumbai High field (India), hot water in the Mangala field (India), the magnetic method in the Daqing field (China), water-dispersible treatment in the Bakken basin (US), and microbial treatment in the Jidong field (China). In general, crude oils in Indonesia have wax contents between 10% and 39% and pour points of 22°C-49°C. Hot water and chemical methods are commonly used to solve wax problems in Indonesian oil fields. However, the primary proposed solution is the magnetic method, with water-dispersible treatment as the secondary solution.

  6. A new colorimetric DPPH• scavenging activity method with no need for a spectrophotometer applied on synthetic and natural antioxidants and medicinal herbs.

    PubMed

    Akar, Zeynep; Küçük, Murat; Doğan, Hacer

    2017-12-01

    2,2-Diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging, the most commonly used antioxidant method with more than seventeen thousand articles cited, is very practical; however, as with most assays, it has the major disadvantage of dependence on a spectrophotometer. To overcome this drawback, a colorimetric determination of antioxidant activity using a scanner and the freely available ImageJ software was developed. In this new method, mixtures of solutions of DPPH• and standard antioxidants or extracts of common medicinal herbs were dropped onto TLC plates after an incubation period. The spot images were evaluated with ImageJ software to determine CSC50 values, the sample concentrations providing 50% colour reduction, which were very similar to the SC50 values obtained with the spectrophotometric method. The advantages of the new method are the use of lower amounts of reagents and solvents, no need for costly spectrophotometers, and thus significantly lowered costs, and convenient implementation in any environment and situation.
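    A minimal sketch of how a CSC50-type value could be read off scanner-digitized spot intensities: the percent colour reduction at each antioxidant concentration is interpolated to the concentration giving 50% reduction. The intensities, concentrations, and blank value below are hypothetical, not data from the study.

        import numpy as np

        def csc50(concentrations, spot_intensities, blank_intensity):
            """blank_intensity: mean intensity of DPPH-only spots (0% scavenging)."""
            reduction = 100.0 * (1.0 - np.asarray(spot_intensities, float) / blank_intensity)
            order = np.argsort(reduction)                      # np.interp needs increasing x values
            return np.interp(50.0, reduction[order], np.asarray(concentrations, float)[order])

        conc = np.array([5, 10, 20, 40, 80])                   # µg/mL, hypothetical dilution series
        intensity = np.array([230, 190, 140, 90, 50])          # mean spot intensity from the scanned image
        print(f"CSC50 ~ {csc50(conc, intensity, blank_intensity=250):.1f} µg/mL")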

  7. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
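    A one-dimensional sketch of the two-projection iteration in the Gerchberg-Papoulis style: the estimate is alternately projected onto the set of signals that agree with the observed samples and onto the set of band-limited signals. The actual study restores 2-D multichannel microwave images with additional constraints; this toy example only shows the mechanics of iterating between the two subspaces.

        import numpy as np

        def gp_restore(observed, known_mask, band_mask, n_iter=200):
            """Alternate between the band-limit projection and the known-data projection."""
            est = np.where(known_mask, observed, 0.0)
            for _ in range(n_iter):
                est = np.real(np.fft.ifft(np.fft.fft(est) * band_mask))   # projection 1: band limit
                est = np.where(known_mask, observed, est)                  # projection 2: known samples
            return est

        n = 256
        t = np.arange(n)
        truth = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
        known = np.zeros(n, dtype=bool)
        known[: n // 2] = True                                 # only half of the samples are observed
        band = (np.abs(np.fft.fftfreq(n)) <= 10 / n).astype(float)   # band limit assumed known
        restored = gp_restore(np.where(known, truth, 0.0), known, band)
        err = np.sqrt(np.mean((restored - truth)[~known] ** 2))
        print(f"RMS error on the unobserved half: {err:.3e}")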

  8. Enhancing healthcare process design with human factors engineering and reliability science, part 2: applying the knowledge to clinical documentation systems.

    PubMed

    Boston-Fleischhauer, Carol

    2008-02-01

    The demand to redesign healthcare processes that achieve efficient, effective, and safe results is never-ending. Part 1 of this 2-part series introduced human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare organizations. In part 2, the author applies this knowledge to one of the most common operational processes in healthcare: clinical documentation. Specific implementation strategies and anticipated results are discussed, along with organizational challenges and recommended executive responses.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, W.W.; Sullivan, H.H.

    Electroless nickel-plate characteristics are substantially influenced by percent phosphorus concentrations. Available ASTM analytical methods are designed for phosphorus concentrations of less than one percent, compared to the 4.0 to 20.0% concentrations common in electroless nickel plate. A variety of analytical adaptations are applied throughout the industry, resulting in poor data continuity. This paper presents a statistical comparison of five analytical methods and recommends accurate and precise procedures for use in percent phosphorus determinations in electroless nickel plate. 2 figures, 1 table.

  10. Discovering Single Nucleotide Polymorphisms Regulating Human Gene Expression Using Allele Specific Expression from RNA-seq Data

    PubMed Central

    Kang, Eun Yong; Martin, Lisa J.; Mangul, Serghei; Isvilanonda, Warin; Zou, Jennifer; Ben-David, Eyal; Han, Buhm; Lusis, Aldons J.; Shifman, Sagiv; Eskin, Eleazar

    2016-01-01

    The study of the genetics of gene expression is of considerable importance to understanding the nature of common, complex diseases. The most widely applied approach to identifying relationships between genetic variation and gene expression is the expression quantitative trait loci (eQTL) approach. Here, we increased the computational power of eQTL with an alternative and complementary approach based on analyzing allele specific expression (ASE). We designed a novel analytical method to identify cis-acting regulatory variants based on genome sequencing and measurements of ASE from RNA-sequencing (RNA-seq) data. We evaluated the power and resolution of our method using simulated data. We then applied the method to map regulatory variants affecting gene expression in lymphoblastoid cell lines (LCLs) from 77 unrelated northern and western European individuals (CEU), which were part of the HapMap project. A total of 2309 SNPs were identified as being associated with ASE patterns. The SNPs associated with ASE were enriched within promoter regions and were significantly more likely to signal strong evidence for a regulatory role. Finally, among the candidate regulatory SNPs, we identified 108 SNPs that were previously associated with human immune diseases. With further improvements in quantifying ASE from RNA-seq, the application of our method to other datasets is expected to accelerate our understanding of the biological basis of common diseases. PMID:27765809

  11. Probing plasmodesmata function with biochemical inhibitors.

    PubMed

    White, Rosemary G

    2015-01-01

    To investigate plasmodesmata (PD) function, a useful technique is to monitor the effect on cell-to-cell transport of applying an inhibitor of a physiological process, protein, or other cell component of interest. Changes in PD transport can then be monitored in one of several ways, most commonly by measuring the cell-to-cell movement of fluorescent tracer dyes or of free fluorescent proteins. Effects on PD structure can be detected in thin sections of embedded tissue observed using an electron microscope, most commonly a Transmission Electron Microscope (TEM). This chapter outlines commonly used inhibitors, methods for treating different tissues, how to detect altered cell-to-cell transport and PD structure, and important caveats.

  12. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. II. The rejection of common mode forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.

    'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor.

  13. Method and apparatus for removal of gaseous, liquid and particulate contaminants from molten metals

    DOEpatents

    Hobson, D.O.; Alexeff, I.; Sikka, V.K.

    1987-08-10

    Method and apparatus for removal of nonelectrically-conducting gaseous, liquid, and particulate contaminants from molten metal compositions by applying a force thereto. The force (commonly referred to as the Lorentz Force) exerted by simultaneous application of an electric field and a magnetic field on a molten conductor causes an increase, in the same direction as the force, in the apparent specific gravity thereof, but does not affect the nonconducting materials. This difference in apparent densities causes the nonconducting materials to "float" in the opposite direction from the Lorentz Force at a rapid rate. Means are further provided for removal of the contaminants and prevention of stirring due to rotational forces generated by the applied fields. 6 figs.

  14. Method and apparatus for removal of gaseous, liquid and particulate contaminants from molten metals

    DOEpatents

    Hobson, David O.; Alexeff, Igor; Sikka, Vinod K.

    1988-01-01

    Method and apparatus for removal of nonelectrically-conducting gaseous, liquid, and particulate contaminants from molten metal compositions by applying a force thereto. The force (commonly referred to as the Lorentz Force) exerted by simultaneous application of an electric field and a magnetic field on a molten conductor causes an increase, in the same direction as the force, in the apparent specific gravity thereof, but does not affect the nonconducting materials. This difference in apparent densities causes the nonconducting materials to "float" in the opposite direction from the Lorentz Force at a rapid rate. Means are further provided for removal of the contaminants and prevention of stirring due to rotational forces generated by the applied fields.

  15. BOREHOLE NEUTRON ACTIVATION: THE RARE EARTHS.

    USGS Publications Warehouse

    Mikesell, J.L.; Senftle, F.E.

    1987-01-01

    Neutron-induced borehole gamma-ray spectroscopy has been widely used as a geophysical exploration technique by the petroleum industry, but its use for mineral exploration is not as common. Nuclear methods can be applied to mineral exploration, for determining stratigraphy and bed correlations, for mapping ore deposits, and for studying mineral concentration gradients. High-resolution detectors are essential for mineral exploration, and by using them an analysis of the major element concentrations in a borehole can usually be made. A number of economically important elements can be detected at typical ore-grade concentrations using this method. Because of the application of the rare-earth elements to high-temperature superconductors, these elements are examined in detail as an example of how nuclear techniques can be applied to mineral exploration.

  16. OARSI Clinical Trials Recommendations: Hand imaging in clinical trials in osteoarthritis.

    PubMed

    Hunter, D J; Arden, N; Cicuttini, F; Crema, M D; Dardzinski, B; Duryea, J; Guermazi, A; Haugen, I K; Kloppenburg, M; Maheu, E; Miller, C G; Martel-Pelletier, J; Ochoa-Albíztegui, R E; Pelletier, J-P; Peterfy, C; Roemer, F; Gold, G E

    2015-05-01

    Tremendous advances have occurred in our understanding of the pathogenesis of hand osteoarthritis (OA) and these are beginning to be applied to trials targeted at modification of the disease course. The purpose of this expert opinion, consensus-driven exercise is to provide detail on how one might use and apply hand imaging assessments in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and sequence artifacts); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, validity); recommendations for trials; and research recommendations. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  17. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among the excipients, the spectral region between 250 and 290 nm was selected. Recovery for the vasoconstrictors was 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
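
    A minimal sketch of a partial least squares calibration for two heavily overlapped components, using scikit-learn on synthetic spectra; all band positions, concentrations, and noise levels are assumptions for illustration, not the authors' data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(250, 290, 81)          # nm, selected spectral region

# Synthetic, heavily overlapped pure-component spectra (Gaussian bands).
s1 = np.exp(-((wavelengths - 268) / 8.0) ** 2)   # component 1
s2 = np.exp(-((wavelengths - 274) / 9.0) ** 2)   # component 2

# Calibration set: known concentration pairs and their mixture spectra plus noise.
C = rng.uniform(1.0, 10.0, size=(25, 2))
X = C @ np.vstack([s1, s2]) + rng.normal(0, 0.01, size=(25, wavelengths.size))

pls = PLSRegression(n_components=2).fit(X, C)

# Predict the two analytes in an "unknown" mixture (true values 4.0 and 7.0).
x_unknown = np.array([4.0, 7.0]) @ np.vstack([s1, s2])
print(pls.predict(x_unknown.reshape(1, -1)))
```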

  18. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models, extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique indeed shows an improvement of 50-80% over TR.
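
    The abstract does not give the details of the significance-test procedure, so the sketch below only illustrates the Tikhonov-regularized (ridge) baseline that the authors compare against, applied to a generic, nearly collinear least-squares problem standing in for the RFM normal equations; all values are assumed.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 (Tikhonov / ridge regularization)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-conditioned system standing in for the RFM normal equations.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=100)   # nearly collinear columns
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=100)

for lam in (0.0, 1e-3, 1e-1):
    x = tikhonov_solve(A, b, lam) if lam > 0 else np.linalg.lstsq(A, b, rcond=None)[0]
    print(f"lambda = {lam:g}, parameter error = {np.linalg.norm(x - x_true):.3f}")
```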

  19. Intra-grain Common Pb Correction and Detrital Apatite U-Pb Dating via LA-ICPMS Depth Profiling

    NASA Astrophysics Data System (ADS)

    Boyd, P. D.; Galster, F.; Stockli, D. F.

    2017-12-01

    Apatite is a common accessory phase in igneous and sedimentary rocks. While apatite is widely employed as a low-temperature thermochronometric tool, it has been increasingly utilized to constrain moderate-temperature cooling histories by U-Pb dating. Apatite U-Pb is characterized by a thermal sensitivity window of 375-550°C. This unique temperature window recorded by the apatite U-Pb system, and the near-ubiquitous presence of apatite in igneous and clastic sedimentary rocks, make it a powerful tool able to illuminate mid-crustal tectono-thermal processes. However, as apatite incorporates only modest amounts of U and Th (1-10s of ppm), the significant amount of non-radiogenic "common" Pb incorporated during its formation presents a major hurdle for apatite U-Pb dating. In bedrock samples, common Pb in apatite can be corrected for by measuring Pb in a cogenetic mineral phase, such as feldspar, that does not incorporate U, or by determining a common Pb composition from multiple analyses in Tera-Wasserburg space. While these methods for common Pb correction in apatite can work for igneous samples, they cannot be applied to detrital apatite in sedimentary rocks with variable common Pb compositions. The obstacle of common Pb in apatite has hindered the application of detrital apatite U-Pb dating in provenance studies, despite the fact that it would be a powerful tool. This study presents a new method for the in situ correction of common Pb in apatite through the utilization of novel LA-ICP-MS depth profiling, which can recover U-Pb ratios at micron-scale spatial resolution during ablation of a grain. Due to the intra-grain U variability in apatite, a mixing line for a single grain can be generated in Tera-Wasserburg Concordia space. As a case study, apatite from a Variscan alpine granite was analyzed using both the single- and multi-grain methods, with both methods giving identical results. As a second case study, the intra-grain method was then applied to detrital apatite from the Swiss Northern Alpine Foreland Basin, where the common Pb compositions and age spectra of detrital apatite grains were elucidated. The novel intra-grain apatite method enables the correction for common Pb in detrital apatite, making it feasible to incorporate detrital apatite U-Pb dating in provenance and source-to-sink studies.
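
    The mixing-line idea can be illustrated with a toy fit in Tera-Wasserburg coordinates: a straight line through intra-grain spot measurements whose y-intercept approximates the common-Pb 207Pb/206Pb composition. The values below are invented for illustration, and the unweighted fit and assumed radiogenic ratio are simplifications of the authors' depth-profiling workflow.

```python
import numpy as np

# Hypothetical intra-grain spot analyses in Tera-Wasserburg coordinates:
# x = 238U/206Pb, y = 207Pb/206Pb.  Spots with more common Pb plot at low x, high y.
x = np.array([2.0, 5.0, 9.0, 14.0, 18.0])
y = np.array([0.74, 0.65, 0.53, 0.38, 0.27])

# Unweighted straight-line fit (a real workflow would weight by analytical errors).
slope, intercept = np.polyfit(x, y, 1)
print(f"common-Pb 207Pb/206Pb (y-intercept) ≈ {intercept:.3f}")

# The radiogenic end-member lies where the line meets the concordia curve; here we
# simply report the x value at an assumed radiogenic 207Pb/206Pb of 0.046 (a made-up
# value for this sketch, not a general constant).
x_radiogenic = (0.046 - intercept) / slope
print(f"approximate radiogenic 238U/206Pb ≈ {x_radiogenic:.1f}")
```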

  20. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    PubMed

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
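
    A simplified sketch of the logistic-regression approach to DIF using statsmodels: an item response is modelled from the matching score, group membership, and their interaction, and the group terms are examined. A dichotomous item and simulated data are used here for illustration only; the study itself applied ordinal models to the HADS items.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
group = rng.integers(0, 2, n)                     # e.g., sex coded 0/1
theta = rng.normal(size=n)                        # latent trait
total = theta + rng.normal(scale=0.5, size=n)     # matching variable (scale score)

# Simulate a dichotomous item with uniform DIF: group 1 endorses more often.
logit = 1.2 * theta + 0.6 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"item": item, "total": total, "group": group})

base = smf.logit("item ~ total", df).fit(disp=0)
dif = smf.logit("item ~ total + group + total:group", df).fit(disp=0)
print("uniform DIF coefficient (group):", round(dif.params["group"], 3))
print("likelihood-ratio chi-square:", round(2 * (dif.llf - base.llf), 2))
```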

  1. Physical and Clinical Evaluation of Hip Spica Cast applied with Three-slab Technique using Fibreglass Material

    PubMed Central

    Bitar, KM; Ferdhany, ME; Saw, A

    2016-01-01

    Introduction: Hip spica casting is an important component of treatment for developmental dysplasia of the hip (DDH) and popular treatment method for femur fractures in children. Breakage at the hip region is a relatively common problem of this cast. We have developed a three-slab technique of hip spica application using fibreglass as the cast material. The purpose of this review was to evaluate the physical durability of the spica cast and skin complications with its use. Methodology: A retrospective review of children with various conditions requiring hip spica immobilisation which was applied using our method. Study duration was from 1st of January 2014 until 31st December 2015. Our main outcomes were cast breakage and skin complications. For children with hip instability, the first cast would be changed after one month, and the second cast about two months later. Results: Twenty-one children were included, with an average age of 2.2 years. The most common indication for spica immobilisation was developmental dysplasia of the hip. One child had skin irritation after spica application. No spica breakage was noted. Conclusion: This study showed that the three-slab method of hip spica cast application using fibreglass material was durable and safe with low risk of skin complications. PMID:28553442

  2. Participatory Design in Gerontechnology: A Systematic Literature Review.

    PubMed

    Merkel, Sebastian; Kucharski, Alexander

    2018-05-19

    Participatory design (PD) is widely used within gerontechnology but there is no common understanding about which methods are used for what purposes. This review aims to examine what different forms of PD exist in the field of gerontechnology and how these can be categorized. We conducted a systematic literature review covering several databases. The search strategy was based on 3 elements: (1) participatory methods and approaches with (2) older persons aiming at developing (3) technology for older people. Our final review included 26 studies representing a variety of technologies designed/developed and methods/instruments applied. According to the technologies, the publications reviewed can be categorized into 3 groups: studies that (1) use already existing technology with the aim of finding new ways of use; (2) aim at creating new devices; (3) test and/or modify prototypes. The implementation of PD depends on why a participatory approach is applied, who is involved as future user(s), when those future users are involved, and how they are incorporated into the innovation process. There are multiple ways, methods, and instruments to integrate users into the innovation process; which methods should be applied depends on the context. However, most studies do not evaluate whether participatory approaches lead to better acceptance and/or use of the co-developed products. Therefore, participatory design should follow a comprehensive strategy, starting with the users' needs and ending with an evaluation of whether the applied methods have led to better results.

  3. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    PubMed

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Optimizing Estimates of Instantaneous Heart Rate from Pulse Wave Signals with the Synchrosqueezing Transform.

    PubMed

    Wu, Hau-Tieng; Lewis, Gregory F; Davila, Maria I; Daubechies, Ingrid; Porges, Stephen W

    2016-10-17

    With recent advances in sensor and computer technologies, the ability to monitor peripheral pulse activity is no longer limited to the laboratory and clinic. Now inexpensive sensors, which interface with smartphones or other computer-based devices, are expanding into the consumer market. When appropriate algorithms are applied, these new technologies enable ambulatory monitoring of dynamic physiological responses outside the clinic in a variety of applications including monitoring fatigue, health, workload, fitness, and rehabilitation. Several of these applications rely upon measures derived from peripheral pulse waves measured via contact or non-contact photoplethysmography (PPG). As technologies move from contact to non-contact PPG, there are new challenges. The technology necessary to estimate average heart rate over a few seconds from a non-contact PPG is available. However, a technology to precisely measure instantaneous heart rate (IHR) from non-contact sensors, on a beat-to-beat basis, is more challenging. The objective of this paper is to develop an algorithm with the ability to accurately monitor IHR from peripheral pulse waves, which provides an opportunity to measure the neural regulation of the heart from the beat-to-beat heart rate pattern (i.e., heart rate variability). The adaptive harmonic model is applied to model the contact or non-contact PPG signals, and a new methodology, the Synchrosqueezing Transform (SST), is applied to extract IHR. The body sway rhythm inherent in the non-contact PPG signal is modeled and handled by the notion of a wave-shape function. The SST optimizes the extraction of IHR from the PPG signals, and the technique functions well even during periods of poor signal-to-noise ratio. We contrast the contact and non-contact indices of PPG-derived heart rate with a criterion electrocardiogram (ECG). ECG and PPG signals were monitored in 21 healthy subjects performing tasks with different physical demands. The root mean square error of IHR estimated by SST is significantly better than that of commonly applied methods such as the autoregressive (AR) method. In the walking situation, where the AR method fails, SST still provides a reasonably good result. The SST-processed PPG data provided an accurate estimate of the ECG-derived IHR and consistently performed better than commonly applied methods such as the autoregressive method.
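
    For orientation, the sketch below extracts beat-to-beat IHR from a synthetic pulse wave using simple peak detection with scipy; this is a much cruder baseline than the Synchrosqueezing Transform described above and is shown only to make the IHR quantity concrete.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # Hz, sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)

# Synthetic pulse wave: ~1.2 Hz beat with a slowly varying rate plus noise.
phase = 2 * np.pi * np.cumsum(1.2 + 0.1 * np.sin(2 * np.pi * 0.1 * t)) / fs
ppg = np.maximum(np.sin(phase), 0) ** 3
ppg += 0.05 * np.random.default_rng(3).normal(size=t.size)

# Beat detection, then beat-to-beat (instantaneous) heart rate in beats per minute.
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=0.3)
ibi = np.diff(t[peaks])                      # inter-beat intervals (s)
ihr = 60.0 / ibi
print("mean IHR (bpm):", ihr.mean().round(1))
```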

  5. Technological advances in diagnostic testing for von Willebrand disease: new approaches and challenges.

    PubMed

    Hayward, C P M; Moffat, K A; Graf, L

    2014-06-01

    Diagnostic tests for von Willebrand disease (VWD) are important for the assessment of VWD, which is a commonly encountered bleeding disorder worldwide. Technical innovations have been applied to improve the precision and lower limit of detection of von Willebrand factor (VWF) assays, including the ristocetin cofactor activity assay (VWF:RCo) that uses the antibiotic ristocetin to induce plasma VWF binding to glycoprotein (GP) IbIXV on target platelets. VWF-collagen-binding assays, depending on the type of collagen used, can improve the detection of forms of VWD with high molecular weight VWF multimer loss, although the best method is debatable. A number of innovations have been applied to VWF:RCo (which is commonly performed on an aggregometer), including replacing the target platelets with immobilized GPIbα, and quantification by an enzyme-linked immunosorbent assay (ELISA), immunoturbidimetric, or chemiluminescent end-point. Some common polymorphisms in the VWF gene that do not cause bleeding are associated with falsely low VWF activity by ristocetin-dependent methods. To overcome the need for ristocetin, some new VWF activity assays use gain-of-function GPIbα mutants that bind VWF without the need for ristocetin, with an improved precision and lower limit of detection than measuring VWF:RCo by aggregometry. ELISA of VWF binding to mutated GPIbα shows promise as a method to identify gain-of-function defects from type 2B VWD. The performance characteristics of many new VWF activity assays suggest that the detection of VWD, and monitoring of VWD therapy, by clinical laboratories could be improved through adopting newer generation VWF assays. © 2014 John Wiley & Sons Ltd.

  6. Aeromagnetic maps of the Colorado River region including the Kingman, Needles, Salton Sea, and El Centro 1 degree by 2 degrees quadrangles, California, Arizona, and Nevada

    USGS Publications Warehouse

    Mariano, John; Grauch, V.J.

    1988-01-01

    Aeromagnetic anomalies are produced by variations in the strength and direction of the magnetic field of rocks that include magnetic minerals, commonly magnetite. Patterns of anomalies on aeromagnetic maps can reveal structures - for example, faults which have juxtaposed magnetic rocks against non-magnetic rocks, or areas of alteration where magnetic minerals have been destroyed by hydrothermal activity. Tectonic features of regional extent may not become apparent until a number of aeromagnetic surveys have been compiled and plotted at the same scale. Commonly the compilation involves piecing together data from surveys that were flown at different times with widely disparate flight specifications and data reduction procedures. The data may be compiled into a composite map, where all the pieces are plotted onto one map without regard to the difference in flight elevation and datum, or they may be compiled into a merged map, where all survey data are analytically reduced to a common flight elevation and datum, and then digitally merged at the survey boundaries. The composite map retains the original resolution of all the survey data, but computer methods to enhance regional features crossing the survey boundaries may not be applied. On the other hand, computer methods can be applied to the merged data, but the accuracy of the data may be slightly diminished.

  7. A scoping review of spatial cluster analysis techniques for point-event data.

    PubMed

    Fritz, Charles E; Schuurman, Nadine; Robertson, Colin; Lear, Scott

    2013-05-01

    Spatial cluster analysis is a uniquely interdisciplinary endeavour, and so it is important to communicate and disseminate ideas, innovations, best practices and challenges across practitioners, applied epidemiology researchers and spatial statisticians. In this research we conducted a scoping review to systematically search peer-reviewed journal databases for research that has employed spatial cluster analysis methods on individual-level, address location, or x and y coordinate derived data. To illustrate the thematic issues raised by our results, methods were tested using a dataset where known clusters existed. Point pattern methods, spatial clustering and cluster detection tests, and a locally weighted spatial regression model were most commonly used for individual-level, address location data (n = 29). The spatial scan statistic was the most popular method for address location data (n = 19). Six themes were identified relating to the application of spatial cluster analysis methods and subsequent analyses, which we recommend researchers consider: exploratory analysis, visualization, spatial resolution, aetiology, scale and spatial weights. It is our intention that researchers seeking direction for using spatial cluster analysis methods consider the caveats and strengths of each approach, but also explore the numerous other methods available for this type of analysis. Applied spatial epidemiology researchers and practitioners should give special consideration to applying multiple tests to a dataset. Future research should focus on developing frameworks for selecting appropriate methods and the corresponding spatial weighting schemes.
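
    As a minimal illustration of clustering point-event data, the sketch below applies density-based clustering (DBSCAN) to synthetic x/y coordinates; this is one of many possible approaches and is not the spatial scan statistic highlighted in the review.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
# Synthetic point-event data: two dense "case" clusters plus background noise.
cluster1 = rng.normal(loc=[0, 0], scale=0.2, size=(60, 2))
cluster2 = rng.normal(loc=[3, 3], scale=0.2, size=(40, 2))
background = rng.uniform(low=-2, high=5, size=(100, 2))
points = np.vstack([cluster1, cluster2, background])

# eps is the neighbourhood radius, min_samples the density threshold (assumed values).
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters, "| noise points:", int(np.sum(labels == -1)))
```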

  8. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.

  9. Optimal Partitioning of a Data Set Based on the "p"-Median Model

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2008-01-01

    Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…

  10. Determining the rate of value increase for oaks

    Treesearch

    Paul S. DeBald; Joseph J. Mendel

    1971-01-01

    A method used to develop rate of value increase is described as an aid to management decision-making. Regional rates of value increase and financial maturity diameters for ten species common to the oak-hickory type are outlined, and the economic principles involved are explained to show how they apply to either individual trees or stands.

  11. Classifying and quantifying basins of attraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprott, J. C.; Xiong, Anda

    2015-08-15

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions.
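
    In the spirit of the approach, a minimal Monte Carlo sketch: sample random initial conditions for the Hénon map and estimate the fraction whose orbits stay bounded (i.e., lie in the attractor's basin) versus escape to infinity; the classification and quantification scheme in the paper is more elaborate than this.

```python
import numpy as np

def henon_bounded(x, y, a=1.4, b=0.3, n_iter=1000, bound=100.0):
    """Return True if the Henon orbit starting at (x, y) stays bounded."""
    for _ in range(n_iter):
        x, y = 1 - a * x * x + y, b * x
        if abs(x) > bound or abs(y) > bound:
            return False
    return True

rng = np.random.default_rng(6)
n_samples = 5000
# Sample initial conditions uniformly from a square region around the attractor.
samples = rng.uniform(-2.0, 2.0, size=(n_samples, 2))
in_basin = sum(henon_bounded(x, y) for x, y in samples)
print(f"estimated basin fraction of the sampled region: {in_basin / n_samples:.3f}")
```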

  12. Accuracy of plant specimen disease severity estimates: concepts, history, methods, ramifications and challenges for the future

    USDA-ARS?s Scientific Manuscript database

    Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...

  13. The Potential for Meta-Analysis to Support Decision Analysis in Ecology

    ERIC Educational Resources Information Center

    Mengersen, Kerrie; MacNeil, M. Aaron; Caley, M. Julian

    2015-01-01

    Meta-analysis and decision analysis are underpinned by well-developed methods that are commonly applied to a variety of problems and disciplines. While these two fields have been closely linked in some disciplines such as medicine, comparatively little attention has been paid to the potential benefits of linking them in ecology, despite reasonable…

  14. Efficient heuristics for maximum common substructure search.

    PubMed

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
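
    A short sketch of maximum common substructure search using the open-source RDKit implementation (rdFMCS), rather than the ChemAxon algorithms described in the paper; the molecules and options are chosen arbitrarily for illustration.

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

# Two arbitrary small molecules (aspirin and salicylic acid).
mols = [
    Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"),
    Chem.MolFromSmiles("Oc1ccccc1C(=O)O"),
]

# MCS search with a timeout, as is typical when running time is constrained.
result = rdFMCS.FindMCS(mols, timeout=10, ringMatchesRingOnly=True)
print("MCS SMARTS:", result.smartsString)
print("atoms:", result.numAtoms, "| bonds:", result.numBonds)
```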

  15. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, individual readability formulas produced estimates that varied by up to 6 reading grade levels. The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied by information about word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
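
    The divergence between formulas is easy to reproduce. The sketch below applies the Flesch-Kincaid grade and SMOG formulas to the same text using a deliberately naive syllable counter; real implementations differ in syllable counting, sentence splitting, and sample selection, which is exactly the source of variability the study describes (SMOG was also designed for 30-sentence samples, so short texts stretch its assumptions).

```python
import re

def count_syllables(word):
    """Very rough vowel-group syllable count (a common source of formula disagreement)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    polysyllables = sum(1 for s in syllables if s >= 3)

    fk = 0.39 * len(words) / len(sentences) + 11.8 * sum(syllables) / len(words) - 15.59
    smog = 1.0430 * (polysyllables * 30 / len(sentences)) ** 0.5 + 3.1291
    return {"Flesch-Kincaid grade": round(fk, 1), "SMOG grade": round(smog, 1)}

sample = ("Take one tablet by mouth every morning. "
          "Contact your physician immediately if you experience dizziness, "
          "irregular heartbeat, or unusual bleeding.")
print(readability(sample))
```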

  16. Music and movement share a dynamic structure that supports universal expressions of emotion

    PubMed Central

    Sievers, Beau; Polansky, Larry; Casey, Michael; Wheatley, Thalia

    2013-01-01

    Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures. PMID:23248314

  17. Identifying common donors in DNA mixtures, with applications to database searches.

    PubMed

    Slooten, K

    2017-01-01

    Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. If one of the traces is in fact a single genotype, this likelihood ratio reduces to the usual LR(M, g). We explain how our method is conceptually a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible, in the sense that a very small false positive rate can be combined with a high probability of detecting a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Construction of multi-scale consistent brain networks: methods and applications.

    PubMed

    Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.

  19. When the firm prevents the crash: Avoiding market collapse with partial control.

    PubMed

    Levi, Asaf; Sabuco, Juan; A F Sanjuán, Miguel

    2017-01-01

    Market collapse is one of the most dramatic events in economics. Such a catastrophic event can emerge from the nonlinear interactions between the economic agents at the micro level of the economy. Transient chaos might be a good description of how a collapsing market behaves. In this work, we apply a new control method, the partial control method, with the goal of avoiding this disastrous event. Contrary to common control methods that try to influence the system from the outside, here the market is controlled from the bottom up by one of the most basic components of the market: the firm. This is the first time that the partial control method has been applied to a strictly economic system in which we also introduce external disturbances. We show how the firm is capable of controlling the system and avoiding the collapse by only adjusting the selling price of the product or the quantity of production in accordance with the market circumstances. Additionally, we demonstrate how a firm with a large market share is capable of influencing the demand, achieving price stability across the retail and wholesale markets. Furthermore, we prove that the control applied in both cases is much smaller than the external disturbances.

  20. Observation of twinning in diamond CVD films

    NASA Astrophysics Data System (ADS)

    Marciniak, W.; Fabisiak, K.; Orzeszko, S.; Rozploch, F.

    1992-10-01

    Diamond particles prepared by a dc-glow-discharge-enhanced HF-CVD hybrid method, from a mixture of acetone vapor and hydrogen gas, have been examined by TEM, RHEED and the dark-field method of observation. The results suggest the presence of twinned diamond particles, which can be reconstructed by a sequence of twinning operations. Contrary to the 'stick model' of the lattice, the very common five-fold symmetry of diamond microcrystals may be obtained by applying a number of edge dislocations rather than by continuous deformation of many tetrahedral C-C bonds.

  1. Effect of Heat on Wounded Warriors in Ground Combat Vehicles: Insights from the Army Medical Community, and the Simulation of a Novel Method for Soldier Thermal Control

    DTIC Science & Technology

    2012-08-01

    soldiers via microclimate cooling [13]. Unfortunately, a common method for direct cooling of the soldiers – surface cooling – can cause cutaneous...Intermittent, Regional Microclimate Cooling," Journal of Applied Physiology, vol. 94, pp. 1841-48, 2003. [18] L. A. Stephenson, C. R. Vernieuw, W...Leammukda and M. A. Kolka, "Skin Temperature Feedback Optimizes Microclimate Cooling," Aviation, Space and Environmental Medicine, vol. 78, pp. 377-382

  2. Development and First Results of the Width-Tapered Beam Method for Adhesion Testing of Photovoltaic Material Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Tracy, Jared; Dauskardt, Reinhold

    2016-11-21

    A fracture mechanics based approach for quantifying adhesion at every interface within the PV module laminate is presented. The common requirements of monitoring crack length and specimen compliance are circumvented through development of a width-tapered cantilever beam method. This technique may be applied at both the module and coupon level to yield a similar, quantitative measurement. Details of module and sample preparation are described, and first results on field-exposed modules deployed for over 27 years are presented.

  3. Satellite Formation Control Using Atmospheric Drag

    DTIC Science & Technology

    2007-03-01

    of the formation. The linearized Clohessy-Wiltshire equations of motion are used to describe the motion of the two-satellite formation about an empty... control methods were applied to both the linear and nonlinear forms of the Clohessy-Wiltshire equations, and the performance of each control method was... r₀δθ̈ = −2nδṙ + f_θ (2.16), δz̈ = −n²δz + f_z (2.17). These three equations are commonly known as Hill's equations or the Clohessy-Wiltshire (CW
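
    For reference, a sketch integrating the standard unforced Clohessy-Wiltshire (Hill) equations with scipy, written in Cartesian LVLH coordinates rather than the cylindrical form quoted above; the mean motion and initial conditions are assumed values, not those of the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 0.0011  # rad/s, mean motion of the reference orbit (assumed, roughly LEO)

def cw_rhs(t, s):
    """Unforced Clohessy-Wiltshire equations in Cartesian LVLH coordinates."""
    x, y, z, vx, vy, vz = s
    ax = 3 * n**2 * x + 2 * n * vy   # radial
    ay = -2 * n * vx                 # along-track
    az = -(n**2) * z                 # cross-track
    return [vx, vy, vz, ax, ay, az]

# Deputy satellite starts 100 m behind the chief with a small radial velocity.
s0 = [0.0, -100.0, 0.0, 0.05, 0.0, 0.0]
sol = solve_ivp(cw_rhs, (0.0, 2 * np.pi / n), s0, max_step=10.0)
print("relative position after one orbit (m):", sol.y[:3, -1].round(2))
```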

  4. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  5. A Calibrated Method of Massage Therapy Decreases Systolic Blood Pressure Concomitant With Changes in Heart Rate Variability in Male Rats.

    PubMed

    Spurgin, Kurt A; Kaprelian, Anthony; Gutierrez, Roberto; Jha, Vidyasagar; Wilson, Christopher G; Dobyns, Abigail; Xu, Karen H; Curras-Collazo, Margarita C

    2017-02-01

    The purpose of this study was to develop a method for applying calibrated manual massage pressures by using commonly available, inexpensive sphygmomanometer parts and validate the use of this approach as a quantitative method of applying massage therapy to rodents. Massage pressures were monitored by using a modified neonatal blood pressure (BP) cuff attached to an aneroid gauge. Lightly anesthetized rats were stroked on the ventral abdomen for 5 minutes at pressures of 20 mm Hg and 40 mm Hg. Blood pressure was monitored noninvasively for 20 minutes following massage therapy at 5-minute intervals. Interexaminer reliability was assessed by applying 20 mm Hg and 40 mm Hg pressures to a digital scale in the presence or absence of the pressure gauge. With the use of this method, we observed good interexaminer reliability, with intraclass coefficients of 0.989 versus 0.624 in blinded controls. In Long-Evans rats, systolic BP dropped by an average of 9.86% ± 0.27% following application of 40 mm Hg massage pressure. Similar effects were seen following 20 mm Hg pressure (6.52% ± 1.7%), although latency to effect was greater than at 40 mm Hg. Sprague-Dawley rats behaved similarly to Long-Evans rats. Low-frequency/high-frequency ratio, a widely-used index of autonomic tone in cardiovascular regulation, showed a significant increase within 5 minutes after 40 mm Hg massage pressure was applied. The calibrated massage method was shown to be a reproducible method for applying massage pressures in rodents and lowering BP. Copyright © 2016. Published by Elsevier Inc.

  6. Improving Psychological Measurement: Does It Make a Difference? A Comment on Nesselroade and Molenaar (2016).

    PubMed

    Maydeu-Olivares, Alberto

    2016-01-01

    Nesselroade and Molenaar advocate the use of an idiographic filter approach. This is a fixed-effects approach, which may limit the number of individuals that can be simultaneously modeled, and it is not clear how to model the presence of subpopulations. Most important, Nesselroade and Molenaar's proposal appears to be best suited for modeling long time series on a few variables for a few individuals. Long time series are not common in psychological applications. Can it be applied to the usual longitudinal data we face? These are characterized by short time series (four to five points in time), hundreds of individuals, and dozens of variables. If so, what do we gain? Applied settings most often involve between-individual decisions. I conjecture that their approach will not outperform common, simpler, methods. However, when intraindividual decisions are involved, their approach may have an edge.

  7. Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals

    DOEpatents

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2014-10-14

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
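
    A toy numpy sketch of the spreading idea in the claims: one data symbol is replicated onto evenly spaced subcarriers, each shaped by a common prototype (pulse-shaping) filter and scaled by per-subcarrier gains chosen so the composite power stays below a target level. This only illustrates the concept; it is not the patented synthesis filter bank, and all numeric values are assumptions.

```python
import numpy as np

fs = 10_000.0                      # Hz, sample rate (assumed)
n_sub = 16                         # number of subcarriers
spacing = 100.0                    # Hz, even subcarrier spacing
t = np.arange(0, 0.1, 1 / fs)      # one symbol period

symbol = 1 + 1j                    # single QPSK-like data symbol to be spread

# Common pulse-shaping filter: a raised-cosine (Hann) window over the symbol period.
pulse = 0.5 * (1 - np.cos(2 * np.pi * np.arange(t.size) / t.size))

# Per-subcarrier gains chosen so the total power stays below a target level.
target_power = 1e-3
gains = np.full(n_sub, np.sqrt(target_power / n_sub))

signal = np.zeros(t.size, dtype=complex)
for k in range(n_sub):
    f_k = (k - n_sub / 2) * spacing            # evenly spaced subcarrier offsets
    signal += gains[k] * symbol * pulse * np.exp(2j * np.pi * f_k * t)

print("composite signal power:", np.mean(np.abs(signal) ** 2))
```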

  8. Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals

    DOEpatents

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2014-05-20

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.

  9. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
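
    To make the latent process concrete, the sketch below simulates the two-boundary Wiener diffusion process that underlies such models, producing a choice and a response time per trial; the hierarchical Bayesian estimation itself is normally done with dedicated software and is not reproduced here. All parameter values are arbitrary.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials, dt=0.001, noise=1.0, seed=7):
    """Simulate choices and response times from a simple two-boundary diffusion model."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0                      # start midway between the boundaries
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ndt)                  # add non-decision time
        choices.append(1 if x > 0 else 0)    # upper boundary = e.g. "approach"
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm(drift=1.5, boundary=1.0, ndt=0.3, n_trials=500)
print("mean RT (s):", rts.mean().round(3),
      "| proportion upper-boundary responses:", choices.mean())
```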

  10. Iterative methods for plasma sheath calculations: Application to spherical probe

    NASA Technical Reports Server (NTRS)

    Parker, L. W.; Sullivan, E. C.

    1973-01-01

    The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.

  11. How long does it take to boil an egg? A simple approach to the energy transfer equation

    NASA Astrophysics Data System (ADS)

    Roura, P.; Fort, J.; Saurina, J.

    2000-01-01

    The heating of simple geometric objects immersed in an isothermal bath is analysed qualitatively through Fourier's law. The approximate temperature evolution is compared with the exact solution obtained by solving the transport differential equation, the discrepancies being smaller than 20%. Our method succeeds in giving the solution as a function of the Fourier modulus so that the scale laws hold. It is shown that the time needed to homogenize temperature variations that extend over mean distances x_m is approximately x_m²/α, where α is the thermal diffusivity. This general relationship also applies to atomic diffusion. Within the approach presented there is no need to write down any differential equation. As an example, the analysis is applied to the process of boiling an egg.
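
    A quick numerical illustration of the scaling relation, with assumed values for an egg-sized, water-like object; it gives the order of magnitude of the diffusive time scale rather than a precise cooking time.

```python
# Order-of-magnitude estimate of the homogenization time t ~ x_m**2 / alpha.
alpha = 1.4e-7   # m^2/s, thermal diffusivity of a water-like material (assumed value)
x_m = 0.015      # m, assumed mean distance over which temperature must homogenize

t = x_m**2 / alpha
print(f"diffusive time scale: {t:.0f} s (~{t / 60:.0f} min)")
```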

  12. An assessment of two methods for identifying undocumented levees using remotely sensed data

    USGS Publications Warehouse

    Czuba, Christiana R.; Williams, Byron K.; Westman, Jack; LeClaire, Keith

    2015-01-01

    Many undocumented and commonly unmaintained levees exist in the landscape complicating flood forecasting, risk management, and emergency response. This report describes a pilot study completed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers to assess two methods to identify undocumented levees by using remotely sensed, high-resolution topographic data. For the first method, the U.S. Army Corps of Engineers examined hillshades computed from a digital elevation model that was derived from light detection and ranging (lidar) to visually identify potential levees and then used detailed site visits to assess the validity of the identifications. For the second method, the U.S. Geological Survey applied a wavelet transform to a lidar-derived digital elevation model to identify potential levees. The hillshade method was applied to Delano, Minnesota, and the wavelet-transform method was applied to Delano and Springfield, Minnesota. Both methods were successful in identifying levees but also identified other features that required interpretation to differentiate from levees such as constructed barriers, high banks, and bluffs. Both methods are complementary to each other, and a potential conjunctive method for testing in the future includes (1) use of the wavelet-transform method to rapidly identify slope-break features in high-resolution topographic data, (2) further examination of topographic data using hillshades and aerial photographs to classify features and map potential levees, and (3) a verification check of each identified potential levee with local officials and field visits.
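
    A toy one-dimensional illustration of the wavelet idea: a continuous wavelet transform with a Mexican-hat wavelet applied to a synthetic elevation profile responds strongly at a narrow, levee-like ridge while largely ignoring the regional slope and a broad bank. The actual USGS method operates on full two-dimensional lidar DEMs and is considerably more involved; all values here are invented.

```python
import numpy as np
import pywt

# Synthetic cross-valley elevation profile (m): gentle regional slope, a broad
# natural bank near x = 80 m, and a narrow levee-like ridge near x = 300 m.
x = np.arange(0, 500, 1.0)
profile = 0.002 * x                                   # regional slope
profile = profile + 1.5 / (1 + np.exp(-(x - 80) / 15.0))   # broad high bank
profile = profile + 1.0 * np.exp(-((x - 300) / 6.0) ** 2)  # narrow levee-like ridge

# Continuous wavelet transform with a Mexican-hat wavelet at a few scales.
scales = np.array([4, 8, 16])
coef, _ = pywt.cwt(profile, scales, "mexh")

# Strong small-scale response flags narrow ridge features such as levees;
# the first and last samples are excluded to avoid boundary artifacts.
interior = slice(50, -50)
peak_location = x[interior][np.argmax(np.abs(coef[0][interior]))]
print(f"strongest small-scale wavelet response at x ≈ {peak_location:.0f} m")
```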

  13. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also attracted increasing interest in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
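
    The sketch below illustrates the general idea of pooling an ROC summary over multiple imputations; the crude hot-deck draw and the synthetic data are stand-ins for the parametric and non-parametric imputation models studied in the paper.

```python
# Hedged sketch of pooling an ROC summary (AUC) over multiple imputations.
# The crude "hot-deck" draw below stands in for the imputation models studied
# in the paper; the scores and labels are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)                      # disease status
scores = labels * 1.0 + rng.normal(0, 1.0, size=200)       # biomarker
scores[rng.random(200) < 0.2] = np.nan                     # ~20% missing at random

observed = scores[~np.isnan(scores)]
aucs = []
for m in range(20):                                        # 20 imputed datasets
    imputed = scores.copy()
    n_miss = int(np.isnan(imputed).sum())
    imputed[np.isnan(imputed)] = rng.choice(observed, size=n_miss)  # hot-deck draw
    aucs.append(roc_auc_score(labels, imputed))

# Pool the point estimate by averaging the m AUCs (Rubin-style pooling of the mean).
print(f"pooled AUC over {len(aucs)} imputations: {np.mean(aucs):.3f}")
```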

  14. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    PubMed

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
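
    As a minimal illustration of the quantity being estimated, the sketch below computes the total variation distance between a small chain's t-step distribution and its stationary distribution exactly; the transition matrix is invented, and on large state spaces such as phylogenetic tree space this exact computation is infeasible, which is where the paper's Monte Carlo and GPU approach comes in.

```python
# Minimal sketch of the quantity the paper estimates: the total variation (TV)
# distance between the distribution of a Markov chain after t steps and its
# stationary distribution. Everything here is exact because the chain is tiny;
# the transition matrix is made up for illustration.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])          # assumed row-stochastic transition matrix

# Stationary distribution: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

def tv_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

mu = np.array([1.0, 0.0, 0.0])           # chain started deterministically in state 0
for t in (1, 5, 20, 100):
    dist_t = mu @ np.linalg.matrix_power(P, t)
    print(f"t = {t:3d}: TV to stationary = {tv_distance(dist_t, pi):.4f}")
```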

  15. Extractive-spectrophotometric determination of disopyramide and irbesartan in their pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.

    2007-04-01

    Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied for the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of the Student t-test and the variance ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.
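
    The calibration step described above amounts to fitting a straight line to absorbance versus concentration and reading unknowns off that line; the sketch below shows the step on synthetic data (the standards and absorbances are not taken from the paper).

```python
# Hedged sketch of a linear spectrophotometric calibration: absorbance vs.
# concentration is fitted by least squares and an unknown sample is read back
# off the line. The absorbance values below are synthetic.
import numpy as np

conc = np.array([0.5, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # standards, microg/mL
absorbance = np.array([0.04, 0.16, 0.31, 0.47, 0.63, 0.79, 0.95])

slope, intercept = np.polyfit(conc, absorbance, deg=1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f}*C + {intercept:.4f}, correlation coefficient r = {r:.4f}")

# Read an unknown sample's concentration off the calibration line.
a_sample = 0.52
c_sample = (a_sample - intercept) / slope
print(f"estimated concentration: {c_sample:.2f} microg/mL")
```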

  16. Peptide and protein quantitation by acid-catalyzed 18O-labeling of carboxyl groups.

    PubMed

    Haaf, Erik; Schlosser, Andreas

    2012-01-03

    We have developed a new method that applies acidic catalysis with hydrochloric acid for (18)O-labeling of peptides at their carboxyl groups. With this method, peptides get labeled at their C-terminus, at Asp and Glu residues, and at carboxymethylated cysteine residues. Oxygen atoms at the phosphate groups of phosphopeptides are not exchanged. Our elaborated labeling protocol is easy to perform, fast (5 h and 30 min), and results in 95-97 atom % incorporation of (18)O at carboxyl groups. Undesired side reactions, such as deamidation or peptide hydrolysis, occur only at a very low level under the conditions applied. In addition, data analysis can be performed automatically using common software tools, such as Mascot Distiller. We have demonstrated the capability of this method for the quantitation of peptides as well as phosphopeptides. © 2011 American Chemical Society

  17. Experimental spinal cord trauma: a review of mechanically induced spinal cord injury in rat models.

    PubMed

    Abdullahi, Dauda; Annuar, Azlina Ahmad; Mohamad, Masro; Aziz, Izzuddin; Sanusi, Junedah

    2017-01-01

    It has been shown that animal spinal cord compression (using methods such as clips, balloons, spinal cord strapping, or calibrated forceps) mimics the persistent spinal canal occlusion that is common in human spinal cord injury (SCI). These methods can be used to investigate the effects of compression or to determine the optimal timing of decompression (as the duration of compression can affect the outcome of the pathology) in acute SCI. Compression models involve prolonged cord compression and are distinct from contusion models, which apply only transient force to inflict an acute injury to the spinal cord. While the use of forceps to compress the spinal cord is a common choice because it is inexpensive, it has not been critically assessed against the other methods to determine whether it is the best method to use. To date, there is no available review specifically focused on the current compression methods of inducing SCI in rats; thus, we performed a systematic and comprehensive publication search to identify studies on experimental spinalization in rat models, and this review discusses the advantages and limitations of each method.

  18. Identifying effective connectivity parameters in simulated fMRI: a direct comparison of switching linear dynamic system, stochastic dynamic causal, and multivariate autoregressive models

    PubMed Central

    Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry

    2013-01-01

    The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions are generated from the common equations. These simulated data explicitly contain the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
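
    A first-order multivariate autoregressive (MAR) model is one of the three methods compared above, arguably the simplest; the sketch below simulates such a model from an assumed connectivity matrix and recovers the matrix by least squares, as a hedged illustration rather than the paper's actual pipeline.

```python
# Minimal sketch of a first-order MAR model: x_t = A x_{t-1} + noise. Data are
# simulated from a known connectivity matrix A, and A is recovered by ordinary
# least squares. A, the noise level and the series length are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.0, 0.2],
                   [0.3, 0.4, 0.0],
                   [0.0, 0.2, 0.6]])      # assumed "effective connectivity"
T, n = 2000, 3
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(0, 0.5, size=n)

# Least-squares estimate: regress x_t on x_{t-1}.
X_past, X_now = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_past, X_now, rcond=None)   # solves X_past @ B = X_now
A_hat = B.T                                          # since x_t = B^T x_{t-1}
print(np.round(A_hat, 2))                            # close to A_true
```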

  19. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.
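
    For reference, the conventional sum-of-squares (SOS) combination that the proposed method improves upon can be written in a few lines; the coil images below are random placeholders rather than phased-array data.

```python
# Sketch of the conventional sum-of-squares (SOS) coil combination: per-channel
# complex images are combined voxelwise as sqrt(sum_c |I_c|^2). Placeholder data.
import numpy as np

rng = np.random.default_rng(0)
n_coils, ny, nx = 8, 64, 64
coil_images = (rng.normal(size=(n_coils, ny, nx))
               + 1j * rng.normal(size=(n_coils, ny, nx)))

sos = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
print(sos.shape)          # (64, 64) combined magnitude image
```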

  20. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to solving an evoked potentials estimating problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potentials estimation method, taking full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of evoked potentials and the randomness of a spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of the common component and the unique components; second, making use of their characteristics, the two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method in order to extract the common component of double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.

  1. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.

    PubMed

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms which are unable to reveal all OPSMs, since the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSMs are missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.

  2. Ultrasonic NDE Simulation for Composite Manufacturing Defects

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.

    2016-01-01

    The increased use of composites in aerospace components is expected to continue into the future. The large-scale use of composites in aerospace necessitates the development of composite-appropriate nondestructive evaluation (NDE) methods to quantitatively characterize defects in as-manufactured parts and damage incurred during or after manufacturing. Ultrasonic techniques are one of the most common approaches for defect/damage detection in composite materials. One key technical challenge area included in NASA's Advanced Composites Project is to develop optimized rapid inspection methods for composite materials. Common manufacturing defects in carbon fiber reinforced polymer (CFRP) composites include fiber waviness (in-plane and out-of-plane), porosity, and disbonds, among others. This paper is an overview of ongoing work to develop ultrasonic wavefield based methods for characterizing manufacturing waviness defects. The paper describes the development and implementation of a custom ultrasound simulation tool that is used to model ultrasonic wave interaction with in-plane fiber waviness (also known as marcelling). Wavefield data processing methods are applied to the simulation data to explore possible routes for quantitative defect characterization.

  3. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  4. Application of fuzzy AHP method to IOCG prospectivity mapping: A case study in Taherabad prospecting area, eastern Iran

    NASA Astrophysics Data System (ADS)

    Najafi, Ali; Karimpour, Mohammad Hassan; Ghaderi, Majid

    2014-12-01

    Using the fuzzy analytical hierarchy process (AHP) technique, we propose a method for mineral prospectivity mapping (MPM), which is commonly used for exploration of mineral deposits. The fuzzy AHP is a popular technique which has been applied to multi-criteria decision-making (MCDM) problems. In this paper we used fuzzy AHP and a geospatial information system (GIS) to generate a prospectivity model for Iron Oxide Copper-Gold (IOCG) mineralization on the basis of its conceptual model and geo-evidence layers derived from geological, geochemical, and geophysical data in the Taherabad area, eastern Iran. The fuzzy AHP was used to determine the weights belonging to each criterion. The knowledge of three geoscientists experienced in exploration of IOCG-type mineralization was applied to assign weights to the evidence layers in the fuzzy AHP MPM approach. After assigning normalized weights to all evidential layers, a fuzzy operator was applied to integrate the weighted evidence layers. Finally, to evaluate the ability of the applied approach to delineate reliable target areas, locations of known mineral deposits in the study area were used. The results demonstrate acceptable outcomes for IOCG exploration.
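
    The weighting step in (crisp) AHP reduces a reciprocal pairwise-comparison matrix to criterion weights via its principal eigenvector; the fuzzy AHP used in the paper extends this with fuzzy numbers. The sketch below shows only the crisp step, with invented comparison values and an assumed ordering of the evidence layers.

```python
# Hedged sketch of the crisp AHP weighting step underlying fuzzy AHP: a
# reciprocal pairwise-comparison matrix is reduced to normalized weights via
# its principal eigenvector. The comparison values are invented.
import numpy as np

# rows/cols: geological, geochemical, geophysical evidence layers (assumed order)
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

evals, evecs = np.linalg.eig(C)
principal = np.real(evecs[:, np.argmax(np.real(evals))])  # Perron eigenvector
weights = principal / principal.sum()
print(np.round(weights, 3))     # normalized criterion weights, sum to 1
```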

  5. [Cost of therapy for neurodegenerative diseases. Applying an activity-based costing system].

    PubMed

    Sánchez-Rebull, María-Victoria; Terceño Gómez, Antonio; Travé Bautista, Angeles

    2013-01-01

    To apply the activity based costing (ABC) model to calculate the cost of therapy for neurodegenerative disorders in order to improve hospital management and allocate resources more efficiently. We used the case study method in the Francolí long-term care day center. We applied all phases of an ABC system to quantify the cost of the activities developed in the center. We identified 60 activities; the information was collected in June 2009. The ABC system allowed us to calculate the average cost per patient with respect to the therapies received. The most costly and commonly applied technique was psycho-stimulation therapy. Focusing on this therapy and on others related to the admissions process could lead to significant cost savings. ABC costing is a viable method for costing activities and therapies in long-term day care centers because it can be adapted to their structure and standard practice. This type of costing allows the costs of each activity and therapy, or combination of therapies, to be determined and aids measures to improve management. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.

  6. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).

  7. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746

  8. Program scheme using common source lines in channel stacked NAND flash memory with layer selection by multilevel operation

    NASA Astrophysics Data System (ADS)

    Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook

    2018-02-01

    To obtain high channel boosting potential and reduce program disturbance in channel stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. Through TCAD simulation, it is revealed that the program disturbance characteristics are effectively improved by the proposed scheme.

  9. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  10. Propagation of sound in turbulent media

    NASA Technical Reports Server (NTRS)

    Wenzel, A. R.

    1976-01-01

    Perturbation methods commonly used to study the propagation of acoustic waves in turbulent media are reviewed. Emphasis is on those techniques which are applicable to problems involving long-range propagation in the atmosphere and ocean. Characteristic features of the various methods are illustrated by applying them to particular problems. It is shown that conventional perturbation techniques, such as the Born approximation, yield solutions which contain secular terms, and which therefore have a relatively limited range of validity. In contrast, it is found that solutions obtained with the aid of the Rytov method or the smoothing method do not contain secular terms, and consequently have a much greater range of validity.

  11. Evaluation of methods for managing censored results when calculating the geometric mean.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2018-01-01

    Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censorship, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order substitution and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods have been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substitution of half the limit of reporting is comparable or more accurate than alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
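
    The substitution approach the study evaluates is straightforward to express in code; the sketch below uses synthetic soil concentrations and an assumed limit of reporting.

```python
# Sketch of the substitution method the study finds adequate: censored results
# (below the limit of reporting, LOR) are replaced by LOR/2 before taking the
# geometric mean. The concentration values are synthetic.
import numpy as np

LOR = 5.0
values = np.array([12.0, 7.5, np.nan, 30.0, np.nan, 9.2, 18.0])   # nan = "<LOR"

substituted = np.where(np.isnan(values), LOR / 2.0, values)
geo_mean = np.exp(np.mean(np.log(substituted)))
print(f"geometric mean with LOR/2 substitution: {geo_mean:.2f}")
```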

  12. Exploiting Language Models to Classify Events from Twitter

    PubMed Central

    Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young

    2015-01-01

    Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities by learning language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied based on large text corpora within computational linguistic relations. The relationship of term words in tweets is discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on tweets' features, including common term words and relationships among their distinguishing term words. This makes the similarity explicit and convenient to apply in k-nearest neighbor techniques for classification. We carefully applied experiments on the Edinburgh Twitter Corpus to show that our method achieves competitive results for classifying events. PMID:26451139

  13. Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns

    PubMed Central

    Ribeiro, Haroldo V.; Zunino, Luciano; Lenzi, Ervin K.; Santoro, Perseu A.; Mendes, Renio S.

    2012-01-01

    Complexity measures are essential to understand complex systems and there are numerous definitions to analyze one-dimensional data. However, extensions of these approaches to two or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by applying the ideas of the permutation entropy combined with a relative entropic index. We build up a numerical procedure that can be easily implemented to evaluate the complexity of two or higher-dimensional patterns. We work out this method in different scenarios where numerical experiments and empirical data were taken into account. Specifically, we have applied the method to fractal landscapes generated numerically where we compare our measures with the Hurst exponent; liquid crystal textures where nematic-isotropic-nematic phase transitions were properly identified; 12 characteristic textures of liquid crystals where the different values show that the method can distinguish different phases; and Ising surfaces where our method identified the critical temperature and also proved to be stable. PMID:22916097
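
    A minimal sketch of a two-dimensional permutation entropy in the spirit of this method is given below: ordinal patterns of small sliding windows are counted and their normalized Shannon entropy is returned. The window size and test patterns are illustrative choices, not the paper's settings, and the statistical-complexity half of the causality plane is omitted.

```python
# Minimal sketch of 2D permutation entropy: count ordinal patterns of 2x2
# sliding windows and return their normalized Shannon entropy.
import numpy as np
from collections import Counter
from math import factorial

def permutation_entropy_2d(img: np.ndarray, dx: int = 2, dy: int = 2) -> float:
    patterns = Counter()
    for i in range(img.shape[0] - dy + 1):
        for j in range(img.shape[1] - dx + 1):
            window = img[i:i + dy, j:j + dx].ravel()
            patterns[tuple(np.argsort(window))] += 1     # ordinal pattern of the window
    counts = np.array(list(patterns.values()), dtype=float)
    p = counts / counts.sum()
    H = -np.sum(p * np.log(p))
    return H / np.log(factorial(dx * dy))                # normalize by log(d!)

rng = np.random.default_rng(0)
noise = rng.random((64, 64))                             # fully disordered pattern
stripes = np.tile(np.arange(64.0), (64, 1))              # fully ordered pattern
print(permutation_entropy_2d(noise), permutation_entropy_2d(stripes))  # ~1 vs 0
```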

  14. High-Temperature Thermal Conductivity Measurement Apparatus Based on Guarded Hot Plate Method

    NASA Astrophysics Data System (ADS)

    Turzo-Andras, E.; Magyarlaki, T.

    2017-10-01

    An alternative calibration procedure has been applied using apparatus built in-house, created to optimize thermal conductivity measurements. The new approach compared to those of usual measurement procedures of thermal conductivity by guarded hot plate (GHP) consists of modified design of the apparatus, modified position of the temperature sensors and new conception in the calculation method, applying the temperature at the inlet section of the specimen instead of the temperature difference across the specimen. This alternative technique is suitable for eliminating the effect of thermal contact resistance arising between a rigid specimen and the heated plate, as well as accurate determination of the specimen temperature and of the heat loss at the lateral edge of the specimen. This paper presents an overview of the specific characteristics of the newly developed "high-temperature thermal conductivity measurement apparatus" based on the GHP method, as well as how the major difficulties are handled in the case of this apparatus, as compared to the common GHP method that conforms to current international standards.

  15. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementations of the various combinations of relaxation schemes, discretizations, and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
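
    To make the Jacobi versus Gauss-Seidel comparison concrete, the sketch below applies both relaxations to a 1D Poisson model problem, where Gauss-Seidel asymptotically converges about twice as fast per sweep; this is an illustrative toy, not the paper's discontinuous Galerkin / P-multigrid implementation.

```python
# Toy comparison of Jacobi and Gauss-Seidel relaxation on -u'' = f, u(0)=u(1)=0,
# with the standard 3-point stencil. Gauss-Seidel uses freshly updated values
# within a sweep and reduces the residual faster than Jacobi.
import numpy as np

n = 20
f = np.ones(n)
h = 1.0 / (n + 1)

def jacobi_sweep(u):
    u_new = u.copy()
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u_new[i] = 0.5 * (left + right + h * h * f[i])   # uses only old values
    return u_new

def gauss_seidel_sweep(u):
    u = u.copy()
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0                # already updated this sweep
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def residual_norm(u):
    r = np.empty(n)
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2 * u[i] - left - right) / (h * h)
    return np.linalg.norm(r)

uj, ug = np.zeros(n), np.zeros(n)
for _ in range(200):
    uj, ug = jacobi_sweep(uj), gauss_seidel_sweep(ug)
# Gauss-Seidel residual is roughly an order of magnitude smaller here.
print(residual_norm(uj), residual_norm(ug))
```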

  16. Sampling-based ensemble segmentation against inter-operator variability

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew

    2011-03-01

    Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).
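
    Of the three fusion rules, majority voting is the simplest; the sketch below fuses a stack of perturbed binary masks voxelwise, using random placeholder segmentations.

```python
# Minimal sketch of majority-vote fusion of an ensemble of binary segmentations.
# The stack of masks is random placeholder data.
import numpy as np

rng = np.random.default_rng(0)
masks = rng.random((9, 128, 128)) > 0.5        # 9 binary segmentations of one slice

votes = masks.sum(axis=0)
consensus = votes > masks.shape[0] / 2         # foreground where most masks agree
print(consensus.shape, consensus.dtype)
```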

  17. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.

  18. Fast ADC based multichannel acquisition system for the GEM detector

    NASA Astrophysics Data System (ADS)

    Kasprowicz, G.; Czarski, T.; Chernyshova, M.; Dominik, W.; Jakubowska, K.; Karpinski, L.; Kierzkowski, K.; Pozniak, K.; Rzadkiewicz, J.; Scholz, M.; Zabolotny, W.

    2012-05-01

    A novel approach to the Gas Electron Multiplier (GEM) detector readout is presented. Unlike commonly used methods based on discriminators and analogue FIFOs, the method developed uses simultaneously sampling high-speed ADCs and advanced FPGA-based processing logic to estimate the energy of every single photon. This method is applied to every GEM strip signal. It is especially useful in the case of crystal-based spectrometers for soft X-rays, where higher-order reflections need to be identified and rejected. For the purpose of the detector readout, a novel concept for the measurement platform was developed.

  19. [Studies on HPLC fingerprint chromatogram of Folium Fici Microcarpa].

    PubMed

    Fang, Zhi-Jian; Dai, Zhen; Li, Shu-Yuan

    2008-10-01

    To establish a sensitive and specific method for quality control of Folium Fici Microcarpa, HPLC was applied to study the fingerprint chromatogram of Folium Fici Microcarpa. Isovitexin was used as the reference substance to evaluate the chromatograms of 10 samples from different regions and 12 samples collected in different months. The results revealed that all the chromatographic peaks were separated efficiently. Seventeen common peaks were shown in the fingerprint chromatogram. The fingerprint chromatogram method, being characteristic and specific, can be used to assess the quality and evaluate the different origins and collection periods of Folium Fici Microcarpa.

  20. The absolute magnitudes of RR Lyrae stars. II - DX Delphini

    NASA Astrophysics Data System (ADS)

    Skillen, I.; Fernley, J. A.; Jameson, R. F.; Lynas-Gray, A. E.; Longmore, A. J.

    1989-11-01

    UV, IR and visual photometry of the short-period RR Lyrae star DX Del is presented and treated by means of the Blackwell and Shallis (1977) IR Flux Method-based formulation of the Baade-Wesselink method. Upon correcting to common reddening, extinction, and radial-velocity conversion factors, as well as applying the Baade-Wesselink analysis of Burki and Meylan (1986), it proved impossible to reproduce their results. It is suggested that the present methods are inherently more stable than those of Burki and Meylan, given their reliance on optical colors and magnitudes to derive effective temperatures and radii.

  1. A comparative review of optical surface contamination assessment techniques

    NASA Technical Reports Server (NTRS)

    Heaney, James B.

    1987-01-01

    This paper will review the relative sensitivities and practicalities of the common surface analytical methods that are used to detect and identify unwelcome adsorbants on optical surfaces. The compared methods include visual inspection, simple reflectometry and transmissiometry, ellipsometry, infrared absorption and attenuated total reflectance spectroscopy (ATR), Auger electron spectroscopy (AES), scanning electron microscopy (SEM), secondary ion mass spectrometry (SIMS), and mass accretion determined by quartz crystal microbalance (QCM). The discussion is biased toward those methods that apply optical thin film analytical techniques to spacecraft optical contamination problems. Examples are cited from both ground based and in-orbit experiments.

  2. A carbon tetrachloride-free synthesis of N-phenyltrifluoroacetimidoyl chloride.

    PubMed

    Smith, Dylan G M; Williams, Spencer J

    2017-10-10

    N-Phenyltrifluoroacetimidoyl chloride (PTFAI-Cl) is a reagent widely used for the preparation of glycosyl N-phenyltrifluoroacetimidates. However, the most commonly applied method requires carbon tetrachloride, a hepatotoxic reagent that has been phased out under the Montreal Protocol. We report a new synthesis of N-phenyltrifluoroacetimidoyl chloride (PTFAI-Cl) using dichlorotriphenylphosphane and triethylamine. Copyright © 2017. Published by Elsevier Ltd.

  3. Confidence intervals for predicting lumber strength properties based on ratios of percentiles from two Weibull populations.

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    2003-01-01

    Ratios of strength properties of lumber are commonly used to calculate property values for standards. Although originally proposed in terms of means, ratios are being applied without regard to position in the distribution. It is now known that lumber strength properties are generally not normally distributed. Therefore, nonparametric methods are often used to derive...

  4. Improving the Quality of School Facilities through Building Performance Assessment: Educational Reform and School Building Quality in Sao Paulo, Brazil

    ERIC Educational Resources Information Center

    Ornstein, Sheila Walbe; Moreira, Nanci Saraiva; Ono, Rosaria; Limongi Franca, Ana J. G.; Nogueira, Roselene A. M. F.

    2009-01-01

    Purpose: The paper describes the purpose of and strategies for conducting post-occupancy evaluations (POEs) as a method for assessing school building performance. Set within the larger context of global efforts to develop and apply common indicators of school building quality, the authors describe research conducted within the newest generation of…

  5. Rodent repellent studies. IV. Preparation and properties of trinitrobenzene-aryl amine complexes

    USGS Publications Warehouse

    DeWitt, J.B.; Bellack, E.; Welch, J.F.

    1953-01-01

    Data are presented on methods of preparation, chemical and physical characteristics, toxicity, and repellency to rodents of complexes of symmetrical trinitrobenzene with various aromatic amines. When applied in suitable carriers or incorporated in plastic films, members of this series of materials were shown to offer significant increases in the time required by wild rodents to damage common packaging materials.

  6. Digital fringe projection for hand surface coordinate variation analysis caused by osteoarthritis

    NASA Astrophysics Data System (ADS)

    Nor Haimi, Wan Mokhdzani Wan; Hau Tan, Cheek; Retnasamy, Vithyacharan; Vairavan, Rajendaran; Sauli, Zaliman; Roshidah Yusof, Nor; Hambali, Nor Azura Malini Ahmad; Aziz, Muhammad Hafiz Ab; Bakhit, Ahmad Syahir Ahmad

    2017-11-01

    Hand osteoarthritis is one of the most common forms of arthritis, impacting millions of people worldwide. The disabling problem occurs when the protective cartilage at the boundaries of bones wears off over time. Currently, hand osteoarthritis is identified with specialized instruments, namely X-ray scanning and MRI, but these have limitations such as radiation exposure and can be quite costly. In this work, an optical metrology system based on digital fringe projection, comprising an LCD projector, a CCD camera and a personal computer, has been developed to detect abnormal growth or deformation on the joints of the hand, which are common symptoms of osteoarthritis. The main concept of this optical metrology system is to apply structured light as the imaging source for surface change detection. The imaging source utilizes fringe patterns generated by C++ programming and shifted by 3 phase shifts based on the 3-step 2-shift method. Phase wrapping technique and analysis were applied in order to detect the deformation of live subjects. The result demonstrates a successful method of hand deformation detection based on the pixel tracking differences between a normal and a deformed state.
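
    The phase-recovery idea can be illustrated with the standard three-step phase-shifting formula (shifts of 120 degrees); the paper's 3-step 2-shift variant differs in detail, and the fringe images below are synthesized rather than captured with a CCD camera.

```python
# Hedged sketch of standard three-step phase-shifting fringe analysis: three
# patterns with phase shifts of -120, 0, +120 degrees yield the wrapped phase via
# atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3). Fringes here are synthesized.
import numpy as np

ny, nx = 256, 256
x = np.linspace(0, 8 * np.pi, nx)
true_phase = np.tile(x, (ny, 1))                         # assumed object phase
A, B = 0.5, 0.4                                          # background and modulation
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
I1, I2, I3 = (A + B * np.cos(true_phase + d) for d in shifts)

# Wrapped phase from the three shifted fringe patterns, then simple unwrapping.
wrapped = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
unwrapped = np.unwrap(wrapped, axis=1)                   # row-wise unwrapping
print(np.abs(unwrapped[0] - true_phase[0]).max())        # should be near zero
```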

  7. Extractive colorimetric method for the determination of dothiepin hydrochloride and risperidone in pure and in dosage forms.

    PubMed

    Hassan, Wafaa El-Sayed

    2008-08-01

    Three rapid, simple, reproducible and sensitive extractive colorimetric methods (A--C) for assaying dothiepin hydrochloride (I) and risperidone (II) in bulk samples and in dosage forms were investigated. Methods A and B are based on the formation of ion-pair complexes with methyl orange (A) and orange G (B), whereas method C depends on ternary complex formation between cobalt thiocyanate and the studied drug I or II. The optimum reaction conditions were investigated and it was observed that the calibration curves resulting from the absorbance-concentration relations of the extracted complexes were linear over the concentration ranges 0.1--12 microg ml(-1) for method A, 0.5--11 microg ml(-1) for method B, and 3.2--80 microg ml(-1) for method C, with a relative standard deviation (RSD) of 1.17 and 1.28 for drugs I and II, respectively. The molar absorptivity, Sandell sensitivity, Ringbom optimum concentration ranges, and detection and quantification limits for all complexes were calculated and evaluated at maximum wavelengths of 423, 498, and 625 nm, using methods A, B, and C, respectively. The interference from excipients commonly present in dosage forms and from common degradation products was studied. The proposed methods are highly specific for the determination of drugs I and II in their dosage forms, applying the standard additions technique without any interference from common excipients. The proposed methods have been compared statistically to the reference methods and found to be simple, accurate (t-test) and reproducible (F-value).

  8. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  9. Virtual screening of cocrystal formers for CL-20

    NASA Astrophysics Data System (ADS)

    Zhou, Jun-Hong; Chen, Min-Bo; Chen, Wei-Ming; Shi, Liang-Wei; Zhang, Chao-Yang; Li, Hong-Zhen

    2014-08-01

    According to the structural characteristics of 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20) and the kinetic mechanism of cocrystal formation, a method for virtually screening CL-20 cocrystal formers by the criterion of the strongest intermolecular site pairing energy (ISPE) was proposed. In this method the strongest ISPE is thought to determine the first step of cocrystal formation. The prediction results for four sets of common drug-molecule cocrystals obtained by this method were compared with those of the total ISPE method from the reference (Musumeci et al., 2011) and with the experimental results. The method was then applied to virtually screen CL-20 cocrystal formers, and the prediction results were compared with the experimental results.

  10. [Applications of habitat equivalency analysis in ecological damage assessment of oil spill incident].

    PubMed

    Yang, Yin; Han, Da-xiong; Wang, Hai-yan

    2011-08-01

    Habitat equivalency analysis (HEA) is one of the methods commonly used by the U.S. National Oceanic and Atmospheric Administration in natural resources damage assessment, but it is rarely applied in China. Based on the theory of HEA and the assessment practices of domestic oil spill incidents, a modification to the HEA was made in this paper and applied to calculate the habitat value in oil spill incidents. Using the data collected from an oil spill incident in China, the modified HEA was applied in a case study to scale the compensatory restoration. By introducing an ecological service equivalent factor to convert between various habitats, it became possible to value the injured habitats in the ecological damage assessment of the oil spill incident.

  11. FDTD-based Transcranial Magnetic Stimulation model applied to specific neurodegenerative disorders.

    PubMed

    Fanjul-Vélez, Félix; Salas-García, Irene; Ortega-Quijano, Noé; Arce-Diego, José Luis

    2015-01-01

    Non-invasive treatment of neurodegenerative diseases is particularly challenging in Western countries, where the population age is increasing. In this work, magnetic propagation in the human head is modelled by the Finite-Difference Time-Domain (FDTD) method, taking into account specific characteristics of Transcranial Magnetic Stimulation (TMS) in neurodegenerative diseases. It uses a realistic high-resolution three-dimensional human head mesh. The numerical method is applied to the analysis of the magnetic radiation distribution in the brain using two realistic magnetic source models: a circular coil and a figure-8 coil commonly employed in TMS. The complete model was applied to the study of magnetic stimulation in Alzheimer and Parkinson Diseases (AD, PD). The results show the electric field distribution when magnetic stimulation is supplied to those brain areas of specific interest for each particular disease. Thereby the current approach has high potential for establishing the currently underdeveloped TMS dosimetry in its emerging application to AD and PD. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Effects of common seagrass restoration methods on ecosystem structure in subtropical seagrass meadows.

    PubMed

    Bourque, Amanda S; Fourqurean, James W

    2014-06-01

    Seagrass meadows near population centers are subject to frequent disturbance from vessel groundings. Common seagrass restoration methods include filling excavations and applying fertilizer to encourage seagrass recruitment. We sampled macrophytes, soil structure, and macroinvertebrate infauna at unrestored and recently restored vessel grounding disturbances to evaluate the effects of these restoration methods on seagrass ecosystem structure. After a year of observations comparing filled sites to both undisturbed reference and unrestored disturbed sites, filled sites had low organic matter content, nutrient pools, and primary producer abundance. Adding a nutrient source increased porewater nutrient pools at disturbed sites and in undisturbed meadows, but not at filled sites. Environmental predictors of infaunal community structure across treatments included soil texture and nutrient pools. At the one year time scale, the restoration methods studied did not result in convergence between restored and unrestored sites. Particularly in filled sites, soil conditions may combine to constrain rapid development of the seagrass community and associated infauna. Our study is important for understanding early recovery trajectories following restoration using these methods. Published by Elsevier Ltd.

  13. Laser tracker orientation in confined space using on-board targets

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui

    2016-08-01

    This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.

  14. Mapping the Regional Influence of Genetics on Brain Structure Variability - A Tensor-Based Morphometry Study

    PubMed Central

    Brun, Caroline; Leporé, Natasha; Pennec, Xavier; Lee, Agatha D.; Barysheva, Marina; Madsen, Sarah K.; Avedissian, Christina; Chou, Yi-Yu; de Zubicaray, Greig I.; McMahon, Katie; Wright, Margaret; Toga, Arthur W.; Thompson, Paul M.

    2010-01-01

    Genetic and environmental factors influence brain structure and function profoundly. The search for heritable anatomical features and their influencing genes would be accelerated with detailed 3D maps showing the degree to which brain morphometry is genetically determined. As part of an MRI study that will scan 1150 twins, we applied Tensor-Based Morphometry to compute morphometric differences in 23 pairs of identical twins and 23 pairs of same-sex fraternal twins (mean age: 23.8 ± 1.8 SD years). All 92 twins’ 3D brain MRI scans were nonlinearly registered to a common space using a Riemannian fluid-based warping approach to compute volumetric differences across subjects. A multi-template method was used to improve volume quantification. Vector fields driving each subject’s anatomy onto the common template were analyzed to create maps of local volumetric excesses and deficits relative to the standard template. Using a new structural equation modeling method, we computed the voxelwise proportion of variance in volumes attributable to additive (A) or dominant (D) genetic factors versus shared environmental (C) or unique environmental factors (E). The method was also applied to various anatomical regions of interest (ROIs). As hypothesized, the overall volumes of the brain, basal ganglia, thalamus, and each lobe were under strong genetic control; local white matter volumes were mostly controlled by common environment. After adjusting for individual differences in overall brain scale, genetic influences were still relatively high in the corpus callosum and in early-maturing brain regions such as the occipital lobes, while environmental influences were greater in frontal brain regions, which have a more protracted maturational time-course. PMID:19446645

  15. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of virtual subsystem can be used in structures to reconstruct the applied force distribution. Some virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is only demonstrated on planar structures. However, the final example shows how the method can be applied to a complex-shape planar structure with point-welded stiffeners even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution only exhibits the positions of the external applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution permits localization of the forces due to the coupling between the structure and the stiffeners through the welded points as well as of those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method, or the 2D spatial Fourier transform. Even if this approach has a lot in common with the latter methods, it has some interesting particularities, like its low sensitivity to measurement noise.

  16. Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis.

    PubMed

    Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani

    2014-02-15

    Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, due to its outstanding advantages. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. For processing the fMRI data elicited by 512 s of modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The extracted spatial maps (by the new ICA approach) common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices. Meanwhile, they were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to have the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for the individual ICA. Pre-processing the fMRI data using a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA with fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
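
    The band-pass pre-processing and ICA decomposition steps can be sketched as follows; synthetic data stand in for fMRI, a Butterworth filter replaces the paper's FFT-based filter, and the cut-off band and component count are arbitrary assumptions.

```python
# Hedged sketch of band-pass filtering voxel time courses followed by spatial ICA.
# Synthetic data replace real fMRI; filter band, order and component count are
# illustrative choices, not the paper's settings.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
TR = 2.0                                   # assumed repetition time, seconds
n_t, n_vox = 256, 500
data = rng.normal(size=(n_t, n_vox))       # time x voxels "fMRI" matrix

# Band-pass 0.01-0.1 Hz (a typical BOLD range; the paper's exact band may differ).
b, a = butter(4, [0.01, 0.1], btype="bandpass", fs=1.0 / TR)
filtered = filtfilt(b, a, data, axis=0)

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
time_courses = ica.fit_transform(filtered)     # temporal mixing (n_t x 20)
spatial_maps = ica.components_                 # spatial components (20 x n_vox)
print(time_courses.shape, spatial_maps.shape)
```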

  17. Neonatal Atlas Construction Using Sparse Representation

    PubMed Central

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, the anatomical feature constraints on group structure of representations and also the overlapping of neighboring patches are imposed to ensure the anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance of our atlas when applied to spatially normalizing three different neonatal datasets, compared with other start-of-the-art neonatal brain atlases. PMID:24638883

  18. eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration

    PubMed Central

    Wu, Guorong; Peng, Xuewei; Ying, Shihui; Wang, Qian; Yap, Pew-Thian; Shen, Dan; Shen, Dinggang

    2016-01-01

    Effective and efficient spatial normalization of a large population of brain images is critical for many clinical and research studies, but it is technically very challenging. A commonly used approach is to choose a certain image as the template and then align all other images in the population to this template by applying pairwise registration. To avoid the potential bias induced by the inappropriate template selection, groupwise registration methods have been proposed to simultaneously register all images to a latent common space. However, current groupwise registration methods do not make full use of image distribution information for more accurate registration. In this paper, we present a novel groupwise registration method that harnesses the image distribution information by capturing the image distribution manifold using a hierarchical graph with its nodes representing the individual images. More specifically, a low-level graph describes the image distribution in each subgroup, and a high-level graph encodes the relationship between representative images of subgroups. Given the graph representation, we can register all images to the common space by dynamically shrinking the graph on the image manifold. The topology of the entire image distribution is always maintained during graph shrinkage. Evaluations on two datasets, one for 80 elderly individuals and one for 285 infants, indicate that our method can yield promising results. PMID:26800361

  19. Alternative Test Methods for Electronic Parts

    NASA Technical Reports Server (NTRS)

    Plante, Jeannette

    2004-01-01

    It is common practice within NASA to test electronic parts at the manufacturing lot level to demonstrate, statistically, that parts from the lot tested will not fail in service under generic application conditions. The test methods and the generic application conditions used have been developed over the years through cooperation between NASA, DoD, and industry in order to establish a common set of standard practices. These common practices, found in MIL-STD-883, MIL-STD-750, military part specifications, EEE-INST-002, and other guidelines, are preferred because they are considered to be effective and repeatable and their results are usually straightforward to interpret. These practices are sometimes unavailable to NASA projects because of special application conditions that must be addressed, such as schedule constraints, cost constraints, logistical constraints, or advances in technology that make the historical standards an inappropriate choice for establishing part performance and reliability. Alternate methods have begun to emerge and to be used by NASA programs to test parts individually or as part of a system, especially when standard lot tests cannot be applied. Four alternate screening methods will be discussed in this paper: highly accelerated life test (HALT), forward voltage drop tests for evaluating wire-bond integrity, burn-in options during or after highly accelerated stress test (HAST), and board-level qualification.

  20. Fast Low-Rank Shared Dictionary Learning for Image Classification.

    PubMed

    Tiep Huu Vu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.
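
    The low-rank constraint on the shared dictionary is commonly enforced through singular-value soft-thresholding, the proximal operator of the nuclear norm. The sketch below shows only that step, not the paper's full optimization; the dictionary size and the threshold value are arbitrary assumptions.

    ```python
    # Minimal sketch: singular-value soft-thresholding, the nuclear-norm proximal
    # step typically used to push a shared dictionary toward low rank.
    # Illustrative only; this is not the paper's complete algorithm.
    import numpy as np

    def svt(D, tau):
        """Shrink the singular values of D by tau (nuclear-norm proximal step)."""
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)
        return U @ np.diag(s_shrunk) @ Vt

    rng = np.random.default_rng(0)
    D_shared = rng.standard_normal((64, 40))     # hypothetical shared dictionary
    D_low_rank = svt(D_shared, tau=5.0)
    print(np.linalg.matrix_rank(D_low_rank))     # rank is reduced by the shrinkage
    ```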

  1. Fast Low-Rank Shared Dictionary Learning for Image Classification

    NASA Astrophysics Data System (ADS)

    Vu, Tiep Huu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e. claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Further, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image datasets establish the advantages of our method over state-of-the-art dictionary learning methods.

  2. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to the estimator of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
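
    For context, the doubly robust idea can be sketched with the classical AIPW estimator of a response mean under missingness at random. This illustrates the estimator the paper builds on, not the proposed empirical-likelihood estimator; the simulated data, propensity model and outcome model below are assumptions.

    ```python
    # Minimal sketch of the augmented inverse probability weighting (AIPW)
    # estimator of a response mean under missingness at random.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.standard_normal((n, 1))
    y = 2.0 + 1.5 * x[:, 0] + rng.standard_normal(n)
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0])))      # missingness depends on x only
    r = rng.random(n) < p_obs                            # r = True if y is observed

    # Propensity score model and outcome regression model.
    pi_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
    m_hat = LinearRegression().fit(x[r], y[r]).predict(x)

    # AIPW estimate of E[Y]: outcome model plus inverse-probability-weighted residual.
    aipw = np.mean(m_hat + r * (y - m_hat) / pi_hat)
    print(round(aipw, 3))    # close to the true mean of Y (2.0 in this simulation)
    ```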

  3. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
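
    A minimal sketch of SPA itself is shown below: repeatedly pick the column with the largest residual norm and project it out. The synthetic factor matrices and the choice of four sources are assumptions; the MRI-specific pre-processing and the NMF variants used in the study are not reproduced.

    ```python
    # Minimal sketch of the successive projection algorithm (SPA), used here only
    # to pick columns that can serve as an NMF initialization.
    import numpy as np

    def spa(X, r):
        """Return indices of r columns of X selected by successive projections."""
        R = X.astype(float).copy()
        indices = []
        for _ in range(r):
            j = int(np.argmax(np.sum(R ** 2, axis=0)))   # column with largest residual norm
            indices.append(j)
            u = R[:, j] / np.linalg.norm(R[:, j])
            R = R - np.outer(u, u @ R)                   # project the chosen direction out
        return indices

    rng = np.random.default_rng(0)
    W_true = rng.random((100, 4))
    H_true = rng.random((4, 60))
    X = W_true @ H_true                                  # toy near-separable data
    cols = spa(X, 4)
    W0 = X[:, cols]                                      # endmember-like initialization
    print(cols)
    ```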

  4. Using Grounded Theory Method to Capture and Analyze Health Care Experiences

    PubMed Central

    Foley, Geraldine; Timonen, Virpi

    2015-01-01

    Objective: Grounded theory (GT) is an established qualitative research method, but few papers have encapsulated the benefits, limits, and basic tenets of doing GT research on user and provider experiences of health care services. GT can be used to guide the entire study method, or it can be applied at the data analysis stage only. Methods: We summarize key components of GT and common GT procedures used by qualitative researchers in health care research. We draw on our experience of conducting a GT study on amyotrophic lateral sclerosis patients’ experiences of health care services. Findings: We discuss why some approaches in GT research may work better than others, particularly when the focus of study is hard-to-reach population groups. We highlight the flexibility of procedures in GT to build theory about how people engage with health care services. Conclusion: GT enables researchers to capture and understand health care experiences. GT methods are particularly valuable when the topic of interest has not previously been studied. GT can be applied to bring structure and rigor to the analysis of qualitative data. PMID:25523315

  5. Using spatial capture–recapture to elucidate population processes and space-use in herpetological studies

    USGS Publications Warehouse

    Muñoz, David J.; Miller, David A.W.; Sutherland, Chris; Grant, Evan H. Campbell

    2016-01-01

    The cryptic behavior and ecology of herpetofauna make estimating the impacts of environmental change on demography difficult; yet, the ability to measure demographic relationships is essential for elucidating mechanisms leading to the population declines reported for herpetofauna worldwide. Recently developed spatial capture–recapture (SCR) methods are well suited to standard herpetofauna monitoring approaches. Individually identifying animals and their locations allows accurate estimates of population densities and survival. Spatial capture–recapture methods also allow estimation of parameters describing space-use and movement, which generally are expensive or difficult to obtain using other methods. In this paper, we discuss the basic components of SCR models, the available software for conducting analyses, and the experimental designs based on common herpetological survey methods. We then apply SCR models to Red-backed Salamander (Plethodon cinereus), to determine differences in density, survival, dispersal, and space-use between adult male and female salamanders. By highlighting the capabilities of SCR, and its advantages compared to traditional methods, we hope to give herpetologists the resource they need to apply SCR in their own systems.

  6. Covariate Selection for Multilevel Models with Missing Data

    PubMed Central

    Marino, Miguel; Buxton, Orfeu M.; Li, Yi

    2017-01-01

    Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457

  7. Rapid and sensitive analytical method for monitoring of 12 organotin compounds in natural waters.

    PubMed

    Vahčič, Mitja; Milačič, Radmila; Sčančar, Janez

    2011-03-01

    A rapid analytical method for the simultaneous determination of 12 different organotin compounds (OTC): methyl-, butyl-, phenyl- and octyl-tins in natural water samples was developed. It comprises in situ derivatisation (using NaBEt4) of OTC in a saline or fresh water sample matrix adjusted to pH 6 with Tris-citrate buffer, extraction of the ethylated OTC into hexane, separation of the OTC in the organic phase on a 15 m GC column, and subsequent quantitative determination of the separated OTC by ICP-MS. To optimise the pH of ethylation, phosphate, carbonate and Tris-citrate buffers were investigated as alternatives to the commonly applied sodium acetate-acetic acid buffer. The ethylation yields in Tris-citrate buffer were found to be better for TBT, MOcT and DOcT in comparison to the commonly used acetate buffer. Iso-octane and hexane were examined as the organic phase for extraction of the ethylated OTC. The advantage of hexane was that it enabled quantitative determination of TMeT. A 15 m GC column was used to separate the studied OTC under the optimised separation conditions, and its performance was compared to that of a 30 m column. The analytical method developed enables sensitive simultaneous determination of 12 different OTC and appreciably shortens the analysis time for larger series of water samples. LODs obtained for the newly developed method ranged from 0.05-0.06 ng Sn L⁻¹ for methyl-, 0.11-0.45 ng Sn L⁻¹ for butyl-, 0.11-0.16 ng Sn L⁻¹ for phenyl-, and 0.07-0.10 ng Sn L⁻¹ for octyl-tins. By applying the developed analytical method, marine water samples from the Northern Adriatic Sea containing mainly butyl- and methyl-tin species were analysed to confirm the proposed method's applicability.

  8. Refraction traveltime tomography based on damped wave equation for irregular topographic model

    NASA Astrophysics Data System (ADS)

    Park, Yunhui; Pyun, Sukjoon

    2018-03-01

    Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can be easily distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as the land seismic data are generally obtained in irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. This technique uses unstructured meshes to express an irregular topography, and traveltimes calculated from the frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix were adopted for preconditioning, and the principle of reciprocity was introduced to efficiently calculate the Fréchet derivative. We also included regularization to resolve the ill-posed inverse problem, and used the nonlinear conjugate gradient method to solve the inverse problem. As the damped wavefields were used, there were no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem could be circumvented because this method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method was both robust to an initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method was shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity. By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.

  9. High-resolution chromatography/time-of-flight MSE with in silico data mining is an information-rich approach to reactive metabolite screening.

    PubMed

    Barbara, Joanna E; Castro-Perez, Jose M

    2011-10-30

    Electrophilic reactive metabolite screening by liquid chromatography/mass spectrometry (LC/MS) is commonly performed during drug discovery and early-stage drug development. Accurate mass spectrometry has excellent utility in this application, but sophisticated data processing strategies are essential to extract useful information. Herein, a unified approach to glutathione (GSH) trapped reactive metabolite screening with high-resolution LC/TOF MS(E) analysis and drug-conjugate-specific in silico data processing was applied to rapid analysis of test compounds without the need for stable- or radio-isotope-labeled trapping agents. Accurate mass defect filtering (MDF) with a C-heteroatom dealkylation algorithm dynamic with mass range was compared to linear MDF and shown to minimize false positive results. MS(E) data-filtering, time-alignment and data mining post-acquisition enabled detection of 53 GSH conjugates overall formed from 5 drugs. Automated comparison of sample and control data in conjunction with the mass defect filter enabled detection of several conjugates that were not evident with mass defect filtering alone. High- and low-energy MS(E) data were time-aligned to generate in silico product ion spectra which were successfully applied to structural elucidation of detected GSH conjugates. Pseudo neutral loss and precursor ion chromatograms derived post-acquisition demonstrated 50.9% potential coverage, at best, of the detected conjugates by any individual precursor or neutral loss scan type. In contrast with commonly applied neutral loss and precursor-based techniques, the unified method has the advantage of applicability across different classes of GSH conjugates. The unified method was also successfully applied to cyanide trapping analysis and has potential for application to alternate trapping agents. Copyright © 2011 John Wiley & Sons, Ltd.
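
    A minimal sketch of the mass defect filtering idea is given below: candidate ions are retained only if their mass defect falls within a window around the parent drug's mass defect. The parent mass, candidate masses and the 40 mDa window are illustrative assumptions; the dynamic, dealkylation-aware filter described above is not reproduced.

    ```python
    # Minimal sketch of a mass defect filter (MDF): keep candidate ions whose mass
    # defect lies within a fixed window around the parent drug's mass defect.
    import numpy as np

    def mass_defect(m):
        return m - np.round(m)

    parent_mz = 308.1362                         # hypothetical drug-related ion
    candidates = np.array([613.2071, 613.5513, 480.1702, 455.9012])

    window = 0.040                               # +/- 40 mDa around the parent defect
    keep = np.abs(mass_defect(candidates) - mass_defect(parent_mz)) <= window
    print(candidates[keep])                      # candidates passing the filter
    ```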

  10. Automatic peak selection by a Benjamini-Hochberg-based algorithm.

    PubMed

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
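
    The B-H selection step itself is simple to sketch: sort the candidate peaks' p-values and keep every peak up to the largest rank k with p_(k) <= (k/m)q. The p-values and the q level below are placeholders.

    ```python
    # Minimal sketch of Benjamini-Hochberg selection applied to candidate peaks.
    import numpy as np

    def bh_select(pvalues, q=0.05):
        p = np.asarray(pvalues, dtype=float)
        order = np.argsort(p)
        m = len(p)
        thresholds = (np.arange(1, m + 1) / m) * q
        passed = np.nonzero(p[order] <= thresholds)[0]
        if passed.size == 0:
            return np.array([], dtype=int)
        k = passed.max()                  # largest rank satisfying the B-H criterion
        return order[: k + 1]             # indices of the selected peaks

    pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.64]
    print(bh_select(pvals, q=0.05))       # indices of the peaks retained by B-H
    ```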

  11. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    PubMed Central

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID:23308147

  12. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
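
    As a small illustration of the evaluation criterion, the sketch below computes the empirical AUC of a linear combination of many weak markers. The equal weights, sample sizes and effect size are assumptions; the proposed pairwise search is not reproduced.

    ```python
    # Minimal sketch: empirical AUC of a linear combination of weak biomarkers.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_cases, n_controls, p = 80, 120, 30
    cases = rng.standard_normal((n_cases, p)) + 0.15     # small mean shift per marker
    controls = rng.standard_normal((n_controls, p))

    X = np.vstack([cases, controls])
    y = np.r_[np.ones(n_cases), np.zeros(n_controls)]

    w = np.ones(p)                        # naive equal-weight combination
    score = X @ w
    # Combined AUC is well above that of any single weak marker in this toy setup.
    print(round(roc_auc_score(y, score), 3))
    ```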

  13. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  14. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed under the multi-pollutant setting.
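
    One of the listed corrections, SIMEX, lends itself to a compact sketch: refit the exposure-response model after adding extra measurement error of variance lambda times sigma_u squared, then extrapolate the coefficient back to lambda = -1. The simulated exposure, error variance and quadratic extrapolant below are assumptions, not values from any of the reviewed studies.

    ```python
    # Minimal sketch of SIMEX for a linear exposure-response model with classical
    # measurement error in the exposure. All data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, sigma_u = 5000, 0.8, 0.5
    x = rng.standard_normal(n)                      # true exposure
    y = beta * x + rng.standard_normal(n)           # health outcome
    w = x + sigma_u * rng.standard_normal(n)        # error-prone measured exposure

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    betas = []
    for lam in lambdas:
        b_reps = []
        for _ in range(50):                         # average over added-noise replicates
            w_lam = w + np.sqrt(lam) * sigma_u * rng.standard_normal(n)
            b_reps.append(np.polyfit(w_lam, y, 1)[0])
        betas.append(np.mean(b_reps))

    # Quadratic extrapolation of beta(lambda) to lambda = -1 (the SIMEX estimate).
    coefs = np.polyfit(lambdas, betas, 2)
    print(round(np.polyval(coefs, -1.0), 3))        # closer to the true beta of 0.8
    ```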

  15. Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components

    PubMed Central

    Wang, Min; Kornblau, Steven M; Coombes, Kevin R

    2018-01-01

    Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable. PMID:29881252
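
    For orientation, the sketch below applies one common and simple rule for choosing the number of PCs, the broken-stick criterion: keep components whose explained-variance fraction exceeds the broken-stick expectation. This is not the PCDimension algorithm; the simulated data with three latent components are an assumption.

    ```python
    # Minimal sketch: broken-stick rule for the number of significant PCs.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    scores = rng.standard_normal((200, 3)) * np.array([6.0, 4.0, 3.0])
    loadings = rng.standard_normal((3, 40))
    X = scores @ loadings + 0.3 * rng.standard_normal((200, 40))   # 3 components + noise

    pca = PCA().fit(X)
    frac = pca.explained_variance_ratio_
    p = len(frac)
    broken_stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p
                             for k in range(1, p + 1)])
    n_components = int(np.argmax(frac < broken_stick))   # first PC below the expectation
    print(n_components)                                  # typically 3 for this toy setup
    ```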

  16. Recombination energy in double white dwarf formation

    NASA Astrophysics Data System (ADS)

    Nandez, J. L. A.; Ivanova, N.; Lombardi, J. C.

    2015-06-01

    In this Letter, we investigate the role of recombination energy during a common envelope event. We confirm that taking this energy into account helps to avoid the formation of the circumbinary envelope commonly found in previous studies. For the first time, we can model a complete common envelope event, with a clean compact double white dwarf binary system formed at the end. The resulting binary orbit is almost perfectly circular. In addition to considering recombination energy, we also show that between 1/4 and 1/2 of the released orbital energy is taken away by the ejected material. We apply this new method to the case of the double white dwarf system WD 1101+364, and we find that the progenitor system at the start of the common envelope event consisted of an ~1.5 M⊙ red giant star in an ~30 d orbit with a white dwarf companion.

  17. Improved high-throughput quantification of luminescent microplate assays using a common Western-blot imaging system.

    PubMed

    Hawkins, Liam J; Storey, Kenneth B

    2017-01-01

    Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost-saving measure, as Western-blot imaging systems are common laboratory equipment and could substitute for a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals are dependent on the angle of incidence to the camera, and thus the location of the well on the microplate. Here we show that: • The position of a well on a microplate significantly affects the signal captured by a common Western-blot imaging system from a luminescent assay. • The effect of well position can easily be corrected for. • This method can be applied to commercially available luminescent assays, allowing for high-throughput quantification of a wide range of biological processes and biochemical reactions.
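
    A correction of the kind described can be sketched as follows, assuming a calibration plate in which every well contains the same luminescent standard: each well's correction factor is the plate mean divided by that well's calibration signal. The plate geometry, falloff pattern and noise level are invented for illustration and do not come from the paper.

    ```python
    # Minimal sketch of a per-well position correction derived from a uniform
    # calibration plate. Hypothetical numbers throughout.
    import numpy as np

    rng = np.random.default_rng(0)
    true_bias = np.linspace(1.0, 0.7, 96).reshape(8, 12)        # position-dependent falloff
    calibration = 1000.0 * true_bias * (1 + 0.01 * rng.standard_normal((8, 12)))
    correction = calibration.mean() / calibration                # per-well factors

    assay = 500.0 * true_bias                                    # raw assay readings
    corrected = assay * correction
    # Coefficient of variation across wells drops sharply after correction.
    print(assay.std() / assay.mean(), corrected.std() / corrected.mean())
    ```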

  18. Computational intelligence in bioinformatics: SNP/haplotype data in genetic association study for common diseases.

    PubMed

    Kelemen, Arpad; Vasilakos, Athanasios V; Liang, Yulan

    2009-09-01

    Comprehensive evaluation of common genetic variations through association of single-nucleotide polymorphism (SNP) structure with common complex disease in the genome-wide scale is currently a hot area in human genome research due to the recent development of the Human Genome Project and HapMap Project. Computational science, which includes computational intelligence (CI), has recently become the third method of scientific enquiry besides theory and experimentation. There have been fast growing interests in developing and applying CI in disease mapping using SNP and haplotype data. Some of the recent studies have demonstrated the promise and importance of CI for common complex diseases in genomic association study using SNP/haplotype data, especially for tackling challenges, such as gene-gene and gene-environment interactions, and the notorious "curse of dimensionality" problem. This review provides coverage of recent developments of CI approaches for complex diseases in genetic association study with SNP/haplotype data.

  19. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    PubMed Central

    Király, András; Abonyi, János

    2014-01-01

    During the last decade, various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). A common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers. PMID:24616651
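
    The core bit-table idea can be sketched with vectorized boolean operations: with transactions as rows and items as columns, the support of an itemset is the fraction of rows whose corresponding bits are all set. The toy matrix below is an assumption; the paper's MATLAB implementation and its closed-itemset bookkeeping are not reproduced.

    ```python
    # Minimal sketch: itemset support from a boolean bit-table via vectorized ops.
    import numpy as np

    B = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [1, 1, 1, 1]], dtype=bool)       # 4 transactions x 4 items

    def support(bit_table, itemset):
        """Fraction of rows containing every item in the itemset, plus those rows."""
        mask = np.logical_and.reduce(bit_table[:, list(itemset)], axis=1)
        return mask.mean(), np.nonzero(mask)[0]

    sup, rows = support(B, (0, 1))                 # itemset {0, 1}
    print(sup, rows)                               # 0.75 and transactions 0, 1, 3
    ```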

  20. Radical Initiated Hydrosilylation on Silicon Nanocrystal Surfaces: An Evaluation of Functional Group Tolerance and Mechanistic Study.

    PubMed

    Yang, Zhenyu; Gonzalez, Christina M; Purkait, Tapas K; Iqbal, Muhammad; Meldrum, Al; Veinot, Jonathan G C

    2015-09-29

    Hydrosilylation is among the most common methods used for modifying silicon surface chemistry. It provides a wide range of surface functionalities and effective passivation of surface sites. Herein, we report a systematic study of radical initiated hydrosilylation of silicon nanocrystal (SiNC) surfaces using two common radical initiators (i.e., 2,2'-azobis(2-methylpropionitrile) and benzoyl peroxide). Compared to other widely applied hydrosilylation methods (e.g., thermal, photochemical, and catalytic), the radical initiator based approach is particle size independent, requires comparatively low reaction temperatures, and yields monolayer surface passivation after short reaction times. The effects of differing functional groups (i.e., alkene, alkyne, carboxylic acid, and ester) on the radical initiated hydrosilylation are also explored. The results indicate functionalization occurs and results in the formation of monolayer passivated surfaces.

  1. Comparison of Classical and Lazy Approach in SCG Compiler

    NASA Astrophysics Data System (ADS)

    Jirák, Ota; Kolář, Dušan

    2011-09-01

    Existing parsing methods for scattered context grammars usually expand nonterminals deep in the pushdown. This expansion is implemented using either a linked list or some kind of auxiliary pushdown. This paper describes the parsing algorithm of an LL(1) scattered context grammar. The algorithm merges two principles. The first is a table-driven parsing method commonly used for parsing context-free grammars. The second is delayed execution, as used in functional programming. The main part of this paper is a proof of equivalence between the common principle (the whole rule is applied at once) and our approach (execution of the rules is delayed). Therefore, this approach works with the pushdown top only. In most cases, the second approach is faster than the first. Finally, future work is discussed.

  2. An Application of Unfolding and Cumulative Item Response Theory Models for Noncognitive Scaling: Examining the Assumptions and Applicability of the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Sgammato, Adrienne N.

    2009-01-01

    This study examined the applicability of a relatively new unidimensional, unfolding item response theory (IRT) model called the generalized graded unfolding model (GGUM; Roberts, Donoghue, & Laughlin, 2000). A total of four scaling methods were applied. Two commonly used cumulative IRT models for polytomous data, the Partial Credit Model and…

  3. Nocturnal bees are attracted by widespread floral scents.

    PubMed

    Carvalho, Airton Torres; Maia, Artur Campos Dalia; Ojima, Poliana Yumi; dos Santos, Adauto A; Schlindwein, Clemens

    2012-03-01

    Flower localization in darkness is a challenging task for nocturnal pollinators. Floral scents often play a crucial role in guiding them towards their hosts. Using common volatile compounds of floral scents, we trapped female nocturnal Megalopta-bees (Halictidae), thus uncovering olfactory cues involved in their search for floral resources. Applying a new sampling method hereby described, we offer novel perspectives on the investigation of nocturnal bees.

  4. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. ... experimental methods, used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ... triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for

  5. Determination of Pb in Biological Samples by Graphite Furnace Atomic Absorption Spectrophotometry: An Exercise in Common Interferences and Fundamental Practices in Trace Element Determination

    ERIC Educational Resources Information Center

    Spudich, Thomas M.; Herrmann, Jennifer K.; Fietkau, Ronald; Edwards, Grant A.

    2004-01-01

    An experiment is conducted to ascertain trace-level Pb in samples of bovine liver or muscle by applying graphite furnace atomic absorption spectrophotometry (GFAAS). The primary objective is to demonstrate the effects of physical and spectral interferences in trace element determination, and to present the usual methods employed to minimize accuracy errors…

  6. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

    application of Bayes’ Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods... reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past... Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning.

  7. Laser isotope separation of erbium and other isotopes

    DOEpatents

    Haynam, Christopher A.; Worden, Earl F.

    1995-01-01

    Laser isotope separation is accomplished using at least two photoionization pathways of an isotope simultaneously, where each pathway comprises two or more transition steps. This separation method has been applied to the selective photoionization of erbium isotopes, particularly for the enrichment of ¹⁶⁷Er. The hyperfine structure of ¹⁶⁷Er was used to find two three-step photoionization pathways having a common upper energy level.

  8. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  9. Data Mining Methods for Recommender Systems

    NASA Astrophysics Data System (ADS)

    Amatriain, Xavier; Jaimes*, Alejandro; Oliver, Nuria; Pujol, Josep M.

    In this chapter, we give an overview of the main Data Mining techniques used in the context of Recommender Systems. We first describe common preprocessing methods such as sampling or dimensionality reduction. Next, we review the most important classification techniques, including Bayesian Networks and Support Vector Machines. We describe the k-means clustering algorithm and discuss several alternatives. We also present association rules and related algorithms for an efficient training process. In addition to introducing these techniques, we survey their uses in Recommender Systems and present cases where they have been successfully applied.
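
    As a small example of one of the techniques surveyed, the sketch below clusters users by their rating vectors with k-means, a common neighborhood-formation step in collaborative filtering. The random ratings and the choice of five clusters are placeholders.

    ```python
    # Minimal sketch: k-means clustering of users by their rating profiles.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(500, 30)).astype(float)   # users x items, 1-5 stars

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)
    labels = km.labels_                        # cluster assignment per user
    centroids = km.cluster_centers_            # average rating profile per cluster
    print(np.bincount(labels))                 # cluster sizes
    ```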

  10. Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    1991-01-01

    A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.

  11. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of errors is described. Common sources of temperature and voltage measurement errors which may impact accurate measurement are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described to operate from room temperature up to 1300K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate, and are suitable for bulk samples with a broad range of physical types and shapes.

  12. Chapter 10 Human Oocyte Vitrification.

    PubMed

    Rienzi, Laura; Cobo, Ana; Ubaldi, Filippo Maria

    2017-01-01

    Discovery and widespread application of successful cryopreservation methods for MII-phase oocytes was one of the greatest successes in human reproduction during the past decade. Although considerable improvements in traditional slow-rate freezing were also achieved, the real breakthrough was the result of introduction of vitrification. Here we describe the method that is most commonly applied for this purpose, provides consistent survival and in vitro developmental rates, results in pregnancy and birth rates comparable to those achievable with fresh oocytes, and does not result in higher incidence of gynecological or postnatal complications.

  13. When the firm prevents the crash: Avoiding market collapse with partial control

    PubMed Central

    2017-01-01

    Market collapse is one of the most dramatic events in economics. Such a catastrophic event can emerge from the nonlinear interactions between the economic agents at the micro level of the economy. Transient chaos might be a good description of how a collapsing market behaves. In this work, we apply a new control method, the partial control method, with the goal of avoiding this disastrous event. Contrary to common control methods that try to influence the system from the outside, here the market is controlled from the bottom up by one of the most basic components of the market—the firm. This is the first time that the partial control method is applied on a strictly economical system in which we also introduce external disturbances. We show how the firm is capable of controlling the system avoiding the collapse by only adjusting the selling price of the product or the quantity of production in accordance to the market circumstances. Additionally, we demonstrate how a firm with a large market share is capable of influencing the demand achieving price stability across the retail and wholesale markets. Furthermore, we prove that the control applied in both cases is much smaller than the external disturbances. PMID:28832608

  14. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. Particularly, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from high computational complexity of O(n³)+O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields, continuous occupancy maps, using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
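
    The local-GP idea can be sketched by partitioning the training points with k-means and fitting an independent Gaussian process classifier per cluster, so that no single GP ever sees all n points. The 2-D toy occupancy rule, cluster count and kernel below are assumptions; the paper's coarse-to-fine clustering and marching-cubes surface extraction are not reproduced.

    ```python
    # Minimal sketch: local Gaussian process classifiers for occupancy-style data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(600, 2))                    # point locations
    y = (np.sin(X[:, 0]) + np.sin(X[:, 1]) > 0).astype(int)  # 1 = occupied (toy rule)

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    local_gps = {c: GaussianProcessClassifier(kernel=RBF(length_scale=1.0))
                    .fit(X[km.labels_ == c], y[km.labels_ == c])
                 for c in range(4)}

    # Query: route each test point to the local GP of its nearest cluster.
    X_test = rng.uniform(0, 10, size=(5, 2))
    nearest = km.predict(X_test)
    prob_occupied = np.array([local_gps[c].predict_proba(X_test[i:i + 1])[0, 1]
                              for i, c in enumerate(nearest)])
    print(prob_occupied.round(2))
    ```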

  15. Healthy control subjects are poorly defined in case-control studies of irritable bowel syndrome

    PubMed Central

    Ghorbani, Shireen; Nejad, Amir; Law, David; Chua, Kathleen S.; Amichai, Meridythe M.; Pimentel, Mark

    2015-01-01

    Background: Case-control studies are vital for understanding the pathophysiology of gastrointestinal disease. While the definition of disease is clear, the definition of healthy control is not. This is particularly relevant for functional bowel diseases such as irritable bowel syndrome (IBS). In this study, a systematic review formed the basis for a prospective study evaluating the effectiveness of commonly used techniques for defining healthy controls in IBS. Methods: A systematic review of the literature was conducted to identify case-control studies involving functional gastrointestinal disorders. “Lack of Rome criteria”, self-description as “healthy” and the bowel disease questionnaire (BDQ) were common methods for identifying healthy controls. These 3 methods were then applied to a cohort of 53 non-patient subjects to determine their validity compared to objective outcome measures (7-day stool diary). Results: “Lack of Rome criteria” and “healthy” self-description were the most common methods for identifying healthy control subjects, but many studies failed to describe the methods used. In the prospective study, more subjects were identified as non-healthy using the BDQ than using either lack of Rome criteria (P=0.01) or “healthy” self-description (P=0.026). Furthermore, stool diaries identified several subjects with abnormal stool form and/or frequency which were not identified using lack of Rome criteria or the “healthy” question. Comparisons revealed no agreement (κ) between the different methods for defining healthy controls. Conclusions: The definitions of healthy controls in studies of functional bowel diseases such as IBS are inconsistent. Since functional symptoms are common, a strict definition of “normal” is needed in this area of research. PMID:25609236

  16. FAVR (Filtering and Annotation of Variants that are Rare): methods to facilitate the analysis of rare germline genetic variants from massively parallel sequencing datasets

    PubMed Central

    2013-01-01

    Background: Characterising genetic diversity through the analysis of massively parallel sequencing (MPS) data offers enormous potential to significantly improve our understanding of the genetic basis for observed phenotypes, including predisposition to and progression of complex human disease. Great challenges remain in resolving genetic variants that are genuine from the millions of artefactual signals. Results: FAVR is a suite of new methods designed to work with commonly used MPS analysis pipelines to assist in the resolution of some of the issues related to the analysis of the vast amount of resulting data, with a focus on relatively rare genetic variants. To the best of our knowledge, no equivalent method has previously been described. The most important and novel aspect of FAVR is the use of signatures in comparator sequence alignment files during variant filtering, and annotation of variants potentially shared between individuals. The FAVR methods use these signatures to facilitate filtering of (i) platform and/or mapping-specific artefacts, (ii) common genetic variants, and, where relevant, (iii) artefacts derived from imbalanced paired-end sequencing, as well as annotation of genetic variants based on evidence of co-occurrence in individuals. We applied conventional variant calling to whole-exome sequencing datasets, produced using both SOLiD and TruSeq chemistries, with or without downstream processing by FAVR methods. We demonstrate a 3-fold smaller rare single nucleotide variant shortlist with no detected reduction in sensitivity. This analysis included Sanger sequencing of rare variant signals not evident in dbSNP131, assessment of known variant signal preservation, and comparison of observed and expected rare variant numbers across a range of first cousin pairs. The principles described herein were applied in our recent publication identifying XRCC2 as a new breast cancer risk gene and have been made publicly available as a suite of software tools. Conclusions: FAVR is a platform-agnostic suite of methods that significantly enhances the analysis of large volumes of sequencing data for the study of rare genetic variants and their influence on phenotypes. PMID:23441864

  17. Visual Occlusion During Minimally Invasive Surgery: A Contemporary Review of Methods to Reduce Laparoscopic and Robotic Lens Fogging and Other Sources of Optical Loss.

    PubMed

    Manning, Todd G; Perera, Marlon; Christidis, Daniel; Kinnear, Ned; McGrath, Shannon; O'Beirne, Richard; Zotov, Paul; Bolton, Damien; Lawrentschuk, Nathan

    2017-04-01

    Maintenance of optimal vision during minimally invasive surgery is crucial to maintaining operative awareness, efficiency, and safety. Hampered vision is commonly caused by laparoscopic lens fogging (LLF), which has prompted the development of various antifogging fluids and warming devices. However, limited comparative evidence exists in contemporary literature. Despite technologic advancements there remains no consensus as to superior methods to prevent LLF or restore visual acuity once LLF has occurred. We performed a review of literature to present the current body of evidence supporting the use of numerous techniques. A standardized Preferred Reporting Items for Systematic Reviews and Meta-Analysis review was performed, and PubMed, Embase, Web of Science, and Google Scholar were searched. Articles pertaining to mechanisms and prevention of LLF were reviewed. We applied no limit to year of publication or publication type and all articles encountered were included in final review. Limited original research and heterogenous outcome measures precluded meta-analytical assessment. Vision loss has a multitude of causes and although scientific theory can be applied to in vivo environments, no authors have completely characterized this complex problem. No method to prevent or correct LLF was identified as superior to others and comparative evidence is minimal. Robotic LLF was poorly investigated and aside from a single analysis has not been directly compared to standard laparoscopic fogging in any capacity. Obscured vision during surgery is hazardous and typically caused by LLF. The etiology of LLF despite application of scientific theory is yet to be definitively proven in the in vivo environment. Common methods of prevention of LLF or restoration of vision due to LLF have little evidence-based data to support their use. A multiarm comparative in vivo analysis is required to formally assess these commonly used techniques in both standard and robotic laparoscopes.

  18. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
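
    The flavour of integral-based identification can be shown on a first-order single-compartment model P(t) = E·V(t) + R·Q(t) + P0: integrating both sides turns the fit into a linear least-squares problem on cumulative integrals, which is far less noise-sensitive than differentiation. This is only a sketch; the paper uses a second-order model and an iterative scheme, and the waveforms below are synthetic.

    ```python
    # Minimal sketch of integral-based identification for a first-order
    # single-compartment respiratory model P = E*V + R*Q + P0.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    rng = np.random.default_rng(0)
    t = np.linspace(0, 3, 301)                        # one breath, seconds
    Q = np.sin(np.pi * t / 3.0)                       # flow (L/s), toy waveform
    V = cumulative_trapezoid(Q, t, initial=0.0)       # volume (L)
    E_true, R_true, P0_true = 25.0, 8.0, 5.0
    P = E_true * V + R_true * Q + P0_true + 0.5 * rng.standard_normal(t.size)

    # Integral formulation: int(P) = E*int(V) + R*int(Q) + P0*t, solved by least squares.
    A = np.column_stack([cumulative_trapezoid(V, t, initial=0.0),
                         cumulative_trapezoid(Q, t, initial=0.0),
                         t])
    b = cumulative_trapezoid(P, t, initial=0.0)
    E_hat, R_hat, P0_hat = np.linalg.lstsq(A, b, rcond=None)[0]
    print(round(E_hat, 1), round(R_hat, 1), round(P0_hat, 1))   # near the true values
    ```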

  19. Applying knowledge-anchored hypothesis discovery methods to advance clinical and translational research: the OAMiner project

    PubMed Central

    Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N

    2012-01-01

    The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689

  20. Signal processing methods for in-situ creep specimen monitoring

    NASA Astrophysics Data System (ADS)

    Guers, Manton J.; Tittmann, Bernhard R.

    2018-04-01

    Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
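
    The two time-of-flight strategies named above can be sketched as follows; the signal names, sampling rate, and window size are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.signal import hilbert

      def envelope_tof(waveform, fs):
          """Group-velocity estimate: time of the analytic-envelope maximum."""
          envelope = np.abs(hilbert(waveform))
          return np.argmax(envelope) / fs

      def peak_tracking_tof(waveform, fs, t_previous, half_window=50):
          """Phase-velocity estimate: track a local minimum of the raw waveform
          near the arrival time found in the previous measurement."""
          i0 = int(t_previous * fs)
          lo, hi = max(i0 - half_window, 0), i0 + half_window
          return (lo + np.argmin(waveform[lo:hi])) / fs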

  1. Development of Control System for Hydrolysis Crystallization Process

    NASA Astrophysics Data System (ADS)

    Wan, Feng; Shi, Xiao-Ming; Feng, Fang-Fang

    2016-05-01

    Sulfate method for producing titanium dioxide is commonly used in China, but the determination of crystallization time is artificially which leads to a big error and is harmful to the operators. In this paper a new method for determining crystallization time is proposed. The method adopts the red laser as the light source, uses the silicon photocell as reflection light receiving component, using optical fiber as the light transmission element, differential algorithm is adopted in the software to realize the determination of the crystallizing time. The experimental results show that the method can realize the determination of crystallization point automatically and accurately, can replace manual labor and protect the health of workers, can be applied to practice completely.

  2. Toward a new methodological paradigm for testing theories of health behavior and health behavior change.

    PubMed

    Noar, Seth M; Mehrotra, Purnima

    2011-03-01

    Traditional theory testing commonly applies cross-sectional (and occasionally longitudinal) survey research to test health behavior theory. Since such correlational research cannot demonstrate causality, a number of researchers have called for the increased use of experimental methods for theory testing. We introduce the multi-methodological theory-testing (MMTT) framework for testing health behavior theory. The MMTT framework introduces a set of principles that broaden the perspective of how we view evidence for health behavior theory. It suggests that while correlational survey research designs represent one method of testing theory, the weaknesses of this approach demand that complementary approaches be applied. Such approaches include randomized lab and field experiments, mediation analysis of theory-based interventions, and meta-analysis. These alternative approaches to theory testing can demonstrate causality in a much more robust way than is possible with correlational survey research methods. Such approaches should thus be increasingly applied in order to more completely and rigorously test health behavior theory. Greater application of research derived from the MMTT may lead researchers to refine and modify theory and ultimately make theory more valuable to practitioners. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors

    PubMed Central

    Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig

    2015-01-01

    Why do mouse corneal epithelial cells display spiraling patterns? We aim to explain this curious phenomenon by applying an idealized problem-solving process. Specifically, we applied complementary line-fitting methods to measure the transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes in order to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum-shear-strain-based pathlines, but the computed dimensions displayed pitch angles of 35° (the ϕ spiral is ∼17°); this changed when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in the context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates nuances of executing a common approach to doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620
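
    A minimal sketch of the line-fitting step, assuming digitised points (theta, r) along a cell arrangement: a logarithmic spiral r = a·exp(b·theta) is linear in log space, and its pitch angle is arctan(b), about 17° for the golden (ϕ-based) spiral.

      import numpy as np

      def log_spiral_pitch(theta, r):
          # fit log(r) = log(a) + b*theta, then report the pitch angle in degrees
          b, log_a = np.polyfit(theta, np.log(r), 1)
          return np.degrees(np.arctan(b)), np.exp(log_a)

      # e.g. a golden spiral grows by phi every quarter turn
      phi = (1 + 5 ** 0.5) / 2
      theta = np.linspace(0.1, 4 * np.pi, 200)
      r = np.exp(np.log(phi) / (np.pi / 2) * theta)
      print(log_spiral_pitch(theta, r))          # pitch angle close to 17 degrees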

  4. Estimation of adsorption isotherm and mass transfer parameters in protein chromatography using artificial neural networks.

    PubMed

    Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen

    2017-03-03

    Mechanistic modeling has been repeatedly and successfully applied in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches need to be conducted repeatedly. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability for parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
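
    The calibration workflow described above can be caricatured as follows; the "chromatogram simulator" below is a toy Gaussian peak rather than the transport-dispersive or stoichiometric displacement models, and the parameter ranges and network size are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def simulate(params, t):
          # toy stand-in for a mechanistic chromatography model
          retention, dispersion = params
          return np.exp(-(t - 10 * retention) ** 2 / (2 * dispersion ** 2))

      t = np.linspace(0, 20, 200)
      rng = np.random.default_rng(0)
      params = rng.uniform([0.5, 0.5], [1.5, 2.0], size=(2000, 2))    # in silico screen
      chromatograms = np.array([simulate(p, t) for p in params])

      ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      ann.fit(chromatograms, params)             # learn chromatogram -> parameters

      measured = simulate([1.1, 1.3], t)         # stand-in for a measured chromatogram
      print(ann.predict(measured.reshape(1, -1)))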

  5. Methods for measuring water activity (aw) of foods and its applications to moisture sorption isotherm studies.

    PubMed

    Zhang, Lida; Sun, Da-Wen; Zhang, Zhihang

    2017-03-24

    Moisture sorption isotherms are commonly determined by the saturated salt slurry method, which suffers from long measurement times, cumbersome labor, and microbial deterioration of samples. Thus, a novel method, the water activity measurement (AWM) method, has been developed to overcome these drawbacks. The fundamentals and applications of this fast method are introduced with respect to its typical operational steps, the variety of equipment set-ups, and the samples to which it has been applied. Its rapidness and reliability are evaluated by comparison with conventional methods. This review also discusses factors impairing measurement precision and accuracy, including an inappropriate choice of pre-drying/wetting technique and moisture non-uniformity in samples due to inadequate equilibration time. This analysis and the corresponding suggestions can facilitate an improved AWM method with more satisfactory accuracy and time cost.

  6. Surface Passivation in Empirical Tight Binding

    NASA Astrophysics Data System (ADS)

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2016-03-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; and 2) methods that incorporate passivation implicitly but do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. The method is applied to a Si quantum well and a Si ultra-thin-body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H-atom passivation overestimates the transistor performance.

  7. Turkish Nurses' Use of Nonpharmacological Methods for Relieving Children's Postoperative Pain.

    PubMed

    Çelebioğlu, Ayda; Küçükoğlu, Sibel; Odabaşoğlu, Emel

    2015-01-01

    The experience of pain is frequently observed among children undergoing surgery. Hospitalization and surgery are stressful experiences for those children. The research was conducted to investigate and analyze Turkish nurses' use of nonpharmacological methods to relieve postoperative pain in children. The study was cross-sectional and descriptive. The study took place at 2 hospitals in eastern Turkey. Participants were 143 nurses whose patients had undergone surgical procedures at the 2 hospitals. The researchers used a questionnaire, a checklist of nonpharmacological methods, and a visual analogue scale (VAS) to collect the data. To assess the data, descriptive statistics and the χ² test were used. Of the 143 nurses, 73.4% initially had applied medication when the children had pain. Most of the nurses (58.7%) stated the children generally experienced a middle level of postoperative pain. The most frequent practices that the nurses applied after the children's surgery were (1) "providing verbal encouragement" (90.2%), a cognitive-behavioral method; (2) "a change in the child's position" (85.3%), a physical method; (3) "touch" (82.5%), a method of emotional support; and (4) "ventilation of the room" (79.7%), a regulation of the surroundings. Compared with participants with other educational levels, the cognitive-behavioral methods were the ones most commonly used by the more educated nurses (P < .05): (1) encouraging patients with rewards, (2) helping them think happy thoughts, (3) helping them use their imaginations, (4) providing music, and (5) reading books. Female nurses used the following methods more than the male nurses did (P < .05): (1) providing encouragement with rewards, (2) helping patients with deep breathing, (3) keeping a desired item beside them, (4) changing their positions, and (5) ventilating the room. Undergoing surgery is generally a painful experience for children. Nurses most commonly use cognitive-behavioral methods in the postoperative care of their pediatric patients after surgery.

  8. Decision curve analysis: a novel method for evaluating prediction models.

    PubMed

    Vickers, Andrew J; Elkin, Elena B

    2006-01-01

    Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
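
    The core quantity is simple to state: at a threshold probability pt, net benefit = TP/n − (FP/n)·pt/(1 − pt). A minimal sketch, with variable names assumed for illustration:

      import numpy as np

      def net_benefit(y_true, p_pred, pt):
          # y_true: 0/1 outcomes, p_pred: predicted probabilities, pt: threshold
          n = len(y_true)
          treat = p_pred >= pt
          tp = np.sum(treat & (y_true == 1))
          fp = np.sum(treat & (y_true == 0))
          return tp / n - fp / n * pt / (1 - pt)

      # decision curve: net benefit of the model across thresholds, compared with
      # the default strategies "treat all" (p_pred = 1 for everyone) and
      # "treat none" (net benefit 0 at every threshold)
      # curve = [net_benefit(y, p, pt) for pt in np.linspace(0.05, 0.95, 19)]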

  9. Interferometric imaging of crustal structure from wide-angle multicomponent OBS-airgun data

    NASA Astrophysics Data System (ADS)

    Shiraishi, K.; Fujie, G.; Sato, T.; Abe, S.; Asakawa, E.; Kodaira, S.

    2015-12-01

    In wide-angle seismic surveys with ocean bottom seismographs (OBS) and airguns, surface-related multiple reflections and upgoing P-to-S conversions are frequently observed. We applied two interferometric imaging methods to multicomponent OBS data in order to make fuller use of the seismic signals for subsurface imaging. First, seismic interferometry (SI) is applied to the vertical component in order to obtain a reflection profile that includes multiple reflections. By correlating seismic traces on common receiver records, pseudo seismic data are generated with virtual sources and receivers located at all original shot positions. We adopt deconvolution SI because the source and receiver spectra can be cancelled by spectral division. Consequently, gapless reflection images from just below the seafloor down to greater depths are obtained. Second, receiver function (RF) imaging is applied to the multicomponent OBS data in order to image the P-to-S conversion boundaries. Although RF analysis is commonly applied to teleseismic data, our purpose is to extract upgoing PS converted waves from wide-angle OBS data. The RF traces are synthesized by deconvolution of the radial and vertical components at the same OBS location for each shot. The final section obtained by stacking RF traces shows the PS conversion boundaries beneath the OBSs. The Vp/Vs ratio can then be estimated by comparing the one-way traveltime delay with the two-way traveltime of P-wave reflections. We applied these methods to two field data sets: (a) a 175 km survey in the Nankai Trough subduction zone using 71 OBSs at intervals of 1 km to 10 km and 878 shots at 200 m intervals, and (b) a 237 km survey in the northwest Pacific Ocean, with almost flat layers before subduction, using 25 OBSs at 6 km intervals and 1188 shots at 200 m intervals. In our study, SI imaging with multiple reflections proves highly applicable to OBS data even in a complex geological setting, and the PS conversion boundary is well imaged by RF imaging, with the Vp/Vs ratio distribution in the sediment estimated in the case of simple structure.
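
    A minimal sketch of the deconvolution step on two traces of a common-receiver gather; the water-level term is an assumed regularisation used to stabilise the spectral division, not a detail taken from the abstract.

      import numpy as np

      def deconv_interferometry(trace_a, trace_b, water_level=1e-2):
          """Cross-deconvolve two traces of a common receiver record; the source
          and receiver spectra cancel in the spectral division."""
          A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
          denom = np.maximum(np.abs(B) ** 2, water_level * np.max(np.abs(B) ** 2))
          virtual = A * np.conj(B) / denom      # virtual source at the position of B
          return np.fft.irfft(virtual, n=len(trace_a))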

  10. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917

  11. Inductive Double-Contingency Analysis of UO2 Powder Bulk Blending Operations at a Commercial Fuel Plant (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skiles, S. K.

    1994-12-22

    An inductive double-contingency analysis (DCA) method, developed by the criticality safety function at the Savannah River Site, was applied in Criticality Safety Evaluations (CSEs) of five major plant process systems at the Westinghouse Electric Corporation's Commercial Nuclear Fuel Manufacturing Plant in Columbia, South Carolina (WEC-Cola.). The method emphasizes a thorough evaluation of the controls intended to provide barriers against criticality for postulated initiating events, and has been demonstrated effective at identifying common-mode failure potential and interdependence among multiple controls. A description of the method and an example of its application are provided.

  12. Apollo/Skylab suit program management systems study. Volume 2: Cost analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The business management methods employed in the performance of the Apollo-Skylab Suit Program are studied. The data accumulated over the span of the contract, as well as the methods used to accumulate the data, are examined. Management methods associated with the monitoring and control of resources applied towards the performance of the contract are also studied, and recommendations are made. The primary objective is the compilation, analysis, and presentation of historical cost performance criteria. Cost data are depicted for all phases of the Apollo-Skylab program in common, meaningful terms, whereby the data may be applicable to future suit program planning efforts.

  13. Tools, information sources, and methods used in deciding on drug availability in HMOs.

    PubMed

    Barner, J C; Thomas, J

    1998-01-01

    The use and importance of specific decision-making tools, information sources, and drug-use management methods in determining drug availability and use in HMOs were studied. A questionnaire was sent to 303 randomly selected HMOs. Respondents were asked to rate their use of each of four formal decision-making tools and its relative importance, as well as the use and importance of eight information sources and 11 methods for managing drug availability and use, on a 5-point scale. The survey response rate was 28%. Approximately half of the respondents reported that their HMOs used decision analysis or multiattribute analysis in deciding on drug availability. If used, these tools were rated as very important. There were significant differences in levels of use by HMO type, membership size, and age. Journal articles and reference books were reported most often as information sources. Retrospective drug-use review was used very often and perceived to be very important in managing drug use. Other management methods were used only occasionally, but the importance placed on these tools when used ranged from moderately to very important. Older organizations used most of the management methods more often than did other HMOs. Decision analysis and multiattribute analysis were the most commonly used tools for deciding on which drugs to make available to HMO members, and reference books and journal articles were the most commonly used information sources. Retrospective and prospective drug-use reviews were the most commonly applied methods for managing HMO members' access to drugs.

  14. Research on common methods for evaluating the operation effect of integrated wastewater treatment facilities of iron and steel enterprises

    NASA Astrophysics Data System (ADS)

    Bingsheng, Xu

    2017-04-01

    Considering the large quantities of wastewater generated by iron and steel enterprises in China, this paper investigates common methods for evaluating the integrated wastewater treatment effect of iron and steel enterprises. Based on survey results on environmental protection performance, technological economy, resource and energy consumption, services, and management, an indicator system for evaluating the operation effect of integrated wastewater treatment facilities is set up. By discussing the standards and industrial policies in and outside China, 27 key secondary indicators are further defined on the basis of an investigation of the main equipment and key processes for wastewater treatment, so as to determine the method for setting the key quantitative and qualitative indicators of the evaluation indicator system. The work is also expected to satisfy the basic requirements of reasonable resource allocation, environmental protection and sustainable economic development, further improve the integrated wastewater treatment effect of iron and steel enterprises, and reduce the emission of hazardous substances and environmental impact.

  15. A Method for Comparing Multivariate Time Series with Different Dimensions

    PubMed Central

    Tapinos, Avraam; Mendes, Pedro

    2013-01-01

    In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554

  16. Non-invasive method for quantitative evaluation of exogenous compound deposition on skin.

    PubMed

    Stamatas, Georgios N; Wu, Jeff; Kollias, Nikiforos

    2002-02-01

    Topical application of active compounds on skin is common to both pharmaceutical and cosmetic industries. Quantification of the concentration of a compound deposited on the skin is important in determining the optimum formulation to deliver the pharmaceutical or cosmetic benefit. The most commonly used techniques to date are either invasive or not easily reproducible. In this study, we have developed a noninvasive alternative to these techniques based on spectrofluorimetry. A mathematical model based on diffusion approximation theory is utilized to correct fluorescence measurements for the attenuation caused by endogenous skin chromophore absorption. The limitation is that the compound of interest has to be either fluorescent itself or fluorescently labeled. We used the method to detect topically applied salicylic acid. Based on the mathematical model a calibration curve was constructed that is independent of endogenous chromophore concentration. We utilized the method to localize salicylic acid in epidermis and to follow its dynamics over a period of 3 d.

  17. Numerical studies of the thermal design sensitivity calculation for a reaction-diffusion system with discontinuous derivatives

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeen S.

    1987-01-01

    The aim of this study is to find a reliable numerical algorithm to calculate thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function which can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. The comparisons of numerical results obtained by these two methods demonstrate that the direct differentiation method is a better choice to be used in calculating thermal design sensitivity.
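
    The key device named above, replacing the discontinuity with a smooth "logical" function so that design derivatives exist, can be sketched as follows; the logistic steepness and the two-regime heat source are illustrative assumptions rather than the paper's cure model.

      import numpy as np

      def smooth_step(x, x_switch, k=50.0):
          # differentiable approximation of a step located at x_switch
          return 1.0 / (1.0 + np.exp(-k * (x - x_switch)))

      def heat_source(alpha, alpha_gel, q_before, q_after):
          # cure heat generation that changes regime at the gel point alpha_gel;
          # smooth_step keeps it differentiable with respect to the design variable
          s = smooth_step(alpha, alpha_gel)
          return (1 - s) * q_before + s * q_after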

  18. Rotation in vibration, optimization, and aeroelastic stability problems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.

    1974-01-01

    The effects of rotation in the areas of vibrations, dynamic stability, optimization, and aeroelasticity were studied. The governing equations of motion for the study of vibration and dynamic stability of a rapidly rotating deformable body were developed starting from the nonlinear theory of elasticity. Some common features such as the limitations of the classical theory of elasticity, the choice of axis system, the property of self-adjointness, the phenomenon of frequency splitting, shortcomings of stability methods as applied to gyroscopic systems, and the effect of internal and external damping on stability in gyroscopic systems are identified and discussed, and are then applied to three specific problems.

  19. The Applied Behavior Analysis Research Paradigm and Single-Subject Designs in Adapted Physical Activity Research.

    PubMed

    Haegele, Justin A; Hodge, Samuel Russell

    2015-10-01

    There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.

  20. Natural hazard metaphors for financial crises

    NASA Astrophysics Data System (ADS)

    Woo, Gordon

    2001-02-01

    Linguistic metaphors drawn from natural hazards are commonly used at times of financial crisis. A brewing storm, a seismic shock, etc., evoke the abruptness and severity of a market collapse. If the language of windstorms, earthquakes and volcanic eruptions is helpful in illustrating a financial crisis, what about the mathematics of natural catastrophes? Already, earthquake prediction methods have been applied to economic recessions, and volcanic eruption forecasting techniques have been applied to market crashes. The purpose of this contribution is to survey broadly the mathematics of natural catastrophes, so as to convey the range of underlying principles, some of which may serve as mathematical metaphors for financial applications.

  1. Mentorship and competencies for applied chronic disease epidemiology.

    PubMed

    Lengerich, Eugene J; Siedlecki, Jennifer C; Brownson, Ross; Aldrich, Tim E; Hedberg, Katrina; Remington, Patrick; Siegel, Paul Z

    2003-01-01

    To understand the potential and establish a framework for mentoring as a method to develop professional competencies of state-level applied chronic disease epidemiologists, model mentorship programs were reviewed, specific competencies were identified, and competencies were then matched to essential public health services. Although few existing mentorship programs in public health were identified, common themes in other professional mentorship programs support the potential of mentoring as an effective means to develop capacity for applied chronic disease epidemiology. Proposed competencies for chronic disease epidemiologists in a mentorship program include planning, analysis, communication, basic public health, informatics and computer knowledge, and cultural diversity. Mentoring may constitute a viable strategy to build chronic disease epidemiology capacity, especially in public health agencies where resource and personnel system constraints limit opportunities to recruit and hire new staff.

  2. *K-means and cluster models for cancer signatures.

    PubMed

    Kakushadze, Zura; Yu, Willie

    2017-09-01

    We present the *K-means clustering algorithm and source code, expanding the statistical clustering methods applied to quantitative finance in https://ssrn.com/abstract=2802753. *K-means is statistically deterministic without specifying initial centers, etc. We apply *K-means to extracting cancer signatures from genome data without using nonnegative matrix factorization (NMF). *K-means' computational cost is a fraction of NMF's. Using 1389 published samples for 14 cancer types, we find that 3 cancers (liver cancer, lung cancer and renal cell carcinoma) stand out and do not have cluster-like structures. Two clusters have especially high within-cluster correlations with 11 other cancers, indicating common underlying structures. Our approach opens a novel avenue for studying such structures. *K-means is universal and can be applied in other fields. We discuss some potential applications in quantitative finance.
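
    A rough sketch of the clustering step, using ordinary k-means from scikit-learn rather than the authors' *K-means variant; the mutation-count matrix below is random and only stands in for the 1389 published samples.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      counts = rng.poisson(5, size=(1389, 96))         # samples x 96 mutation categories
      profiles = counts / counts.sum(axis=1, keepdims=True)

      km = KMeans(n_clusters=14, n_init=10, random_state=0).fit(profiles)
      centres = km.cluster_centers_                    # candidate cluster-level signatures
      labels = km.labels_                              # cluster membership per sample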

  3. Pseudo 2D elastic waveform inversion for attenuation in the near surface

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Zhang, Jie

    2017-08-01

    Seismic waveform propagation can be significantly affected by heterogeneities in the near surface zone (0 m-500 m depth). As a result, it is important to obtain as much near surface information as possible. Seismic attenuation, characterized by the QP and QS factors, may affect the seismic waveform in both phase and amplitude; however, it is rarely estimated and applied to the near surface zone in seismic data processing. Applying a 1D elastic full waveform modelling program, we demonstrate that such effects cannot be overlooked in the waveform computation if the value of the Q factor is lower than approximately 100. Further, we develop a pseudo 2D elastic waveform inversion method in the common midpoint (CMP) domain that jointly inverts early arrivals for QP and surface waves for QS. In this method, although the forward problem is in 1D, by applying 2D model regularization we obtain 2D QP and QS models through simultaneous inversion. A cross-gradient constraint between the QP and QS models is applied to ensure structural consistency of the 2D inversion results. We present synthetic examples and a real case study from an oil field in China.

  4. Hybrid multicore/vectorisation technique applied to the elastic wave equation on a staggered grid

    NASA Astrophysics Data System (ADS)

    Titarenko, Sofya; Hildyard, Mark

    2017-07-01

    In modern physics it has become common to find the solution of a problem by numerically solving a set of PDEs. Whether solving them on a finite difference grid or by a finite element approach, the main calculations are often applied to a stencil structure. In the last decade it has become usual to work with so-called big data problems, where calculations are very heavy and accelerators and modern architectures are widely used. Although CPU and GPU clusters are often used to solve such problems, parallelisation of any calculation ideally starts from single-processor optimisation. Unfortunately, it is impossible to vectorise a stencil-structured loop with high-level instructions. In this paper we suggest a new approach to rearranging the data structure which makes it possible to apply high-level vectorisation instructions to a stencil loop and which results in significant acceleration. The suggested method allows further acceleration if shared memory APIs are used. We show the effectiveness of the method by applying it to an elastic wave propagation problem on a finite difference grid. We have chosen Intel architecture for the test problem and OpenMP (Open Multi-Processing), since they are extensively used in many applications.

  5. Measuring inequality: tools and an illustration.

    PubMed

    Williams, Ruth F G; Doessel, D P

    2006-05-22

    This paper examines an aspect of the problem of measuring inequality in health services. The measures that are commonly applied can be misleading because such measures obscure the difficulty in obtaining a complete ranking of distributions. The nature of the social welfare function underlying these measures is important. The overall objective is to demonstrate that varying implications for the welfare of society result from inequality measures. Various tools for measuring a distribution are applied to some illustrative data on four distributions about mental health services. Although these data refer to this one aspect of health, the exercise is of broader relevance than mental health. The summary measures of dispersion conventionally used in empirical work are applied to the data here, such as the standard deviation, the coefficient of variation, the relative mean deviation and the Gini coefficient. Other, less commonly used measures also are applied, such as Theil's Index of Entropy and Atkinson's Measure (using two differing assumptions about the inequality aversion parameter). Lorenz curves are also drawn for these distributions. Distributions are shown to have differing rankings (in terms of which is more equal than another), depending on which measure is applied. The scope and content of the literature from the past decade about health inequalities and inequities suggest that the economic literature from the past 100 years about inequality and inequity may have been overlooked, generally speaking, in the health inequalities and inequity literature. An understanding of economic theory and economic method, partly introduced in this article, is helpful in analysing health inequality and inequity.
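
    For reference, minimal implementations of three of the measures discussed, using the standard textbook formulas and applied to a vector x of per-person (or per-area) service use:

      import numpy as np

      def gini(x):
          x = np.sort(np.asarray(x, dtype=float))
          n = len(x)
          return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

      def theil(x):                                    # Theil's entropy index
          s = np.asarray(x, dtype=float) / np.mean(x)
          return np.mean(s * np.log(s))                # requires strictly positive x

      def atkinson(x, eps=0.5):                        # eps: inequality aversion parameter
          x = np.asarray(x, dtype=float)
          if eps == 1:
              return 1 - np.exp(np.mean(np.log(x))) / x.mean()
          return 1 - np.mean(x ** (1 - eps)) ** (1 / (1 - eps)) / x.mean()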

  6. Path Integral Monte Carlo Simulations of Warm Dense Matter and Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Militzer, Burkhard

    2018-01-13

    New path integral Monte Carlo simulation (PIMC) techniques will be developed and applied to derive the equation of state (EOS) for the regime of warm dense matter and dense plasmas where existing first-principles methods cannot be applied. While standard density functional theory has been used to accurately predict the structure of many solids and liquids up to temperatures on the order of 10,000 K, this method is not applicable at much higher temperature where electronic excitations become important, because the number of partially occupied electronic orbitals reaches intractably large numbers and, more importantly, the use of zero-temperature exchange-correlation functionals introduces an uncontrolled approximation. Here we focus on PIMC methods that become more and more efficient with increasing temperatures and still include all electronic correlation effects. In this approach, electronic excitations increase the efficiency rather than reduce it. While it has commonly been assumed such methods can only be applied to elements without core electrons like hydrogen and helium, we recently showed how to extend PIMC to heavier elements by performing the first PIMC simulations of carbon and water plasmas [Driver, Militzer, Phys. Rev. Lett. 108 (2012) 115502]. Here we propose to continue this important development to extend the reach of PIMC simulations to yet heavier elements and also lower temperatures. The goal is to provide a robust first-principles simulation method that can accurately and efficiently study materials with excited electrons at solid-state densities in order to access parts of the phase diagram such as the regime of warm dense matter and plasmas where so far only more approximate, semi-analytical methods could be applied.

  7. Discriminative graph embedding for label propagation.

    PubMed

    Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2011-09-01

    In many applications, the available information is encoded in graph structures. This is a common situation in biological networks, social networks, web communities and document citation networks. We investigate the problem of classifying nodes' labels on a similarity graph given only the graph structure on the nodes. Conventional machine learning methods usually require data to reside in some Euclidean space or to have a kernel representation. Applying these methods to nodes on graphs requires embedding the graphs into such spaces. By embedding and then learning the nodes on graphs, most methods are either not flexible with respect to different learning objectives or not efficient enough for large-scale applications. We propose a method to embed a graph into a feature space for a discriminative purpose. Our idea is to include label information in the embedding process, making the space representation tailored to the task. We design embedding objective functions such that the subsequent learning formulations become spectral transforms. We then reformulate these spectral transforms as multiple kernel learning problems. Our method, while being tailored to discriminative tasks, is efficient and can scale to massive data sets. We show the need for discriminative embedding on some simulations. Applied to biological network problems, our method is shown to outperform baselines.

  8. Historical demography of common carp estimated from individuals collected from various parts of the world using the pairwise sequentially markovian coalescent approach.

    PubMed

    Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang

    2018-04-01

    The inference of the historical demography of a species is helpful for understanding the species' differentiation and its population dynamics. However, such inference has previously been difficult due to the lack of proper analytical methods and the availability of genetic data. A recently developed method called the Pairwise Sequentially Markovian Coalescent (PSMC) offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in the population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity in East Asia and Europe, suggesting a separation of the European and Asian clades before the last glacial maximum. The North American clade, which is an invasive population, shared a similar demographic history with those from Europe, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important for elucidating the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of the common carp across the world.

  9. Thermal imaging application for behavior study of chosen nocturnal animals

    NASA Astrophysics Data System (ADS)

    Pregowski, Piotr; Owadowska, Edyta; Pietrzak, Jan

    2004-04-01

    This paper presents preliminary results of a project undertaken to verify the hypothesis that small nocturnal rodents use common paths which form a shared, rather stable system for fast movement. This report concentrates on the results of combining the very good detection capabilities of modern IR thermal cameras with newly developed software. The final results offered by this method include thermal movies and single synthetic graphic images of the paths traced during a few minutes or hours of observation, as well as detailed numerical data (of the ".txt" type) about chosen detected events. Although it is too early to say that the method will allow us to answer all ecological questions, we can say that we have developed a new, valuable tool for the next steps of our project. We expect that this method will enable us to solve important ecological problems in the study of nocturnal animals. The monitored area can be enlarged by using a few thermal imagers or IR thermographic cameras simultaneously. The presented method can also be applied to other uses, even ones distant from those presented here, e.g. the detection of ecological corridors.

  10. Physical methods for investigating structural colours in biological systems

    PubMed Central

    Vukusic, P.; Stavenga, D.G.

    2009-01-01

    Many biological systems are known to use structural colour effects to generate aspects of their appearance and visibility. The study of these phenomena has informed an eclectic group of fields ranging, for example, from evolutionary processes in behavioural biology to micro-optical devices in technologically engineered systems. However, biological photonic systems are invariably structurally and often compositionally more elaborate than most synthetically fabricated photonic systems. For this reason, an appropriate gamut of physical methods and investigative techniques must be applied correctly so that the systems' photonic behaviour may be appropriately understood. Here, we survey a broad range of the most commonly implemented, successfully used and recently innovated physical methods. We discuss the costs and benefits of various spectrometric methods and instruments, namely scatterometers, microspectrophotometers, fibre-optic-connected photodiode array spectrometers and integrating spheres. We then discuss the role of the materials' refractive index and several of the more commonly used theoretical approaches. Finally, we describe the recent developments in the research field of photonic crystals and the implications for the further study of structural coloration in animals. PMID:19158009

  11. Is a multivariate consensus representation of genetic relationships among populations always meaningful?

    PubMed Central

    Moazami-Goudarzi, K; Laloë, D

    2002-01-01

    To determine the relationships among closely related populations or species, two methods are commonly used in the literature: phylogenetic reconstruction or multivariate analysis. The aim of this article is to assess the reliability of multivariate analysis. We describe a method that is based on principal component analysis and Mantel correlations, using a two-step process: The first step consists of a single-marker analysis and the second step tests if each marker reveals the same typology concerning population differentiation. We conclude that if single markers are not congruent, the compromise structure is not meaningful. Our model is not based on any particular mutation process and it can be applied to most of the commonly used genetic markers. This method is also useful to determine the contribution of each marker to the typology of populations. We test whether our method is efficient with two real data sets based on microsatellite markers. Our analysis suggests that for closely related populations, it is not always possible to accept the hypothesis that an increase in the number of markers will increase the reliability of the typology analysis. PMID:12242255
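
    The congruence test in the second step can be sketched as a Mantel correlation between the inter-population distance matrices obtained from two single markers, with a permutation p-value; the matrix names and permutation count are assumptions for illustration.

      import numpy as np

      def mantel(D1, D2, n_perm=999, seed=0):
          iu = np.triu_indices_from(D1, k=1)            # upper-triangle distances
          r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_perm):
              p = rng.permutation(len(D1))              # permute population labels
              hits += np.corrcoef(D1[iu], D2[p][:, p][iu])[0, 1] >= r_obs
          return r_obs, (hits + 1) / (n_perm + 1)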

  12. Determination of eight artificial sweeteners and common Stevia rebaudiana glycosides in non-alcoholic and alcoholic beverages by reversed-phase liquid chromatography coupled with tandem mass spectrometry.

    PubMed

    Kubica, Paweł; Namieśnik, Jacek; Wasik, Andrzej

    2015-02-01

    The method for the determination of acesulfame-K, saccharine, cyclamate, aspartame, sucralose, alitame, neohesperidin dihydrochalcone, neotame and five common steviol glycosides (rebaudioside A, rebaudioside C, steviol, steviolbioside and stevioside) in soft and alcoholic beverages was developed using high-performance liquid chromatography and tandem mass spectrometry with electrospray ionisation (HPLC-ESI-MS/MS). To the best of our knowledge, this is the first work that presents an HPLC-ESI-MS/MS method which allows for the simultaneous determination of all EU-authorised high-potency sweeteners (thaumatin being the only exception) in one analytical run. The minimalistic sample preparation procedure consisted of only two operations; dilution and centrifugation. Linearity, limits of detection and quantitation, repeatability, and trueness of the method were evaluated. The obtained recoveries at three tested concentration levels varied from 97.0 to 105.7%, with relative standard deviations lower than 4.1%. The proposed method was successfully applied for the determination of sweeteners in 24 samples of different soft and alcoholic drinks.

  13. Developing a model for the adequate description of electronic communication in hospitals.

    PubMed

    Saboor, Samrend; Ammenwerth, Elske

    2011-01-01

    Adequate information and communication systems (ICT) can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.

  14. Development of an oximeter for neurology

    NASA Astrophysics Data System (ADS)

    Aleinik, A.; Serikbekova, Z.; Zhukova, N.; Zhukova, I.; Nikitina, M.

    2016-06-01

    Cerebral desaturation can occur during surgical manipulation even while other parameters vary insignificantly, and prolonged intervals of cerebral anoxia can cause serious damage to the nervous system. A commonly used method for measuring cerebral blood flow relies on invasive catheters. Other techniques include single photon emission computed tomography (SPECT), positron emission tomography (PET) and magnetic resonance imaging (MRI). Tomographic methods frequently require isotope administration, which may result in anaphylactic reactions to contrast media and associated nerve diseases. Moreover, the high cost and the need for continuous monitoring make it difficult to apply these techniques in clinical practice. Cerebral oximetry is a method for measuring oxygen saturation using infrared spectrometry, and reflectance pulse oximetry can also detect sudden changes in sympathetic tone. For this purpose a reflectance pulse oximeter for use in neurology was developed. A reflectance oximeter has a definite advantage in that it can be used to measure oxygen saturation in any part of the body. Preliminary results indicate that the device has good resolution and high reliability, and the applied circuit design improves the device characteristics compared with existing ones.

  15. A microwave-mediated saponification of galactosylceramide and galactosylceramide I3-sulfate and identification of their lyso-compounds by delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry.

    PubMed

    Taketomi, T; Hara, A; Uemura, K; Kurahashi, H; Sugiyama, E

    1996-07-16

    Small amounts of galactosylceramide (cerebroside) and galactosylceramide I3-sulfate (sulfatide) obtained from porcine spinal cord and equine kidney were deacylated by a rapid method of microwave-mediated saponification to prepare their lyso-compounds. Mass spectra of their protonated or deprotonated molecular ion peaks were obtained with a recently developed delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometer with a reflector detector in positive or negative ion mode. The long chain bases of lysocerebroside and lysosulfatide differed between porcine spinal cord and equine kidney but were similar to each other within the same organ, suggesting a common synthetic pathway. Notably, the new rapid method can be applied to the deacylation of both cerebroside and sulfatide, in contrast to our classical method, which could be applied to cerebroside but not to sulfatide.

  16. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion or extended to other microelectromechanical systems. PMID:19865512

  17. A line transect model for aerial surveys

    USGS Publications Warehouse

    Quang, Pham Xuan; Lanctot, Richard B.

    1991-01-01

    We employ a line transect method to estimate the density of the common and Pacific loon in the Yukon Flats National Wildlife Refuge from aerial survey data. Line transect methods have the advantage of automatically taking into account "visibility bias" due to differences in the detectability of animals at different distances from the transect line. However, line transect methods must overcome two difficulties when applied to aerial surveys: sighting distances are recorded inaccurately because of the high travel speed, so that in fact only a few reliable distance-class counts are available, and a blind strip beneath the aircraft hides part of the transect from the observers. We propose a unimodal detection function that provides an estimate of the effective area lost due to the blind strip, under the assumption that a line of perfect detection exists parallel to the transect line. The unimodal detection function can also be applied when a blind strip is absent, and in certain instances when the maximum probability of detection is less than 100%. A simple bootstrap procedure to estimate the standard error is illustrated. Finally, we present results from a small set of Monte Carlo experiments.
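
    As an illustration of the bootstrap step, the sketch below estimates density from perpendicular sighting distances and resamples them to obtain a standard error; a half-normal detection function is used here as a stand-in for the unimodal function proposed in the paper, and distances are resampled individually rather than by transect.

      import numpy as np

      def density(distances, total_line_length):
          sigma = np.sqrt(np.mean(np.square(distances)))    # half-normal fit
          mu = sigma * np.sqrt(np.pi / 2)                    # effective strip half-width
          return len(distances) / (2 * mu * total_line_length)

      def bootstrap_se(distances, total_line_length, n_boot=1000, seed=0):
          rng = np.random.default_rng(seed)
          reps = [density(rng.choice(distances, size=len(distances), replace=True),
                          total_line_length)
                  for _ in range(n_boot)]
          return np.std(reps, ddof=1)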

  18. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion has allowed supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to achieve throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they apply the same computation to a large set of data where each result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  19. Detection of low numbers of microplastics in North Sea fish using strict quality assurance criteria.

    PubMed

    Hermsen, Enya; Pompe, Renske; Besseling, Ellen; Koelmans, Albert A

    2017-09-15

    We investigated 400 individual fish of four North Sea species: Atlantic Herring, Sprat, Common Dab, and Whiting on ingestion of >20μm microplastic. Strict quality assurance criteria were followed in order to control contamination during the study. Two plastic particles were found in only 1 (a Sprat) out of 400 individuals (0.25%, with a 95% confidence interval of 0.09-1.1%). The particles were identified to consist of polymethylmethacrylate (PMMA) through FTIR spectroscopy. No contamination occurred during the study, showing the method applied to be suitable for microplastic ingestion studies in biota. We discuss the low particle count for North Sea fish with those in other studies and suggest a relation between reported particle count and degree of quality assurance applied. Microplastic ingestion by fish may be less common than thought initially, with low incidence shown in this study, and other studies adhering to strict quality assurance criteria. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Methods and apparatuses using filter banks for multi-carrier spread spectrum signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2017-01-31

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.

  1. Methods and apparatuses using filter banks for multi-carrier spread spectrum signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A.

    2016-06-14

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
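
    A rough sketch of the transmit-side idea in these two related records: one data symbol is copied onto evenly spaced subcarriers, shaped by a common prototype filter, and scaled so that each carrier sits below the noise floor. The prototype filter, carrier count, and gain value are placeholders, not values from the patents.

      import numpy as np

      def spread_symbol(symbol, n_carriers=16, samples=64, gain=1e-3):
          proto = np.hanning(samples)                   # common pulse-shaping filter
          shaped = symbol * proto                       # one shaped symbol period
          t = np.arange(samples)
          carriers = [np.exp(2j * np.pi * k * t / n_carriers)
                      for k in range(n_carriers)]       # evenly spaced subcarriers
          return gain * np.sum([shaped * c for c in carriers], axis=0)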

  2. Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described in the literature, one common issue is interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented in the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.

  3. Applications of machine learning and data mining methods to detect associations of rare and common variants with complex traits.

    PubMed

    Lu, Ake Tzu-Hui; Austin, Erin; Bonner, Ashley; Huang, Hsin-Hsiung; Cantor, Rita M

    2014-09-01

    Machine learning methods (MLMs), designed to develop models using high-dimensional predictors, have been used to analyze genome-wide genetic and genomic data to predict risks for complex traits. We summarize the results from six contributions to our Genetic Analysis Workshop 18 working group; these investigators applied MLMs and data mining to analyses of rare and common genetic variants measured in pedigrees. To develop risk profiles, group members analyzed blood pressure traits along with single-nucleotide polymorphisms and rare variant genotypes derived from sequence and imputation analyses in large Mexican American pedigrees. Supervised MLMs included penalized regression with varying penalties, support vector machines, and permanental classification. Unsupervised MLMs included sparse principal components analysis and sparse graphical models. Entropy-based components analyses were also used to mine these data. None of the investigators fully capitalized on the genetic information provided by the complete pedigrees. Their approaches either corrected for the nonindependence of the individuals within the pedigrees or analyzed only those who were independent. Some methods allowed for covariate adjustment, whereas others did not. We evaluated these methods using a variety of metrics. Four contributors conducted primary analyses on the real data, and the other two research groups used the simulated data with and without knowledge of the underlying simulation model. One group used the answers to the simulated data to assess power and type I errors. Although the MLMs applied were substantially different, each research group concluded that MLMs have advantages over standard statistical approaches with these high-dimensional data. © 2014 WILEY PERIODICALS, INC.

  4. Automatic control of finite element models for temperature-controlled radiofrequency ablation

    PubMed Central

    Haemmerich, Dieter; Webster, John G

    2005-01-01

    Background The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation a common control mode is temperature-controlled mode. Commercial FEM packages don't support automating temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. Methods We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and controls the applied voltage to keep temperature constant. We created a closed loop system consisting of a FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed loop system simulation. Results We present results of a temperature controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled applied voltage in the FEM model to obtain, and keep electrodes at target temperature of 100°C. The closed loop system simulation output closely correlated with the FEM model, and allowed us to optimize control parameters. Discussion The closed loop control of the FEM model allowed us to implement temperature controlled RF ablation with minimal user input. PMID:16018811
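
    The closed loop described above can be illustrated with a minimal sketch in Python, assuming a toy first-order thermal model in place of the finite element solver; the gains and plant constants are invented for illustration only:

        # Sketch: a PI controller adjusts the applied voltage after each step
        # to hold a (toy) electrode-tip temperature at 100 degrees C.
        def simulate_pi_control(target=100.0, kp=0.5, ki=0.05, steps=200, dt=1.0):
            temp, integral = 37.0, 0.0
            history = []
            for _ in range(steps):
                error = target - temp
                integral += error * dt
                voltage = kp * error + ki * integral           # PI control law
                # toy plant: Joule heating ~ V^2, cooling toward body temperature
                temp += dt * (0.02 * voltage ** 2 - 0.1 * (temp - 37.0))
                history.append(temp)
            return history

        temps = simulate_pi_control()
        print(round(temps[-1], 1))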

  5. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.

  6. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs are synthesized for each example problem. The successive loop closure, optimal control, and constrained optimization synthesis methods are compared in the context of the two design problems.

  7. Equating Two Forms of a Criterion-Referenced Test by Using Norm Referenced Data: An Illustration of Two Methods.

    ERIC Educational Resources Information Center

    Garcia-Quintana, Roan A.; Johnson, Lynne M.

    Three different computational procedures for equating two forms of a test were applied to a pair of mathematics tests to compare the results of the three procedures. The tests that were being equated were two forms of the SRA Mastery Mathematics Tests. The common, linking test used for equating was the Comprehensive Tests of Basic Skills, Form S,…

  8. PubMed Central

    Worrall, Graham; Chambers, Larry W.

    1990-01-01

    With the increasing expenditure on health care programs for seniors, there is an urgent need to evaluate such programs. The Measurement Iterative Loop is a tool that can provide both health administrators and health researchers with a method of evaluation of existing programs and identification of gaps in knowledge, and forms a rational basis for health-care policy decisions. In this article, the Loop is applied to one common problem of the elderly: dementia. PMID:21233998

  9. Laser isotope separation of erbium and other isotopes

    DOEpatents

    Haynam, C.A.; Worden, E.F.

    1995-08-22

    Laser isotope separation is accomplished using at least two photoionization pathways of an isotope simultaneously, where each pathway comprises two or more transition steps. This separation method has been applied to the selective photoionization of erbium isotopes, particularly for the enrichment of ¹⁶⁷Er. The hyperfine structure of ¹⁶⁷Er was used to find two three-step photoionization pathways having a common upper energy level. 3 figs.

  10. The Effect of Multispectral Image Fusion Enhancement on Human Efficiency

    DTIC Science & Technology

    2017-03-20

    human visual system by applying a technique commonly used in visual perception research: ideal observer analysis. Using this approach, we establish...applications, analytic techniques, and procedural methods used across studies. This paper uses ideal observer analysis to establish a framework that allows...augmented similarly to incorporate research involving more complex stimulus content. Additionally, the ideal observer can be adapted for a number of

  11. How to Correct Teaching Methods That Favour Plagiarism: Recommendations from Teachers and Students in a Spanish Language Distance Education University

    ERIC Educational Resources Information Center

    Arce Espinoza, Lourdes; Monge Nájera, Julián

    2015-01-01

    The presentation of the intellectual work of others as their own by students is believed to be common worldwide. Punishments and detection software have failed to solve the problem and have important limitations themselves. To go to the root of the problem, we applied an online questionnaire to 344 university students and their 13 teachers. Our…

  12. How robust are burn severity indices when applied in a new region? Evaluation of alternate field-based and remote-sensing methods

    Treesearch

    C. Alina Cansler; Donald McKenzie

    2012-01-01

    Remotely sensed indices of burn severity are now commonly used by researchers and land managers to assess fire effects, but their relationship to field-based assessments of burn severity has been evaluated only in a few ecosystems. This analysis illustrates two cases in which methodological refinements to field-based and remotely sensed indices of burn severity...

  13. The retrospective chart review: important methodological considerations.

    PubMed

    Vassar, Matt; Holzmann, Matthew

    2013-01-01

    In this paper, we review and discuss ten common methodological mistakes found in retrospective chart reviews. The retrospective chart review is a widely applicable research methodology that can be used by healthcare disciplines as a means to direct subsequent prospective investigations. In many cases in this review, we have also provided suggestions or accessible resources that researchers can apply as a "best practices" guide when planning, conducting, or reviewing this investigative method.

  14. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.

    PubMed

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-04

    With the development of the vehicle industry, controlling stability has become more and more important. Techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are usually applied to measure vehicle stability parameters by fusing data from the two sensor systems. Although a Kalman filter requires prior knowledge of the model parameters, it is usually used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment were conducted to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
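
    As a hedged, stripped-down illustration of the fusion principle (a scalar filter rather than the paper's two-stage design; all rates and noise values are invented), a GPS/INS blend can be sketched as:

        import numpy as np

        # Sketch: predict heading from a high-rate INS yaw-rate signal and
        # correct it with a lower-rate GPS heading via a scalar Kalman filter.
        def fuse_heading(yaw_rates, gps_headings, dt=0.01, q=1e-4, r=0.05):
            x, p = 0.0, 1.0                      # heading estimate and its variance
            out = []
            for rate, z in zip(yaw_rates, gps_headings):
                x += rate * dt                   # predict with INS yaw rate
                p += q
                if not np.isnan(z):              # GPS sample available this step
                    k = p / (p + r)
                    x += k * (z - x)             # correct with GPS heading
                    p *= 1.0 - k
                out.append(x)
            return np.array(out)

        rates = np.full(500, 0.1)                               # rad/s
        truth = np.cumsum(rates * 0.01)
        gps = np.full(500, np.nan)
        gps[::100] = truth[::100]                               # GPS at 1/100 the INS rate
        print(round(float(fuse_heading(rates, gps)[-1]), 3))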

  15. Missing Data in Clinical Studies: Issues and Methods

    PubMed Central

    Ibrahim, Joseph G.; Chu, Haitao; Chen, Ming-Hui

    2012-01-01

    Missing data are a prevailing problem in any type of data analyses. A participant variable is considered missing if the value of the variable (outcome or covariate) for the participant is not observed. In this article, various issues in analyzing studies with missing data are discussed. Particularly, we focus on missing response and/or covariate data for studies with discrete, continuous, or time-to-event end points in which generalized linear models, models for longitudinal data such as generalized linear mixed effects models, or Cox regression models are used. We discuss various classifications of missing data that may arise in a study and demonstrate in several situations that the commonly used method of throwing out all participants with any missing data may lead to incorrect results and conclusions. The methods described are applied to data from an Eastern Cooperative Oncology Group phase II clinical trial of liver cancer and a phase III clinical trial of advanced non–small-cell lung cancer. Although the main area of application discussed here is cancer, the issues and methods we discuss apply to any type of study. PMID:22649133

  16. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences first must be specially mapped into numerical sequences. Thus, effective numerical mappings of DNA sequences play key roles in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC) in which the numerical values are optimized by simulation annealing to maximize the 3-periodicity signal to noise ratio (SNR). The optimized GCC representation is then applied in exon and intron prediction by Short-Time Fourier Transform (STFT) approach. The results show the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
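
    The central quantity, the 3-periodicity signal-to-noise ratio of a numerically mapped sequence, is easy to sketch; the per-base values below are illustrative placeholders, not the optimized GCC values from the paper:

        import numpy as np

        # Sketch: map a DNA string to numbers and measure the power at the
        # period-3 frequency relative to the mean spectral power.
        def period3_snr(seq, mapping=None):
            mapping = mapping or {'A': 0.1, 'C': 0.3, 'G': 0.7, 'T': 0.9}  # assumed values
            x = np.array([mapping[b] for b in seq], dtype=float)
            x -= x.mean()
            power = np.abs(np.fft.fft(x)) ** 2
            k3 = len(x) // 3                       # frequency bin at period 3
            return power[k3] / power[1:len(x) // 2].mean()

        print(round(float(period3_snr("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")), 2))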

  17. Using Grounded Theory Method to Capture and Analyze Health Care Experiences.

    PubMed

    Foley, Geraldine; Timonen, Virpi

    2015-08-01

    Grounded theory (GT) is an established qualitative research method, but few papers have encapsulated the benefits, limits, and basic tenets of doing GT research on user and provider experiences of health care services. GT can be used to guide the entire study method, or it can be applied at the data analysis stage only. We summarize key components of GT and common GT procedures used by qualitative researchers in health care research. We draw on our experience of conducting a GT study on amyotrophic lateral sclerosis patients' experiences of health care services. We discuss why some approaches in GT research may work better than others, particularly when the focus of study is hard-to-reach population groups. We highlight the flexibility of procedures in GT to build theory about how people engage with health care services. GT enables researchers to capture and understand health care experiences. GT methods are particularly valuable when the topic of interest has not previously been studied. GT can be applied to bring structure and rigor to the analysis of qualitative data. © Health Research and Educational Trust.

  18. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate perturbed conditions with greater model development needed.

  19. A method to determine agro-climatic zones based on correlation and cluster analyses

    NASA Astrophysics Data System (ADS)

    Borges Valeriano, Taynara Tuany; de Souza Rolim, Glauco; de Oliveira Aparecido, Lucas Eduardo

    2017-12-01

    Determining agro-climatic zones (ACZs) is traditionally made by cross-comparing meteorological elements such as air temperature, rainfall, and water deficit (DEF). This study proposes a new method based on correlations between monthly DEFs during the crop cycle and annual yield and performs a multivariate cluster analysis on these correlations. This `correlation method' was applied to all municipalities in the state of São Paulo to determine ACZs for coffee plantations. A traditional ACZ method for coffee, which is based on temperature and DEF ranges (Evangelista et al.; RBEAA, 6:445-452, 2002), was applied to the study area to compare against the correlation method. The traditional ACZ classified the "Alta Mogina," "Média Mogiana," and "Garça and Marília" regions as traditional coffee regions that were either suitable or even restricted for coffee plantations. These traditional regions have produced coffee since 1800 and should not be classified as restricted. The correlation method classified those areas as high-producing regions and expanded them into other areas. The proposed method is innovative, because it is more detailed than common ACZ methods. Each developmental crop phase was analyzed based on correlations between the monthly DEF and yield, improving the importance of crop physiology in relation to climate.

  20. Improvement of Microtremor Data Filtering and Processing Methods Used in Determining the Fundamental Frequency of Urban Areas

    NASA Astrophysics Data System (ADS)

    Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra

    2015-10-01

    The manner in which microtremor data are collected and filtered, and the method used for processing, have a considerable effect on the accuracy of estimation of dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window. Then the obtained signal is used to remove the ranges of data affected by perturbations from the original data. Additionally, to determine the fundamental frequency of a site, this study has proposed a statistical characteristics-based method. Specifically, statistical characteristics, such as the probability density graph and the average and the standard deviation of all the frequencies corresponding to the maximum peaks in the H/V spectra of all data windows, are used to differentiate the real peaks from the false peaks resulting from perturbations. The methods have been applied to the data recorded for the City of Meybod in central Iran. Experimental results show that the applied methods are able to successfully reduce the effects of extensive local perturbations on microtremor data and eventually to estimate the fundamental frequency more accurately compared to other common methods.
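
    The running-variance screen lends itself to a compact sketch (window length and rejection threshold below are assumptions, not the values used for the Meybod data):

        import numpy as np

        # Sketch: compute running variance in a sliding window and flag
        # samples whose variance exceeds a threshold as local perturbations.
        def running_variance(x, window=200):
            x = np.asarray(x, dtype=float)
            kernel = np.ones(window) / window
            mean = np.convolve(x, kernel, mode='same')
            mean_sq = np.convolve(x ** 2, kernel, mode='same')
            return mean_sq - mean ** 2

        rng = np.random.default_rng(0)
        trace = rng.normal(0, 1, 5000)
        trace[2000:2200] += rng.normal(0, 10, 200)      # simulated local perturbation
        var = running_variance(trace)
        keep = var < 5 * np.median(var)                 # retain only quiet sections
        print(int(keep.sum()), "of", len(trace), "samples retained")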

  1. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
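
    The intuition behind testing for inflated zeros can be shown with a quick check that compares observed and Poisson-expected zero counts; this is only an informal sketch on simulated data, not the test developed in the paper:

        import numpy as np

        # Sketch: under a fitted Poisson model the expected number of zeros is
        # n * exp(-lambda); a large observed excess hints at zero inflation.
        def zero_counts(y):
            lam = y.mean()                          # Poisson MLE of the rate
            return int(np.sum(y == 0)), len(y) * np.exp(-lam)

        rng = np.random.default_rng(1)
        y = np.where(rng.random(1000) < 0.3, 0, rng.poisson(2.0, 1000))  # zero-inflated sample
        observed, expected = zero_counts(y)
        print(f"observed zeros: {observed}, expected under Poisson: {expected:.1f}")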

  2. Understanding radio polarimetry. V. Making matrix self-calibration work: processing of a simulated observation

    NASA Astrophysics Data System (ADS)

    Hamaker, J. P.

    2006-09-01

    Context: This is Paper V in a series on polarimetric aperture synthesis based on the algebra of 2×2 matrices. Aims: It validates the matrix self-calibration theory of the preceding Paper IV and outlines the algorithmic methods that had to be developed for its application. Methods: New avenues of polarimetric self-calibration opened up in Paper IV are explored by processing a simulated observation. To focus on the polarimetric issues, it is set up so as to sidestep some of the common complications of aperture synthesis, yet properly represent physical conditions. In addition to a representative collection of observing errors, the simulated instrument includes strongly varying Faraday rotation and antennas with unequal feeds. The selfcal procedure is described in detail, including aspects in which it differs from the scalar case, and its effects are demonstrated with a number of intermediate image results. Results: The simulation's outcome is in full agreement with the theory. The nonlinear matrix equations for instrumental parameters are readily solved by iteration; a convergence problem is easily remedied with a new ancillary algorithm. Instrumental effects are cleanly separated from source properties without reference to changes in parallactic rotation during the observation. Polarimetric images of high purity and dynamic range result. As theory predicts, polarimetric errors that are common to all sources inevitably remain; prior knowledge of the statistics of linear and circular polarization in a typical observed field can be applied to eliminate most of them. Conclusions: The paper conclusively demonstrates that matrix selfcal per se is a viable method that may foster substantial advancement in the art of radio polarimetry. For its application in real observations, a number of issues must be resolved that matrix selfcal has in common with its scalar sibling, such as the treatment of extended sources and the familiar sampling and aliasing problems. The close analogy between scalar interferometry and its matrix-based generalisation suggests that one may apply well-developed methods of scalar interferometry. Marrying these methods to those of this paper will require a significant investment in new software. Two such developments are known to be foreseen or underway.

  3. A grid-doubling finite-element technique for calculating dynamic three-dimensional spontaneous rupture on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael

    2009-01-01

    We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.

  4. Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images.

    PubMed

    Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel

    2017-01-01

    The World Health Organization estimates that 300 million people have asthma, 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and, according to WHO, COPD will become the third major cause of death worldwide in 2030. Computational Vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique named 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented that is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: the automatic 3D Region Growing, the level-set algorithm based on coherent propagation and the semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest the 3D ACACM had an average F-measure of 99.22%, revealing its superiority and competency to segment lungs in CT images. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Development of SCAR (sequence-characterized amplified region) markers as a complementary tool for identification of ginger (Zingiber officinale Roscoe) from crude drugs and multicomponent formulations.

    PubMed

    Chavan, Preeti; Warude, Dnyaneshwar; Joshi, Kalpana; Patwardhan, Bhushan

    2008-05-01

    Zingiber officinale Roscoe (common or culinary ginger) is an official drug in Ayurvedic, Indian herbal, Chinese, Japanese, African and British Pharmacopoeias. The objective of the present study was to develop DNA-based markers that can be applied for the identification and differentiation of the commercially important plant Z. officinale Roscoe from the closely related species Zingiber zerumbet (pinecone, bitter or 'shampoo' ginger) and Zingiber cassumunar [cassumunar or plai (Thai) ginger]. The rhizomes of the other two Zingiber species used in the present study are morphologically similar to that of Z. officinale Roscoe and can be used as its adulterants or contaminants. Various methods, including macroscopy, microscopy and chemoprofiling, have been reported for the quality control of crude ginger and its products. These methods are reported to have limitations in distinguishing Z. officinale from closely related species. Hence, newer complementary methods for correct identification of ginger are useful. In the present study, RAPD (random amplification of polymorphic DNA) analysis was used to identify putative species-specific amplicons for Z. officinale. These were further cloned and sequenced to develop SCAR (sequence-characterized amplified region) markers. The developed SCAR markers were tested in several non-Zingiber species commonly used in ginger-containing formulations. One of the markers, P3, was found to be specific for Z. officinale and was successfully applied for detection of Z. officinale from Trikatu, a multicomponent formulation.

  6. Surface Enhanced Raman Spectroscopy (SERS) methods for endpoint and real-time quantification of miRNA assays

    NASA Astrophysics Data System (ADS)

    Restaino, Stephen M.; White, Ian M.

    2017-03-01

    Surface Enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single and multianalyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS generates a comparatively low financial and spatial footprint compared with common fluorescence based systems. Despite the advantages of SERS, it has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development but, most often, assay protocols are redesigned around the use of SERS as a quantification method and ultimately complicate existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, as miRNA concentrations commonly exist in relatively low concentrations, amplification methods (e.g. PCR) are therefore required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.

  7. Evaluation of clinical methods for peroneal muscle testing.

    PubMed

    Sarig-Bahat, Hilla; Krasovsky, Andrei; Sprecher, Elliot

    2013-03-01

    Manual muscle testing of the peroneal muscles is well accepted as a testing method in musculoskeletal physiotherapy for the assessment of the foot and ankle. The peroneus longus and brevis are primary evertors and secondary plantar flexors of the ankle joint. However, some international textbooks describe them as dorsi flexors, when instructing peroneal muscle testing. The identified variability raised a question whether these educational texts are reflected in the clinical field. The purposes of this study were to investigate what are the methods commonly used in the clinical field for peroneal muscle testing and to evaluate their compatibility with functional anatomy. A cross-sectional study was conducted, using an electronic questionnaire sent to 143 Israeli physiotherapists in the musculoskeletal field. The survey questioned on the anatomical location of manual resistance and the combination of motions resisted. Ninety-seven responses were received. The majority (69%) of respondents related correctly to the peronei as evertors, but asserted that resistance should be located over the dorsal aspect of the fifth metatarsus, thereby disregarding the peroneus longus. Moreover, 38% of the respondents described the peronei as dorsi flexors, rather than plantar flexors. Only 2% selected the correct method of resisting plantarflexion and eversion at the base of the first metatarsus. We consider this technique to be the most compatible with the anatomy of the peroneus longus and brevis. The Fisher-Freeman-Halton test indicated that there was a significant relationship between responses on the questions (P = 0.0253, 95% CI 0.0249-0.0257), thus justifying further correspondence analysis. The correspondence analysis found no clustering of the answers that were compatible with anatomical evidence and were applied in the correct technique, but did demonstrate a common error, resisting dorsiflexion rather than plantarflexion, which was in agreement with the described frequencies. Inconsistencies were identified between the instruction method commonly provided for peroneal muscle testing in textbook and the functional anatomy of these muscles. Results reflect the lack of accuracy in applying functional anatomy to peroneal testing. This may be due to limited use of peroneal muscle testing or to inadequate investigation of the existing evaluation methods and their validity. Accordingly, teaching materials and clinical methods used for this test should be re-evaluated. Further research should investigate the value of peroneal muscle testing in clinical ankle evaluation. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

    PubMed

    Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

    2015-03-01

    A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method that is commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble of trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICU). We show that these ensemble of trees methods outperform logistic regression in predicting mortality among babies treated in NICU, and provide a superior method of risk adjustment compared to logistic regression.
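
    The observed-to-expected comparison at the heart of risk adjustment can be sketched on simulated data, with a random forest standing in for the ensemble-of-trees models (in practice the expected probabilities would come from cross-validated fits rather than in-sample predictions):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Sketch: estimate each patient's outcome probability from case-mix
        # features, then compare each hospital's observed rate with the
        # expected rate averaged over its own patients (O/E ratio).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5))                       # patient case-mix features
        p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
        y = rng.binomial(1, p_true)
        hospital = rng.integers(0, 10, size=2000)            # hospital assignment

        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        p_hat = model.predict_proba(X)[:, 1]

        for h in range(3):                                   # O/E ratio for a few hospitals
            idx = hospital == h
            print(h, round(float(y[idx].mean() / p_hat[idx].mean()), 2))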

  9. Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method

    NASA Astrophysics Data System (ADS)

    Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.

    2017-11-01

    Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.

  10. IT-supported integrated care pathways for diabetes: A compilation and review of good practices.

    PubMed

    Vrijhoef, Hubertus Jm; de Belvis, Antonio Giulio; de la Calle, Matias; de Sabata, Maria Stella; Hauck, Bastian; Montante, Sabrina; Moritz, Annette; Pelizzola, Dario; Saraheimo, Markku; Guldemond, Nick A

    2017-06-01

    Integrated Care Pathways (ICPs) are a method for the mutual decision-making and organization of care for a well-defined group of patients during a well-defined period. The aim of a care pathway is to enhance the quality of care by improving patient outcomes, promoting patient safety, increasing patient satisfaction, and optimizing the use of resources. To describe this concept, different names are used, e.g. care pathways and integrated care pathways. Modern information technologies (IT) can support ICPs by enabling patient empowerment, better management, and the monitoring of care provided by multidisciplinary teams. This study analyses ICPs across Europe, identifying commonalities and success factors to establish good practices for IT-supported ICPs in diabetes care. A mixed-method approach was applied, combining desk research on 24 projects from the European Innovation Partnership on Active and Healthy Ageing (EIP on AHA) with follow-up interviews of project participants, and a non-systematic literature review. We applied a Delphi technique to select process and outcome indicators, derived from different literature sources which were compiled and applied for the identification of successful good practices. Desk research identified sixteen projects featuring IT-supported ICPs, mostly derived from the EIP on AHA, as good practices based on our criteria. Follow-up interviews were then conducted with representatives from 9 of the 16 projects to gather information not publicly available and understand how these projects were meeting the identified criteria. In parallel, the non-systematic literature review of 434 PubMed search results revealed a total of eight relevant projects. On the basis of the selected EIP on AHA project data and non-systematic literature review, no commonalities with regard to defined process or outcome indicators could be identified through our approach. Conversely, the research produced a heterogeneous picture in all aspects of the projects' indicators. Data from desk research and follow-up interviews partly lacked information on outcome and performance, which limited the comparison between practices. Applying a comprehensive set of indicators in a multi-method approach to assess the projects included in this research study did not reveal any obvious commonalities which might serve as a blueprint for future IT-supported ICP projects. Instead, an unexpected high degree of heterogeneity was observed, that may reflect diverse local implementation requirements e.g. specificities of the local healthcare system, local regulations, or preexisting structures used for the project setup. Improving the definition of and reporting on project outcomes could help advance research on and implementation of effective integrated care solutions for chronic disease management across Europe.

  11. Characterizing the evolution of climate networks

    NASA Astrophysics Data System (ADS)

    Tupikina, L.; Rehfeld, K.; Molkenthin, N.; Stolbova, V.; Marwan, N.; Kurths, J.

    2014-06-01

    Complex network theory has been successfully applied to understand the structural and functional topology of many dynamical systems from nature, society and technology. Many properties of these systems change over time, and, consequently, networks reconstructed from them will, too. However, although static and temporally changing networks have been studied extensively, methods to quantify their robustness as they evolve in time are lacking. In this paper we develop a theory to investigate how networks are changing within time based on the quantitative analysis of dissimilarities in the network structure. Our main result is the common component evolution function (CCEF) which characterizes network development over time. To test our approach we apply it to several model systems, Erdős-Rényi networks, analytically derived flow-based networks, and transient simulations from the START model for which we control the change of single parameters over time. Then we construct annual climate networks from NCEP/NCAR reanalysis data for the Asian monsoon domain for the time period of 1970-2011 CE and use the CCEF to characterize the temporal evolution in this region. While this real-world CCEF displays a high degree of network persistence over large time lags, there are distinct time periods when common links break down. This phasing of these events coincides with years of strong El Niño/Southern Oscillation phenomena, confirming previous studies. The proposed method can be applied for any type of evolving network where the link but not the node set is changing, and may be particularly useful to characterize nonstationary evolving systems using complex networks.
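
    The flavor of the CCEF can be conveyed by a simplified statistic, the fraction of links two snapshots share as a function of time lag; the definition below is a plain Jaccard overlap on simulated rewiring networks, not the exact CCEF of the paper:

        import numpy as np

        # Sketch: for networks on a fixed node set, measure how many links two
        # snapshots separated by a given lag have in common.
        def common_link_fraction(a, b):
            common = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return common / union if union else 1.0

        rng = np.random.default_rng(0)
        adj = rng.random((50, 50)) < 0.1
        snapshots = []
        for _ in range(20):                                  # evolve by flipping a few links
            adj = np.logical_xor(adj, rng.random((50, 50)) < 0.01)
            snapshots.append(adj.copy())

        for lag in (1, 5, 10):
            vals = [common_link_fraction(snapshots[t], snapshots[t + lag])
                    for t in range(len(snapshots) - lag)]
            print(lag, round(float(np.mean(vals)), 3))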

  12. Mass spectrometry in plant metabolomics strategies: from analytical platforms to data acquisition and processing.

    PubMed

    Ernst, Madeleine; Silva, Denise Brentan; Silva, Ricardo Roberto; Vêncio, Ricardo Z N; Lopes, Norberto Peporine

    2014-06-01

    Covering: up to 2013. Plant metabolomics is a relatively recent research field that has gained increasing interest in the past few years. Up to the present day numerous review articles and guide books on the subject have been published. This review article focuses on the current applications and limitations of the modern mass spectrometry techniques, especially in combination with electrospray ionisation (ESI), an ionisation method which is most commonly applied in metabolomics studies. As a possible alternative to ESI, perspectives on matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS) in metabolomics studies are introduced, a method which still is not widespread in the field. In metabolomics studies the results must always be interpreted in the context of the applied sampling procedures as well as data analysis. Different sampling strategies are introduced and the importance of data analysis is illustrated in the example of metabolic network modelling.

  13. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to get an improved estimate of the ERP. A new implementation of the dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment due to aligning the random alpha waves, explaining amplitude differences in latency differences, or the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of the dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
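
    A bare-bones dynamic time warping alignment conveys the core step; this sketch omits the trilinear filtering and derivative features the paper adds:

        import numpy as np

        # Sketch: classic DTW between one single trial and a reference so that
        # major peaks can be brought into register before averaging.
        def dtw_path(a, b):
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = (a[i - 1] - b[j - 1]) ** 2
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            i, j, path = n, m, []                 # backtrack the warping path
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
                i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
            return path[::-1]

        t = np.linspace(0, 1, 80)
        reference = np.exp(-((t - 0.5) / 0.05) ** 2)   # reference ERP-like peak
        trial = np.exp(-((t - 0.6) / 0.05) ** 2)       # same peak at a later latency
        print(len(dtw_path(reference, trial)))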

  14. Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring.

    PubMed

    Kallenberg, Michiel; Petersen, Kersten; Nielsen, Mads; Ng, Andrew Y; Pengfei Diao; Igel, Christian; Vachon, Celine M; Holland, Katharina; Winkel, Rikke Rass; Karssemeijer, Nico; Lillholm, Martin

    2016-05-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.

  15. Determination of Adsorption Equations for Chloro Derivatives of Aniline on Halloysite Adsorbents Using Inverse Liquid Chromatography.

    PubMed

    Słomkiewicz, Piotr M; Szczepanik, Beata; Garnuszek, Magdalena; Rogala, Paweł; Witkiewicz, Zygfryd

    2017-11-01

    Chloro derivatives of aniline are commonly used in the production of dyes, pharmaceuticals, and agricultural agents. They are toxic compounds with a large accumulation ability and low natural biodegradability. Halloysite is known as an efficient adsorbent of toxic compounds, such as phenols or herbicides, from wastewater. Inverse LC was applied to measure the adsorption of aniline and 2-chloroaniline (2-CA), 3-chloroaniline (3-CA), and 4-chloroaniline (4-CA) on halloysite adsorbents. A peak division (PD) method was used to determine a Langmuir equation in accordance with the adsorption measurement results. The values of adsorption equilibrium constants and enthalpy were determined and compared by breakthrough curve and PD methods. The physical sense of the calculated adsorption enthalpy values was checked by applying Boudart's entropy criteria. Of note, adsorption enthalpy values for halloysite adsorbents decreased in the following order: aniline > 4-CA > 2-CA > 3-CA.

  16. New approach in the evaluation of a fitness program at a worksite.

    PubMed

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, "neoclassical firm's problems," as a new approach. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
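
    The optimization itself is elementary once a production function is available; the cubic coefficients, cost per class, and willingness-to-pay below are invented purely for illustration and do not reproduce the study's estimates:

        import numpy as np

        # Sketch: choose the number of classes that maximizes
        # profit = willingness-to-pay x health effect - cost of classes.
        def optimal_classes(wtp_per_unit_effect, cost_per_class=150.0, max_classes=60):
            n = np.arange(1, max_classes + 1, dtype=float)
            effect = -0.001 * n ** 3 + 0.09 * n ** 2 + 0.5 * n   # assumed cubic production function
            profit = wtp_per_unit_effect * effect - cost_per_class * n
            return int(n[np.argmax(profit)])

        print(optimal_classes(wtp_per_unit_effect=100.0))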

  17. RAPD-PCR characterization of lactobacilli isolated from artisanal meat plants and traditional fermented sausages of Veneto region (Italy).

    PubMed

    Andrighetto, C; Zampese, L; Lombardi, A

    2001-07-01

    The study was carried out to evaluate the use of randomly amplified polymorphic DNA-polymerase chain reaction (RAPD-PCR) as a method for the identification of lactobacilli isolated from meat products. RAPD-PCR with primers M13 and D8635 was applied to the identification and intraspecific differentiation of 53 lactobacilli isolates originating from traditional fermented sausages and artisanal meat plants of the Veneto region (Italy). Most of the isolates were assigned to the species Lactobacillus sakei and Lact. curvatus; differentiation of groups of strains within the species was also possible. RAPD-PCR could be applied to the identification of lactobacilli species most commonly found in meat products. The method, which is easy and rapid to perform, could be useful for the study of the lactobacilli populations present in fermented sausages, and could help in the selection of candidate strains to use as starter cultures in meat fermentation.

  18. Simultaneous quantification of five major active components in capsules of the traditional Chinese medicine ‘Shu-Jin-Zhi-Tong’ by high performance liquid chromatography

    PubMed Central

    Yang, Xing-Xin; Zhang, Xiao-Xia; Chang, Rui-Miao; Wang, Yan-Wei; Li, Xiao-Ni

    2011-01-01

    A simple and reliable high performance liquid chromatography (HPLC) method has been developed for the simultaneous quantification of five major bioactive components in ‘Shu-Jin-Zhi-Tong’ capsules (SJZTC), for the purposes of quality control of this commonly prescribed traditional Chinese medicine. Under the optimum conditions, excellent separation was achieved, and the assay was fully validated in terms of linearity, precision, repeatability, stability and accuracy. The validated method was applied successfully to the determination of the five compounds in SJZTC samples from different production batches. The HPLC method can be used as a valid analytical method to evaluate the intrinsic quality of SJZTC. PMID:29403711

  19. THTM: A template matching algorithm based on HOG descriptor and two-stage matching

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao

    2018-04-01

    We propose a novel method for template matching named THTM, a template matching algorithm based on HOG (histogram of oriented gradients) descriptors and two-stage matching. We rely on the fast construction of HOG descriptors and on two-stage matching, which jointly lead to a high-accuracy matching approach. THTM pays close attention to the HOG descriptor and introduces a second matching stage, whereas traditional methods match only once. Our contribution is to apply HOG successfully to template matching and to present two-stage matching, which markedly improves matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it to other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
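
    In the same spirit, a two-stage HOG matcher can be sketched with off-the-shelf tools; this is not the authors' implementation, and the window size, stride, and shortlist length are arbitrary choices:

        import numpy as np
        from skimage.feature import hog

        # Sketch: stage 1 shortlists windows by HOG-descriptor distance on a
        # coarse grid; stage 2 re-scores the shortlist with normalized correlation.
        def match(image, template, coarse_step=8, shortlist=20):
            th, tw = template.shape
            t_hog = hog(template, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            t_norm = (template - template.mean()) / (template.std() + 1e-9)
            candidates = []
            for y in range(0, image.shape[0] - th, coarse_step):
                for x in range(0, image.shape[1] - tw, coarse_step):
                    w = image[y:y + th, x:x + tw]
                    w_hog = hog(w, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                    candidates.append((float(np.linalg.norm(w_hog - t_hog)), y, x))
            candidates.sort()
            best = None
            for _, y, x in candidates[:shortlist]:
                w = image[y:y + th, x:x + tw]
                w_norm = (w - w.mean()) / (w.std() + 1e-9)
                score = float((w_norm * t_norm).mean())
                if best is None or score > best[0]:
                    best = (score, y, x)
            return best

        rng = np.random.default_rng(0)
        scene = rng.random((128, 128))
        patch = scene[40:72, 56:88].copy()
        print(match(scene, patch))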

  20. Rapid detection of methanol in artisanal alcoholic beverages

    NASA Astrophysics Data System (ADS)

    de Goes, R. E.; Muller, M.; Fabris, J. L.

    2015-09-01

    In the industry of artisanal beverages, uncontrolled production processes may result in contaminated products with methanol, leading to risks for consumers. Owing to the similar odor of methanol and ethanol, as well as their common transparency, the distinction between them is a difficult task. Contamination may also occur deliberately due to the lower price of methanol when compared to ethanol. This paper describes a spectroscopic method for methanol detection in beverages based on Raman scattering and Principal Component Analysis. Associated with a refractometric assessment of the alcohol content, the method may be applied in field for a rapid detection of methanol presence.
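
    The chemometric step (Raman spectra projected onto principal components) can be sketched on simulated spectra; the band positions and widths below are rough placeholders for ethanol- and methanol-like features, not calibrated values:

        import numpy as np
        from sklearn.decomposition import PCA

        # Sketch: samples with an added methanol-like band separate from pure
        # ethanol-like samples along the first principal component.
        rng = np.random.default_rng(0)
        shift = np.linspace(600, 1200, 300)                      # Raman shift axis, cm^-1 (assumed)
        ethanol_band = np.exp(-((shift - 880) / 15) ** 2)
        methanol_band = np.exp(-((shift - 1030) / 15) ** 2)

        clean = ethanol_band + 0.02 * rng.normal(size=(20, 300))
        spiked = ethanol_band + 0.3 * methanol_band + 0.02 * rng.normal(size=(20, 300))
        spectra = np.vstack([clean, spiked])

        scores = PCA(n_components=2).fit_transform(spectra)
        print(round(float(scores[:20, 0].mean()), 2), round(float(scores[20:, 0].mean()), 2))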

  1. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Common Core Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the common core component of Applied Principles of Agribusiness and Natural Resources Occupations. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three…

  2. A method for the analysis of sugars in biological systems using reductive amination in combination with hydrophilic interaction chromatography and high resolution mass spectrometry.

    PubMed

    Bawazeer, Sami; Muhsen Ali, Ali; Alhawiti, Aliyah; Khalaf, Abedawn; Gibson, Colin; Tusiimire, Jonans; Watson, David G

    2017-05-01

    Separation of sugar isomers extracted from biological samples is challenging because of their natural occurrence as alpha and beta anomers and, in the case of hexoses, in their pyranose and furanose forms. A reductive amination method was developed for the tagging of sugars with the aim of it becoming part of a metabolomics work flow. The best separation of the common hexoses (glucose, fructose, mannose and galactose) was achieved when ²H₅-aniline was used as the tagging reagent in combination with separation on a ZICHILIC column. The method was used to tag a range of sugars including pentoses and uronic acids. The method was simple to perform and was able to improve both the separation of sugars and their response to electrospray ionisation. The method was applied to the profiling of sugars in urine where a number of hexose and pentose isomers could be observed. It was also applied to the quantification of sugars in post-mortem brain samples from three control samples and three samples from individuals who had suffered from bipolar disorder. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. LCS-TA to identify similar fragments in RNA 3D structures.

    PubMed

    Wiedemann, Jakub; Zok, Tomasz; Milostan, Maciej; Szachniuk, Marta

    2017-10-23

    In modern structural bioinformatics, comparison of molecular structures aimed to identify and assess similarities and differences between them is one of the most commonly performed procedures. It gives the basis for evaluation of in silico predicted models. It constitutes the preliminary step in searching for structural motifs. In particular, it supports tracing the molecular evolution. Faced with an ever-increasing amount of available structural data, researchers need a range of methods enabling comparative analysis of the structures from either global or local perspective. Herein, we present a new, superposition-independent method which processes pairs of RNA 3D structures to identify their local similarities. The similarity is considered in the context of structure bending and bonds' rotation which are described by torsion angles. In the analyzed RNA structures, the method finds the longest continuous segments that show similar torsion within a user-defined threshold. The length of the segment is provided as local similarity measure. The method has been implemented as LCS-TA algorithm (Longest Continuous Segments in Torsion Angle space) and is incorporated into our MCQ4Structures application, freely available for download from http://www.cs.put.poznan.pl/tzok/mcq/ . The presented approach ties torsion-angle-based method of structure analysis with the idea of local similarity identification by handling continuous 3D structure segments. The first method, implemented in MCQ4Structures, has been successfully utilized in RNA-Puzzles initiative. The second one, originally applied in Euclidean space, is a component of LGA (Local-Global Alignment) algorithm commonly used in assessing protein models submitted to CASP. This unique combination of concepts implemented in LCS-TA provides a new perspective on structure quality assessment in local and quantitative aspect. A series of computational experiments show the first results of applying our method to comparison of RNA 3D models. LCS-TA can be used for identifying strengths and weaknesses in the prediction of RNA tertiary structures.

  4. Highly sensitive screening method for nitroaromatic, nitramine and nitrate ester explosives by high performance liquid chromatography-atmospheric pressure ionization-mass spectrometry (HPLC-API-MS) in forensic applications.

    PubMed

    Xu, Xiaoma; van de Craats, Anick M; de Bruyn, Peter C A M

    2004-11-01

    A highly sensitive screening method based on high performance liquid chromatography atmospheric pressure ionization mass spectrometry (HPLC-API-MS) has been developed for the analysis of 21 nitroaromatic, nitramine and nitrate ester explosives, which include the explosives most commonly encountered in forensic science. Two atmospheric pressure ionization (API) methods, atmospheric pressure chemical ionization (APCI) and electrospray ionization (ESI), and various experimental conditions have been applied to allow for the detection of all 21 explosive compounds. The limit of detection (LOD) in the full-scan mode has been found to be 0.012-1.2 ng on column for the screening of most explosives investigated. For nitrobenzene, an LOD of 10 ng was found with the APCI method in the negative mode. Although the detection of nitrobenzene, 2-, 3-, and 4-nitrotoluene is hindered by the difficult ionization of these compounds, we have found that by forming an adduct with glycine, LOD values in the range of 3-16 ng on column can be achieved. Compared with previous screening methods with thermospray ionization, the API method has distinct advantages, including simplicity and stability of the method applied, an extended screening range and a low detection limit for the explosives studied.

  5. The extinction law from photometric data: linear regression methods

    NASA Astrophysics Data System (ADS)

    Ascenso, J.; Lombardi, M.; Lada, C. J.; Alves, J.

    2012-04-01

    Context. The properties of dust grains, in particular their size distribution, are expected to differ from the interstellar medium to the high-density regions within molecular clouds. Since the extinction at near-infrared wavelengths is caused by dust, the extinction law in cores should depart from that found in low-density environments if the dust grains have different properties. Aims: We explore methods to measure the near-infrared extinction law produced by dense material in molecular cloud cores from photometric data. Methods: Using controlled sets of synthetic and semi-synthetic data, we test several methods for linear regression applied to the specific problem of deriving the extinction law from photometric data. We cover the parameter space appropriate to this type of observations. Results: We find that many of the common linear-regression methods produce biased results when applied to the extinction law from photometric colors. We propose and validate a new method, LinES, as the most reliable for this effect. We explore the use of this method to detect whether or not the extinction law of a given reddened population has a break at some value of extinction. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (ESO programmes 069.C-0426 and 074.C-0728).
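
    A small simulation, included only to illustrate the kind of bias the paper addresses (it is not the LinES algorithm): when the predictor colour itself carries photometric noise, an ordinary least-squares fit of one colour excess against another underestimates the slope (attenuation). All quantities below are synthetic.

      import numpy as np

      rng = np.random.default_rng(42)
      true_slope = 1.7            # e.g. a colour-excess ratio defining the extinction law
      n = 5000

      x_true = rng.uniform(0.0, 2.0, n)          # noiseless colour excess
      x_obs = x_true + rng.normal(0, 0.15, n)    # predictor with photometric noise
      y_obs = true_slope * x_true + rng.normal(0, 0.15, n)

      # Ordinary least squares of y on the *noisy* x underestimates the slope.
      ols_slope = np.polyfit(x_obs, y_obs, 1)[0]
      print(f"true slope = {true_slope:.2f}, OLS slope = {ols_slope:.2f}")  # attenuated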

  6. Applying under-sampling techniques and cost-sensitive learning methods on risk assessment of breast cancer.

    PubMed

    Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho

    2015-04-01

    Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods consume a large amount of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in low-cost settings. In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital between 2008.1.1 and 2008.12.31. Based on this dataset, we first apply sampling techniques and a dimension reduction method to preprocess the data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with the random forest classifier were 2.9% and 14.87%, respectively. In summary, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
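
    A hedged sketch of the cost-sensitive step using scikit-learn's class_weight option on a random forest; the synthetic data and the cost ratio stand in for the hospital records and are not those used in the study.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import recall_score, precision_score

      # Synthetic imbalanced data standing in for the personal-health-record features.
      X, y = make_classification(n_samples=4000, n_features=12, weights=[0.95, 0.05],
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # Penalise missing a positive (cancer) case much more than a false alarm.
      clf = RandomForestClassifier(n_estimators=200,
                                   class_weight={0: 1, 1: 20},  # illustrative cost ratio
                                   random_state=0)
      clf.fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print("recall   :", recall_score(y_te, pred))
      print("precision:", precision_score(y_te, pred))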

  7. The Value of Satellite Early Warning Systems in Kenya and Guatemala: Results and Lessons Learned from Contingent Valuation and Loss Avoidance Approaches

    NASA Astrophysics Data System (ADS)

    Morrison, I.; Berenter, J. S.

    2017-12-01

    SERVIR, the joint USAID and NASA initiative, conducted two studies to assess the value of two distinctly different Early Warning Systems (EWS) in Guatemala and Kenya. Each study applied a unique method to assess EWS value. The evaluation team conducted a Contingent Valuation (CV) choice experiment to measure the value of a near-real time VIIRS and MODIS-based hot-spot mapping tool for forest management professionals targeting seasonal forest fires in Northern Guatemala. The team also conducted a survey-based Damage and Loss Avoidance (DaLA) exercise to calculate the monetary benefits of a MODIS-derived frost forecasting system for farmers in the tea-growing highlands of Kenya. This presentation compares and contrasts the use and utility of these two valuation approaches to assess EWS value. Although interest in these methods is growing, few empirical studies have applied them to benefit and value assessment for EWS. Furthermore, the application of CV and DaLA methods is much less common outside of the developed world. Empirical findings from these two studies indicated significant value for two substantially different beneficiary groups: natural resource management specialists and smallholder tea farmers. Additionally, the valuation processes generated secondary information that can help improve the format and delivery of both types of EWS outputs for user and beneficiary communities in Kenya and Guatemala. Based on lessons learned from the two studies, this presentation will also compare and contrast the methodological and logistical advantages, challenges, and limitations in applying the CV and DaLA methods in developing countries. By reviewing these two valuation methods alongside each other, the authors will outline conditions where they can be applied - individually or jointly - to other early warning systems and delivery contexts.

  8. Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.

    2009-04-01

    This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common trend two-phase regression setting is assessed by Monte Carlo simulations for data following a log-normal or Gamma distribution. The results show that the transformed methods have increased the power of detection, in comparison with the corresponding original (untransformed) methods. The transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
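
    A minimal sketch of the transformation step using scipy.stats.boxcox (the changepoint tests themselves are not reproduced); the precipitation-like data are simulated.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      precip = rng.gamma(shape=0.8, scale=5.0, size=1000) + 0.1  # strictly positive amounts

      # Box-Cox requires positive data; lambda is estimated by maximum likelihood.
      transformed, lam = stats.boxcox(precip)
      print(f"estimated lambda = {lam:.3f}")
      print("skewness before:", stats.skew(precip), " after:", stats.skew(transformed))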

  9. Micelle Enhanced Fluorimetric and Thin Layer Chromatography Densitometric Methods for the Determination of (±) Citalopram and its S – Enantiomer Escitalopram

    PubMed Central

    Taha, Elham A.; Salama, Nahla N.; Wang, Shudong

    2009-01-01

    Two sensitive and validated methods were developed for the determination of the racemic mixture citalopram and its enantiomer S-(+)-escitalopram. The first method was based on direct measurement of the intrinsic fluorescence of escitalopram using sodium dodecyl sulfate as a micelle enhancer. It was further applied to determine escitalopram in spiked human plasma, as well as in the presence of common and co-administered drugs. The second method, TLC densitometry based on various chiral selectors, was investigated. The optimum TLC conditions were found to be sensitive and selective for the identification and quantitative determination of the enantiomeric purity of escitalopram in drug substances and drug products. The method can be useful to investigate adulteration of the pure isomer with the cheaper racemic form. PMID:19652757

  10. Micelle enhanced fluorimetric and thin layer chromatography densitometric methods for the determination of (+/-) citalopram and its S-enantiomer escitalopram.

    PubMed

    Taha, Elham A; Salama, Nahla N; Wang, Shudong

    2009-04-07

    Two sensitive and validated methods were developed for the determination of the racemic mixture citalopram and its enantiomer S-(+)-escitalopram. The first method was based on direct measurement of the intrinsic fluorescence of escitalopram using sodium dodecyl sulfate as a micelle enhancer. It was further applied to determine escitalopram in spiked human plasma, as well as in the presence of common and co-administered drugs. The second method, TLC densitometry based on various chiral selectors, was investigated. The optimum TLC conditions were found to be sensitive and selective for the identification and quantitative determination of the enantiomeric purity of escitalopram in drug substances and drug products. The method can be useful to investigate adulteration of the pure isomer with the cheaper racemic form.

  11. Introduction to the focus issue: fifty years of chaos: applied and theoretical.

    PubMed

    Hikihara, Takashi; Holmes, Philip; Kambe, Tsutomu; Rega, Giuseppe

    2012-12-01

    The discovery of deterministic chaos in the late nineteenth century, its subsequent study, and the development of mathematical and computational methods for its analysis have substantially influenced the sciences. Chaos is, however, only one phenomenon in the larger area of dynamical systems theory. This Focus Issue collects 13 papers, from authors and research groups representing the mathematical, physical, and biological sciences, that were presented at a symposium held at Kyoto University from November 28 to December 2, 2011. The symposium, sponsored by the International Union of Theoretical and Applied Mechanics, was called 50 Years of Chaos: Applied and Theoretical. Following some historical remarks to provide a background for the last 50 years, and for chaos, this Introduction surveys the papers and identifies some common themes that appear in them and in the theory of dynamical systems.

  12. Real time system design of motor imagery brain-computer interface based on multi band CSP and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Li, Xiaoqin; Bian, Yan

    2018-04-01

    Motor imagery (MI) is an effective method to promote the recovery of limbs in patients after stroke. An online MI brain-computer interface (BCI) system that applies MI can enhance the patient's participation and accelerate the recovery process. The traditional method processes the electroencephalogram (EEG) induced by MI with the common spatial pattern (CSP) algorithm, which extracts information from a single frequency band. In order to further improve the classification accuracy of the system, information from two characteristic frequency bands is extracted. The effectiveness of the proposed feature extraction method is verified by off-line analysis of competition data and by analysis of the online system.
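
    A minimal sketch, under our own assumptions, of the two-class CSP computation via a generalized eigendecomposition; band-pass filtering into the two characteristic frequency bands and the SVM classifier are omitted, and the random arrays below merely stand in for EEG epochs.

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_pairs=2):
          """trials_*: arrays of shape (n_trials, n_channels, n_samples), one per class.
          Returns spatial filters (rows) that discriminate class A from class B."""
          def mean_cov(trials):
              covs = [np.cov(t) for t in trials]          # channel-by-channel covariance
              return np.mean(covs, axis=0)
          Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
          # Generalised eigenproblem Ca w = lambda (Ca + Cb) w
          vals, vecs = eigh(Ca, Ca + Cb)
          order = np.argsort(vals)
          # Keep filters from both ends of the spectrum (most discriminative directions).
          idx = np.r_[order[:n_pairs], order[-n_pairs:]]
          return vecs[:, idx].T

      def log_var_features(trials, W):
          """Log-variance of spatially filtered trials, the usual CSP feature."""
          feats = []
          for t in trials:
              z = W @ t
              v = z.var(axis=1)
              feats.append(np.log(v / v.sum()))
          return np.array(feats)

      # Toy usage with random data standing in for band-passed EEG epochs.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(30, 8, 250))
      B = rng.normal(size=(30, 8, 250)) * 1.5
      W = csp_filters(A, B)
      print(log_var_features(A, W).shape)   # (30, 4)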

  13. EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.

    PubMed

    Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina

    2009-04-01

    In this paper we investigate the use of data driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
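
    A hedged illustration of the clustering idea only: K-Means applied to normalized voxel time series so that Euclidean distance approximates correlation distance. Preprocessing, brain masking and the Nyström approximation used for spectral clustering are not shown, and the data are synthetic.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_voxels, n_timepoints, k = 2000, 240, 7

      # Toy data: each voxel follows one of k latent resting-state time courses plus noise.
      latent = rng.normal(size=(k, n_timepoints))
      labels_true = rng.integers(0, k, size=n_voxels)
      ts = latent[labels_true] + 0.8 * rng.normal(size=(n_voxels, n_timepoints))

      # Normalise each voxel's time series so Euclidean distance ~ correlation distance.
      ts -= ts.mean(axis=1, keepdims=True)
      ts /= ts.std(axis=1, keepdims=True)

      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ts)
      print(np.bincount(labels))   # voxels per putative functional system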

  14. A simple algorithm for quantifying DNA methylation levels on multiple independent CpG sites in bisulfite genomic sequencing electropherograms.

    PubMed

    Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A

    2008-06-01

    DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
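
    A heavily hedged sketch of the underlying arithmetic rather than the published Mquant algorithm: after bisulfite conversion, unmethylated cytosine reads as thymine, so the C/(C+T) peak-height ratio at a CpG approximates the methylation level. The peak heights below are hypothetical.

      def methylation_fraction(c_peak_height, t_peak_height):
          """Percent methylation at a CpG from electropherogram peak heights.

          After bisulfite conversion, unmethylated C reads as T while methylated C
          stays C, so the C/(C+T) peak-height ratio approximates the methylation level."""
          total = c_peak_height + t_peak_height
          return 100.0 * c_peak_height / total if total else float("nan")

      # Hypothetical peak heights read from a four-dye trace at one CpG site:
      print(f"{methylation_fraction(820, 430):.1f}% methylated")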

  15. Novel liposomal technology applied in esophageal cancer treatment

    NASA Astrophysics Data System (ADS)

    Yeh, Chia-Hsien; Hsieh, Yei-San; Yang, Pei-wen; Huang, Leaf; Hsu, Yih-Chih

    2018-02-01

    Cisplatin (CDDP) is commonly used as a chemotherapeutic drug, mainly for the treatment of malignant epithelial cell tumors. We have developed a new method based on an innovative lipid calcium phosphate formulation, which encapsulates hydrophobic drugs to form liposomal nanoparticles. An esophageal cancer xenograft model was used to investigate the efficacy of the liposomal nanoparticles, and it showed good therapeutic efficacy with fewer side effects. The liposomal nanoparticles exhibited a better therapeutic effect than conventional CDDP.

  16. Two Paradoxes in Linear Regression Analysis.

    PubMed

    Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong

    2016-12-25

    Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.

  17. A method to estimate the additional uncertainty in gap-filled NEE resulting from long gaps in the CO2 flux record

    Treesearch

    Andrew D. Richardson; David Y. Hollinger

    2007-01-01

    Missing values in any data set create problems for researchers. The process by which missing values are replaced, and the data set is made complete, is generally referred to as imputation. Within the eddy flux community, the term "gap filling" is more commonly applied. A major challenge is that random errors in measured data result in uncertainty in the gap-...

  18. Developing Army Leaders: Lessons for Teaching Critical Thinking in Distributed, Resident, and Mixed-Delivery Venues

    DTIC Science & Technology

    2014-01-01

    Based and Affective Theories of Learning Outcomes to New Methods of Training Evaluation,” Journal of Applied Psychology Monograph, Vol. 2, No. 2, 1993...officers. Thus, the Command and Staff General School offers non-resident alternatives for the Common Core: an advanced distributed learning (ADL...course delivered online and a course combining in-person instruction and distributed learning taught in The Army School System (TASS). This report

  19. Evidence That Counts--What Happens When Teachers Apply Scientific Methods to Their Practice: Twelve Teacher-Led Randomised Controlled Trials and Other Styles of Experimental Research

    ERIC Educational Resources Information Center

    Churches, Richard; McAleavy, Tony

    2015-01-01

    This publication contains 12 (A3 open-out) poster-style reports of teacher experimental research. The style of presentation parallels the type of preliminary reporting common at academic conferences and postgraduate events. At the same time, it aims to act as a form of short primer to introduce teachers to the basic options that there are when…

  20. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Oftentimes, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing the complexity of large networks is the use of hierarchical clustering and nesting, with expand-collapse operations applied on demand during analysis. Another such method is hiding currently unnecessary details, to be gradually revealed later on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing the complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web-based graph visualization library named Cytoscape.js to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks, making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open-source, customizable software libraries, and by introducing some heuristics which can be applied on top of such complexity management techniques to preserve the user's mental map.
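
    The libraries described here are JavaScript (Cytoscape.js plug-ins); to keep the examples in one language, the sketch below is a generic Python/networkx illustration of the collapse operation only (replace a cluster of nodes by a single node while preserving boundary-crossing edges), not the authors' API.

      import networkx as nx

      def collapse(g, cluster, new_node):
          """Return a copy of g with all nodes in `cluster` merged into `new_node`,
          keeping every edge that crosses the cluster boundary."""
          h = g.copy()
          h.add_node(new_node)
          for u, v in g.edges():
              u2 = new_node if u in cluster else u
              v2 = new_node if v in cluster else v
              if u2 != v2:
                  h.add_edge(u2, v2)
          h.remove_nodes_from(cluster)
          return h

      g = nx.karate_club_graph()
      small = collapse(g, cluster={0, 1, 2, 3, 7, 13}, new_node="module-A")
      print(g.number_of_nodes(), "->", small.number_of_nodes())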

  1. Measurement of "total" microcystins using the MMPB/LC/MS ...

    EPA Pesticide Factsheets

    The detection and quantification of microcystins, a family of toxins associated with harmful algal blooms, is complicated by their structural diversity and a lack of commercially available analytical standards for method development. As a result, most detection methods have focused on either a subset of microcystin congeners, as in US EPA Method 544, or on techniques which are sensitive to structural features common to most microcystins, as in the anti-ADDA ELISA method. A recent development has been the use of 2-methyl-3-methoxy-4-phenylbutyric acid (MMPB), which is produced by chemical oxidation of the ADDA moiety in most microcystin congeners, as a proxy for the sum of congeners present. Conditions for the MMPB derivatization were evaluated and applied to water samples obtained from various HAB-impacted surface waters, and results were compared with congener-based LC/MS/MS and ELISA methods.

  2. A Computer Based Moire Technique To Measure Very Small Displacements

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.

    1987-02-01

    The accuracy that can be achieved in the measurement of very small displacements in techniques such as moire, holography and speckle is limited by the noise inherent in the optical devices used. To reduce the noise-to-signal ratio, the moire method can be utilized. Two systems of carrier fringes are introduced: an initial system before the load is applied and a final system after the load is applied. The moire pattern of these two systems contains the sought displacement information, and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.

  3. Air-cooling mathematical analysis as inferred from the air-temperature observation during the 1st total occultation of the Sun of the 21st century at Lusaka, Zambia

    NASA Astrophysics Data System (ADS)

    Peñaloza-Murillo, Marcos A.; Pasachoff, Jay M.

    2015-04-01

    We analyze mathematically air temperature measurements made near the ground by the Williams College expedition to observe the first total occultation of the Sun [TOS (commonly known as a total solar eclipse)] of the 21st century in Lusaka, Zambia, in the afternoon of June 21, 2001. To do so, we have revisited some earlier and contemporary methods to test their usefulness for this analysis. Two of these methods, based on a radiative scheme for solar radiation modeling and originally applied to a morning occultation, have successfully been combined to obtain the delay function for an afternoon occultation, via derivation of the so-called instantaneous temperature profiles. For this purpose, we have followed the suggestion given by the third of these previously applied methods to calculate this function, although by itself it failed to do so, at least for this occultation. The analysis has taken into account the limb-darkening, occultation and obscuration functions. The delay function obtained describes fairly well the lag between the solar radiation variation and the delayed air temperature measured. Also, in this investigation, a statistical study has been carried out to get information on the convection activity produced during this event. For that purpose, the fluctuations generated by turbulence have been studied by analyzing variance and residuals. The results, indicating an irreversible steady decrease of this activity, are consistent with those published by other studies. Finally, the air temperature drop due to this event is well estimated by applying the empirical scheme given by the fourth of the previously applied methods, based on the daily temperature amplitude and the standardized middle time of the occultation. It is demonstrated then that by using a simple set of air temperature measurements obtained during solar occultations, along with some supplementary data, a simple mathematical analysis can be achieved by applying the four methods reviewed here.

  4. Extracting decision rules from police accident reports through decision trees.

    PubMed

    de Oña, Juan; López, Griselda; Abellán, Joaquín

    2013-01-01

    Given the current number of road accidents, the aim of many road safety analysts is to identify the main factors that contribute to crash severity. To pinpoint those factors, this paper shows an application that applies some of the methods most commonly used to build decision trees (DTs), which have not been applied to the road safety field before. An analysis of accidents on rural highways in the province of Granada (Spain) between 2003 and 2009 (both inclusive) showed that the methods used to build DTs serve our purpose and may even be complementary. Applying these methods has enabled potentially useful decision rules to be extracted that could be used by road safety analysts. For instance, some of the rules may indicate that women, contrary to men, increase their risk of severity under bad lighting conditions. The rules could be used in road safety campaigns to mitigate specific problems. This would enable managers to implement priority actions based on a classification of accidents by types (depending on their severity). However, the primary importance of this proposal is that other databases not used here (i.e. other infrastructure, roads and countries) could be used to identify unconventional problems in a manner easy for road safety managers to understand, as decision rules. Copyright © 2012 Elsevier Ltd. All rights reserved.
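
    A hedged sketch of how decision rules can be read off a fitted tree with scikit-learn's export_text; the synthetic data and the feature names are illustrative and do not come from the Granada accident database, and the specific DT-building methods compared in the paper are not reproduced.

      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Synthetic stand-in for accident records: features -> severity (0 = slight, 1 = severe)
      X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                                 random_state=0)
      feature_names = ["lighting", "sex", "age", "road_width", "time_of_day"]  # illustrative

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
      tree.fit(X, y)

      # Each root-to-leaf path is a candidate decision rule for road-safety analysts.
      print(export_text(tree, feature_names=feature_names))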

  5. Morpho-functional implications of myofascial stretching applied to muscle chains: A case study.

    PubMed

    Raţ, Bogdan Constantin; Raţă, Marinela; Antohe, Bogdan

    2018-03-16

    Most lesions of the soft tissues, especially those at the muscle level, are due to the lack of elasticity of the connective tissue and fascia. Stretching is one of the most commonly used methods of treatment for such musculoskeletal issues. This study tracks the effects of stretching on the electromyographic activity of muscle chains, applied to a 24-year-old athlete diagnosed with Haglund's disease. For the evaluation, we used visual examination and surface electromyography (maximum voluntary isometric contraction). The therapeutic intervention consisted in the application of static stretching positions intended to elongate the shortened muscle chains. The treatment program had a duration of 2 months, with a frequency of 2 sessions per week and an average session duration of 60 minutes. The posterior muscle chains recorded an increase in EMG activity, while the anterior muscle chains tended to diminish their EMG activity. As a result of the applied treatment, all the evaluated muscle chains recorded a rebalancing of the electromyographic activity, demonstrating the efficiency of stretching as a method of global treatment of muscle chains. By analysing all the data, we have come to the conclusion that static stretching is an effective treatment method for shortened muscle chains.

  6. Evolving Concepts on Adjusting Human Resting Energy Expenditure Measurements for Body Size

    PubMed Central

    Heymsfield, Steven B.; Thomas, Diana; Bosy-Westphal, Anja; Shen, Wei; Peterson, Courtney M.; Müller, Manfred J.

    2012-01-01

    Establishing if an adult’s resting energy expenditure (REE) is high or low for their body size is a pervasive question in nutrition research. Early workers applied body mass and height as size measures and formulated the Surface Law and Kleiber’s Law, although each has limitations when adjusting REE. Body composition methods introduced during the mid-twentieth century provided a new opportunity to identify metabolically homogeneous “active” compartments. These compartments all show improved correlations with REE estimates over body mass-height approaches, but collectively share a common limitation: REE-body composition ratios are not “constant” but vary across men and women and with race, age, and body size. The now-accepted alternative to ratio-based norms is to adjust for predictors by applying regression models to calculate “residuals” that establish if a REE is relatively high or low. The distinguishing feature of statistical REE-body composition models is a “non-zero” intercept of unknown origin. The recent introduction of imaging methods has allowed development of physiological tissue-organ based REE prediction models. Herein we apply these imaging methods to provide a mechanistic explanation, supported by experimental data, for the non-zero intercept phenomenon and in that context propose future research directions for establishing between subject differences in relative energy metabolism. PMID:22863371
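
    A minimal numeric sketch of the residual approach described above, under our own simulated data: regress REE on body-composition predictors (note the non-zero intercept) and judge whether a subject's REE is relatively high or low from the sign of the residual.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      n = 300
      ffm = rng.normal(55, 10, n)          # fat-free mass, kg (simulated)
      fm = rng.normal(25, 8, n)            # fat mass, kg (simulated)
      ree = 370 + 21.6 * ffm + 3.0 * fm + rng.normal(0, 90, n)   # kcal/day, toy model

      X = np.column_stack([ffm, fm])
      model = LinearRegression().fit(X, ree)
      print("intercept (non-zero):", round(model.intercept_, 1))

      # Residual > 0 means REE is high for this person's body composition, < 0 means low.
      residuals = ree - model.predict(X)
      print("subject 0 residual (kcal/day):", round(residuals[0], 1))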

  7. Processing-optimised imaging of analog geological models by electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Ortiz Alemán, C.; Espíndola-Carmona, A.; Hernández-Gómez, J. J.; Orozco Del Castillo, MG

    2017-06-01

    In this work, the electrical capacitance tomography (ECT) technique is applied to monitoring the internal deformation of geological analog models, which are used to study structural deformation mechanisms, in particular for simulating the migration and emplacement of allochtonous salt bodies. A rectangular ECT sensor was used for internal visualization of analog geologic deformation. The monitoring of analog models consists in the reconstruction of permittivity images from the capacitance measurements obtained by introducing the model inside the ECT sensor. A simulated annealing (SA) algorithm is used as the reconstruction method and is optimized by taking full advantage of some special features in a linearized version of this inverse approach. As a second part of this work, our SA image reconstruction algorithm is applied to synthetic models, where its performance is evaluated in comparison with other commonly used algorithms such as linear back-projection and iterative Landweber methods. Finally, the SA method is applied to visualise two simple geological analog models. Encouraging results were obtained in terms of the quality of the reconstructed images, as interfaces corresponding to the main geological units in the analog model were clearly distinguishable in them. The results are reliable and quite useful for real-time, non-invasive monitoring of the internal deformation of analog geological models.
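
    A generic, hedged sketch of a simulated annealing loop for a linearized ECT inverse problem c ≈ S p: perturb one pixel of the permittivity image, accept or reject by the Metropolis rule, and cool the temperature. The sensitivity matrix S here is random and merely stands in for the real forward model.

      import numpy as np

      rng = np.random.default_rng(0)
      n_meas, n_pix = 66, 256                       # e.g. 12-electrode sensor, 16x16 image
      S = rng.normal(size=(n_meas, n_pix))          # stand-in linearised sensitivity matrix
      p_true = (rng.random(n_pix) > 0.7).astype(float)
      c_meas = S @ p_true + 0.01 * rng.normal(size=n_meas)

      def misfit(p):
          r = S @ p - c_meas
          return float(r @ r)

      p = np.full(n_pix, 0.5)                       # initial homogeneous image
      T, cooling = 1.0, 0.999
      energy = misfit(p)
      for _ in range(20000):
          cand = p.copy()
          i = rng.integers(n_pix)
          cand[i] = np.clip(cand[i] + rng.normal(0, 0.3), 0.0, 1.0)  # perturb one pixel
          e_new = misfit(cand)
          # Metropolis rule: always accept improvements, sometimes accept worse states.
          if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
              p, energy = cand, e_new
          T *= cooling
      print("final data misfit:", round(energy, 4))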

  8. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, which include the computation of the Jacobian matrix, which may be difficult or even impossible to obtain, and the need to solve the Newton system in every iteration. Also, a common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk is updated via PSB and its efficiency is improved while requiring low memory storage, which is the main aim of this paper. The preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
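
    A hedged sketch of the standard Powell-Symmetric-Broyden update for a Jacobian approximation B inside a damped quasi-Newton iteration for F(x) = 0; the paper's improved update of the inverse approximation Hk is not reproduced, and the toy system below is ours.

      import numpy as np

      def psb_update(B, s, y):
          """Powell-Symmetric-Broyden update of a Jacobian approximation B.

          s = x_{k+1} - x_k, y = F(x_{k+1}) - F(x_k); returns B_{k+1} satisfying
          the secant condition B_{k+1} s = y while keeping the update symmetric."""
          r = y - B @ s
          ss = float(s @ s)
          return B + (np.outer(r, s) + np.outer(s, r)) / ss - (float(r @ s) / ss**2) * np.outer(s, s)

      def quasi_newton_psb(F, x0, tol=1e-8, max_iter=100):
          x = np.asarray(x0, dtype=float)
          B = np.eye(x.size)                       # initial Jacobian approximation
          fx = F(x)
          for _ in range(max_iter):
              step = np.linalg.solve(B, -fx)       # B_k step = -F(x_k)
              t, x_new = 1.0, x + step
              f_new = F(x_new)
              # Simple backtracking to keep the residual from growing.
              while np.linalg.norm(f_new) > np.linalg.norm(fx) and t > 1e-4:
                  t *= 0.5
                  x_new = x + t * step
                  f_new = F(x_new)
              if np.linalg.norm(f_new) < tol:
                  return x_new
              B = psb_update(B, x_new - x, f_new - fx)
              x, fx = x_new, f_new
          return x

      # Toy system with a symmetric Jacobian: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0
      F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
      print(quasi_newton_psb(F, [1.0, 1.0]))       # expected root near (1, 2)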

  9. Higuchi’s Method applied to detection of changes in timbre of digital sound synthesis of string instruments with the functional transformation method

    NASA Astrophysics Data System (ADS)

    Kanjanapen, Manorth; Kunsombat, Cherdsak; Chiangga, Surasak

    2017-09-01

    The functional transformation method (FTM) is a powerful tool for the detailed investigation of digital sound synthesis by physical modeling: the underlying partial differential equation (PDE) is solved directly, yielding the resulting sound or the vibrational characteristics at discretized points of real instruments. In this paper, we present Higuchi's method to examine differences in timbre and to estimate the fractal dimension of musical signals synthesized by the FTM, which contains information about their geometrical structure. With Higuchi's method the whole process is uncomplicated and fast, and the analysis is easy to carry out without expertise in physics or virtuoso musicianship, so that ordinary listeners can judge whether the presented sounds are similar.
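
    A minimal implementation sketch of Higuchi's fractal-dimension estimate for a 1-D signal, under the usual formulation; it is not tied to the FTM synthesis itself, and the white-noise check is only a rough sanity test.

      import numpy as np

      def higuchi_fd(x, k_max=16):
          """Estimate the fractal dimension of a 1-D signal with Higuchi's method.

          For each delay k, average curve lengths L(k) over the k possible starting
          offsets; the slope of log L(k) versus log(1/k) estimates the dimension."""
          x = np.asarray(x, dtype=float)
          N = x.size
          ks, Ls = [], []
          for k in range(1, k_max + 1):
              lengths = []
              for m in range(k):
                  idx = np.arange(m, N, k)
                  if idx.size < 2:
                      continue
                  dist = np.abs(np.diff(x[idx])).sum()
                  norm = (N - 1) / ((idx.size - 1) * k)
                  lengths.append(dist * norm / k)
              ks.append(k)
              Ls.append(np.mean(lengths))
          slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
          return slope

      # A rough check: white noise should give a dimension close to 2.
      rng = np.random.default_rng(0)
      print(round(higuchi_fd(rng.normal(size=4096)), 2))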

  10. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic power flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two most commonly used approaches to solving PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods: it not only has high computing efficiency but also provides solutions with sufficient accuracy, which makes it very suitable for on-line analysis.
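
    A hedged sketch of the joint-empirical-distribution idea: resample whole historical rows of simultaneous wind-farm outputs so their correlation is preserved, then run a deterministic power flow per draw. solve_power_flow below is a placeholder, not a real solver, and all numbers are toy values.

      import numpy as np

      def solve_power_flow(wind_injections):
          """Placeholder for a deterministic power-flow solver; returns a toy
          'line loading' so the sampling loop is runnable end to end."""
          return 0.6 + 0.005 * wind_injections.sum()

      # Historical simultaneous outputs of three correlated wind farms (MW), toy numbers.
      rng = np.random.default_rng(0)
      base = rng.gamma(2.0, 20.0, size=2000)
      history = np.column_stack([base, 0.8 * base + rng.normal(0, 5, 2000),
                                 0.6 * base + rng.normal(0, 8, 2000)])

      # Joint empirical distribution: resample whole rows, preserving correlation.
      n_draws = 5000
      rows = rng.integers(0, history.shape[0], size=n_draws)
      loadings = np.array([solve_power_flow(history[r]) for r in rows])
      print("P(loading > 1.0) =", (loadings > 1.0).mean())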

  11. Diffuse reflectance stratigraphy - a new method in the study of loess (?)

    NASA Astrophysics Data System (ADS)

    József, Szeberényi; Balázs, Bradák; Klaudia, Kiss; József, Kovács; György, Varga; Réka, Balázs; Viczián, István

    2017-04-01

    The different varieties of loess (and the intercalated paleosol layers) together constitute one of the most widespread terrestrial sediments, deposited, altered, and redeposited in the course of the changing climatic conditions of the Pleistocene. To reveal more information about Pleistocene climate cycles and/or environments, a detailed lithostratigraphical subdivision and classification of the loess variations and paleosols is necessary. Besides numerous methods such as various field measurements, semi-quantitative tests and laboratory investigations, diffuse reflectance spectroscopy (DRS) is one of the methods frequently applied to loess/paleosol sequences. Generally, DRS has been used to separate the detrital and pedogenic mineral components of loess sections by the hematite/goethite ratio. DRS has also been applied jointly with various environmental magnetic investigations such as magnetic susceptibility and isothermal remanent magnetization measurements. In our study the so-called "diffuse reflectance stratigraphy method" was developed. First, a complex mathematical method was applied to compare the results of the spectral reflectance measurements. One of the most preferred multivariate methods is cluster analysis; its purpose here is to group and compare the loess variations and paleosols based on the similarity and common properties of their reflectance curves. Second, besides the basic subdivision of the profiles by the different reflectance curves of the layers, the most characteristic wavelength section of the reflectance curve was determined. This section played the most important role in the classification of the different materials of the profile. The reflectance values of the individual samples at the characteristic wavelength were plotted as a function of depth and correlated well with other proxies such as grain size distribution and magnetic susceptibility data. The results of the correlation demonstrate the significance of "combined reflectance stratigraphy" both as a stratigraphical method and as an environmental proxy.

  12. FIXED DOSE COMBINATIONS WITH SELECTIVE BETA-BLOCKERS: QUANTITATIVE DETERMINATION IN BIOLOGICAL FLUIDS.

    PubMed

    Mahu, Ştefania Corina; Hăncianu, Monica; Agoroaei, Luminiţa; Grigoriu, Ioana Cezara; Strugaru, Anca Monica; Butnaru, Elena

    2015-01-01

    Hypertension is one of the most common causes of death, a complex and incompletely controlled disease for millions of patients. Metoprolol, bisoprolol, nebivolol and atenolol are selective beta-blockers frequently used in the management of arterial hypertension, alone or in fixed combination with other substances. This study presents the most widely used analytical methods for the simultaneous determination in biological fluids of fixed combinations containing selective beta-blockers. Articles in the PubMed, Science Direct and Wiley Journals databases published between 2004 and 2014 were reviewed. Methods such as liquid chromatography-mass spectrometry-mass spectrometry (LC-MS/MS), high performance liquid chromatography (HPLC) or high performance liquid chromatography-mass spectrometry (HPLC-MS) were used for the determination of fixed combinations containing beta-blockers in human plasma, rat plasma and human breast milk. The LC-MS/MS method was used for the simultaneous determination of fixed combinations of metoprolol with simvastatin, hydrochlorothiazide or ramipril, combinations of nebivolol and valsartan, or atenolol and amlodipine. Biological samples were processed by protein precipitation techniques or by liquid-liquid extraction. For the determination of fixed dose combinations of felodipine and metoprolol in rat plasma, liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS/MS) was applied, using phenacetin as the internal standard. The HPLC-MS method was applied for the determination of bisoprolol and hydrochlorothiazide in human plasma. For the determination of atenolol and chlorthalidone in human breast milk and human plasma, the HPLC method was used. The analytical methods were validated according to the specialized guidelines and were applied to biological samples, which confirms the continued interest of researchers in this field.

  13. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
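
    A generic, hedged sketch of the bootstrap idea only: refit on resampled patient-level data and read the uncertainty in incremental life expectancy from the distribution of resampled estimates. fit_and_compute_inc_le is a placeholder for the RPSFT adjustment plus decision model, which is not reproduced, and the trial data are simulated.

      import numpy as np

      rng = np.random.default_rng(0)

      def fit_and_compute_inc_le(times_treat, times_ctrl):
          """Placeholder: difference in mean survival as a stand-in for the
          incremental life expectancy produced by the full RPSFT + decision model."""
          return times_treat.mean() - times_ctrl.mean()

      # Toy trial data (years); the real analysis would start from patient-level data.
      treat = rng.exponential(3.0, size=150)
      ctrl = rng.exponential(2.2, size=150)

      point = fit_and_compute_inc_le(treat, ctrl)
      boot = np.array([
          fit_and_compute_inc_le(rng.choice(treat, treat.size, replace=True),
                                 rng.choice(ctrl, ctrl.size, replace=True))
          for _ in range(2000)
      ])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"incremental LE = {point:.2f} years, 95% CI ({lo:.2f}, {hi:.2f})")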

  14. Automatic control of finite element models for temperature-controlled radiofrequency ablation.

    PubMed

    Haemmerich, Dieter; Webster, John G

    2005-07-14

    The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation, a common control mode is the temperature-controlled mode. Commercial FEM packages do not support automated temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and controls the applied voltage to keep the temperature constant. We created a closed-loop system consisting of a FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed-loop system simulation. We present results of a temperature-controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled the applied voltage in the FEM model to bring the electrodes to, and keep them at, the target temperature of 100 degrees C. The closed-loop system simulation output closely correlated with the FEM model, and allowed us to optimize the control parameters. The closed-loop control of the FEM model allowed us to implement temperature-controlled RF ablation with minimal user input.
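
    A minimal sketch of the closed loop described above, with a toy first-order thermal model standing in for the FEM solver; the PI gains, time step and plant constants are illustrative and are not those of the RITA model 30 simulation.

      # Minimal PI temperature controller driving a toy tissue model (not the FEM model).
      dt = 0.1              # s, control/time step
      kp, ki = 0.8, 0.4     # illustrative PI gains
      target = 100.0        # degrees C at the electrode tip

      temp, integral, voltage = 37.0, 0.0, 0.0
      for step in range(600):                      # 60 s of simulated ablation
          error = target - temp
          integral += error * dt
          voltage = max(0.0, kp * error + ki * integral)   # applied voltage (clamped)

          # Toy plant: heating proportional to V^2, first-order cooling toward 37 C.
          temp += dt * (0.02 * voltage**2 - 0.05 * (temp - 37.0))

      print(f"tip temperature after 60 s: {temp:.1f} C, applied voltage: {voltage:.1f} V")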

  15. The Potential of Sequential Extraction in the Characterisation and Management of Wastes from Steel Processing: A Prospective Review

    PubMed Central

    Rodgers, Kiri J.; Hursthouse, Andrew; Cuthbert, Simon

    2015-01-01

    As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (The community Bureau of reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent), hence a substantially modified SEP is necessary to deal with particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes. PMID:26393631

  16. The Potential of Sequential Extraction in the Characterisation and Management of Wastes from Steel Processing: A Prospective Review.

    PubMed

    Rodgers, Kiri J; Hursthouse, Andrew; Cuthbert, Simon

    2015-09-18

    As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (The community Bureau of reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent), hence a substantially modified SEP is necessary to deal with particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes.

  17. Critical Void Volume Fraction fc at Void Coalescence for S235JR Steel at Low Initial Stress Triaxiality

    NASA Astrophysics Data System (ADS)

    Grzegorz Kossakowski, Paweł; Wciślik, Wiktor

    2017-10-01

    The paper is concerned with the nucleation, growth and coalescence of microdefects in the form of voids in S235JR steel. The material is known to be one of the basic steel grades commonly used in the construction industry. The theory and methods of damage mechanics were applied to determine and describe the failure mechanisms that occur when the material undergoes deformation. Until now, engineers have generally employed the Gurson-Tvergaard-Needleman model. This material model based on damage mechanics is well suited to define and analyze failure processes taking place in the microstructure of S235JR steel. It is particularly important to determine the critical void volume fraction fc, which is one of the basic parameters of the Gurson-Tvergaard-Needleman material model. As the critical void volume fraction fc refers to the failure stage, it is determined from the data collected for the void coalescence phase. A case of multi-axial stresses is considered taking into account the effects of spatial stress state. In this study, the parameter of stress triaxiality η was used to describe the failure phenomena. Cylindrical tensile specimens with a circumferential notch were analysed to obtain low values of initial stress triaxiality (η = 0.556 of the range) in order to determine the critical void volume fraction fc. It is essential to emphasize how unique the method applied is and how different it is from the other more common methods involving parameter calibration, i.e. curve-fitting methods. The critical void volume fraction fc at void coalescence was established through digital image analysis of surfaces of S235JR steel, which involved studying real, physical results obtained directly from the material tested.

  18. Predicting the susceptibility to gully initiation in data-poor regions

    NASA Astrophysics Data System (ADS)

    Dewitte, Olivier; Daoudi, Mohamed; Bosco, Claudio; Van Den Eeckhaut, Miet

    2015-01-01

    Permanent gullies are common features in many landscapes and quite often they represent the dominant soil erosion process. Once a gully has initiated, field evidence shows that gully channel formation and headcut migration rapidly occur. In order to prevent the undesired effects of gullying, there is a need to predict the places where new gullies might initiate. From detailed field measurements, studies have demonstrated strong inverse relationships between slope gradient of the soil surface (S) and drainage area (A) at the point of channel initiation across catchments in different climatic and morphological environments. Such slope-area thresholds (S-A) can be used to predict locations in the landscape where gullies might initiate. However, acquiring S-A requires detailed field investigations and accurate high resolution digital elevation data, which are usually difficult to acquire. To circumvent this issue, we propose a two-step method that uses published S-A thresholds and a logistic regression analysis (LR). S-A thresholds from the literature are used as proxies of field measurement. The method is calibrated and validated on a watershed, close to the town of Algiers, northern Algeria, where gully erosion affects most of the slopes. The gullies extend up to several kilometres in length and cover 16% of the study area. First we reconstruct the initiation areas of the existing gullies by applying S-A thresholds for similar environments. Then, using the initiation area map as the dependent variable with combinations of topographic and lithological predictor variables, we calibrate several LR models. It provides relevant results in terms of statistical reliability, prediction performance, and geomorphological significance. This method using S-A thresholds with data-driven assessment methods like LR proves to be efficient when applied to common spatial data and establishes a methodology that will allow similar studies to be undertaken elsewhere.
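
    A minimal sketch of the second step under our own synthetic data: a logistic regression of an S-A-style initiation map on topographic and lithological predictors, returning a per-cell susceptibility probability.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n_cells = 5000

      # Synthetic raster cells: slope (m/m), log drainage area, and a lithology class.
      slope = rng.uniform(0.01, 0.6, n_cells)
      log_area = rng.uniform(2, 6, n_cells)
      soft_lithology = rng.integers(0, 2, n_cells)

      # "Observed" initiation cells, here generated from an S-A style rule plus noise.
      initiated = ((np.log(slope) + 0.4 * log_area + 0.5 * soft_lithology
                    + rng.normal(0, 0.5, n_cells)) > 1.2).astype(int)

      X = np.column_stack([slope, log_area, soft_lithology])
      lr = LogisticRegression(max_iter=1000).fit(X, initiated)

      susceptibility = lr.predict_proba(X)[:, 1]      # probability of gully initiation
      print("AUC on training cells:", round(roc_auc_score(initiated, susceptibility), 3))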

  19. Qualitative Methods in Mental Health Services Research

    PubMed Central

    Palinkas, Lawrence A.

    2014-01-01

    Qualitative and mixed methods play a prominent role in mental health services research. However, the standards for their use are not always evident, especially for those not trained in such methods. This paper reviews the rationale and common approaches to using qualitative and mixed methods in mental health services and implementation research based on a review of the papers included in this special series along with representative examples from the literature. Qualitative methods are used to provide a “thick description” or depth of understanding to complement breadth of understanding afforded by quantitative methods, elicit the perspective of those being studied, explore issues that have not been well studied, develop conceptual theories or test hypotheses, or evaluate the process of a phenomenon or intervention. Qualitative methods adhere to many of the same principles of scientific rigor as quantitative methods, but often differ with respect to study design, data collection and data analysis strategies. For instance, participants for qualitative studies are usually sampled purposefully rather than at random and the design usually reflects an iterative process alternating between data collection and analysis. The most common techniques for data collection are individual semi-structured interviews, focus groups, document reviews, and participant observation. Strategies for analysis are usually inductive, based on principles of grounded theory or phenomenology. Qualitative methods are also used in combination with quantitative methods in mixed method designs for convergence, complementarity, expansion, development, and sampling. Rigorously applied qualitative methods offer great potential in contributing to the scientific foundation of mental health services research. PMID:25350675

  20. Qualitative and mixed methods in mental health services and implementation research.

    PubMed

    Palinkas, Lawrence A

    2014-01-01

    Qualitative and mixed methods play a prominent role in mental health services research. However, the standards for their use are not always evident, especially for those not trained in such methods. This article reviews the rationale and common approaches to using qualitative and mixed methods in mental health services and implementation research based on a review of the articles included in this special series along with representative examples from the literature. Qualitative methods are used to provide a "thick description" or depth of understanding to complement breadth of understanding afforded by quantitative methods, elicit the perspective of those being studied, explore issues that have not been well studied, develop conceptual theories or test hypotheses, or evaluate the process of a phenomenon or intervention. Qualitative methods adhere to many of the same principles of scientific rigor as quantitative methods but often differ with respect to study design, data collection, and data analysis strategies. For instance, participants for qualitative studies are usually sampled purposefully rather than at random and the design usually reflects an iterative process alternating between data collection and analysis. The most common techniques for data collection are individual semistructured interviews, focus groups, document reviews, and participant observation. Strategies for analysis are usually inductive, based on principles of grounded theory or phenomenology. Qualitative methods are also used in combination with quantitative methods in mixed-method designs for convergence, complementarity, expansion, development, and sampling. Rigorously applied qualitative methods offer great potential in contributing to the scientific foundation of mental health services research.

  1. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.

  2. Human ergology that promotes participatory approach to improving safety, health and working conditions at grassroots workplaces: achievements and actions.

    PubMed

    Kawakami, Tsuyoshi

    2011-12-01

    Participatory approaches are increasingly applied to improve safety, health and working conditions in grassroots workplaces in Asia. The core concepts and methods of human ergology research, such as promoting real work-life studies, relying on the positive efforts of local people (daily life-technology), promoting the active participation of local people to identify practical solutions, and learning from local human networks to reach grassroots workplaces, have provided useful viewpoints for devising such participatory training programmes. This study aimed to analyze how human ergology approaches were applied in the actual development and application of three typical participatory training programmes: WISH (Work Improvement for Safe Home) with home workers in Cambodia, WISCON (Work Improvement in Small Construction Sites) with construction workers in Thailand, and WARM (Work Adjustment for Recycling and Managing Waste) with waste collectors in Fiji. The results revealed that all three programmes, in the course of their development, commonly applied direct observation of the work of the target workers before devising the training programmes, learned from existing local good examples and efforts, and emphasized local human networks for cooperation. These methods and approaches were repeatedly applied in grassroots workplaces, taking advantage of their sustainability and impact. It was concluded that human ergology approaches contributed greatly to the development and expansion of participatory training programmes and could continue to support the self-help initiatives of local people in promoting human-centred work.

  3. Morphometry-based impedance boundary conditions for patient-specific modeling of blood flow in pulmonary arteries.

    PubMed

    Spilker, Ryan L; Feinstein, Jeffrey A; Parker, David W; Reddy, V Mohan; Taylor, Charles A

    2007-04-01

    Patient-specific computational models could aid in planning interventions to relieve pulmonary arterial stenoses common in many forms of congenital heart disease. We describe a new approach to simulate blood flow in subject-specific models of the pulmonary arteries that consists of a numerical model of the proximal pulmonary arteries created from three-dimensional medical imaging data with terminal impedance boundary conditions derived from linear wave propagation theory applied to morphometric models of distal vessels. A tuning method, employing numerical solution methods for nonlinear systems of equations, was developed to modify the distal vasculature to match measured pressure and flow distribution data. One-dimensional blood flow equations were solved with a finite element method in image-based pulmonary arterial models using prescribed inlet flow and morphometry-based impedance at the outlets. Application of these methods in a pilot study of the effect of removal of unilateral pulmonary arterial stenosis induced in a pig showed good agreement with experimental measurements for flow redistribution and main pulmonary arterial pressure. Next, these methods were applied to a patient with repaired tetralogy of Fallot and predicted insignificant hemodynamic improvement with relief of the stenosis. This method of coupling image-based and morphometry-based models could enable increased fidelity in pulmonary hemodynamic simulation.

  4. Introduction to the JEEG Agricultural Geophysics Special Issue

    USGS Publications Warehouse

    Allred, Barry J.; Smith, Bruce D.

    2010-01-01

    Near-surface geophysical methods have become increasingly important tools in applied agricultural practices and studies. The great advantage of geophysical methods is their potential rapidity, low cost, and spatial continuity when compared to more traditional methods of assessing agricultural land, such as sample collection and laboratory analysis. Agricultural geophysics investigations commonly focus on obtaining information within the soil profile, which generally does not extend much beyond 2 meters beneath the ground surface. Although the depth of interest oftentimes is rather shallow, the area covered by an agricultural geophysics survey can vary widely in scale, from experimental plots (10s to 100s of square meters), to farm fields (10s to 100s of hectares), up to the size of watersheds (10s to 100s of square kilometers). To date, three predominant methods—resistivity, electromagnetic induction (EMI), and ground-penetrating radar (GPR)—have been used to obtain surface-based geophysical measurements within agricultural settings. However, a recent conference on agricultural geophysics (Bouyoucos Conference on Agricultural Geophysics, September 8–10, 2009, Albuquerque, New Mexico; www.ag-geophysics.org) illustrated that other geophysical methods are being applied or developed. These include airborne electromagnetic induction, magnetometry, seismic, and self-potential methods. Agricultural geophysical studies are also being linked to ground water studies that utilize deeper penetrating geophysical methods than normally used.

  5. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the past decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, and supervised image classification techniques play a central role in it. Using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, namely Bagged CART, stochastic gradient boosting, and neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation, and validation with the full training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and the Tukey test. In general, the results showed that random forest, with a marginal difference over Bagged CART and stochastic gradient boosting, was the best performing method, whereas based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  6. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
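
    As a rough illustration of one of the interval-estimation approaches mentioned above, the sketch below fits a three-parameter Hill curve to a hypothetical concentration-response set and derives percentile bootstrap intervals. The data, parameter values and settings are illustrative assumptions, not the ToxCast pipeline or the authors' code.

      # Minimal sketch (not the authors' code): bootstrap percentile intervals
      # for Hill model parameters fitted to hypothetical concentration-response data.
      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, top, ac50, n):
          """Three-parameter Hill curve: response as a function of concentration c."""
          return top / (1.0 + (ac50 / c) ** n)

      rng = np.random.default_rng(0)
      conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)   # hypothetical doses
      resp = hill(conc, top=100, ac50=5, n=1.2) + rng.normal(0, 5, conc.size)

      p0 = (resp.max(), np.median(conc), 1.0)
      phat, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)

      boot = []
      for _ in range(2000):                      # resample cases with replacement
          idx = rng.integers(0, conc.size, conc.size)
          try:
              pb, _ = curve_fit(hill, conc[idx], resp[idx], p0=phat, maxfev=10000)
              boot.append(pb)
          except RuntimeError:                   # skip non-converged resamples
              continue
      boot = np.array(boot)
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      for name, est, l, h in zip(("top", "AC50", "n"), phat, lo, hi):
          print(f"{name}: {est:.2f}  95% bootstrap CI [{l:.2f}, {h:.2f}]")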

  7. Comparing and improving reconstruction methods for proxies based on compositional data

    NASA Astrophysics Data System (ADS)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The reconstructions from different methods diverge in both their means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  8. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and teeth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: least squares inverted fit and transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, calculating temperatures in reverse, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares regression (LSM) model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. The direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
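
    A minimal sketch of the contrast between the two regression directions, using synthetic data rather than the authors' δ18O datasets; the proxy relation, slope and noise level below are assumptions for illustration only.

      # Minimal sketch (synthetic data, not the authors' datasets): compare an
      # "inverted" calibration (fit d18O on T, then solve for T) with a
      # "transposed"/predictive calibration (fit T directly on d18O).
      import numpy as np

      rng = np.random.default_rng(1)
      t_true = rng.uniform(-5, 25, 200)                      # air temperature, deg C
      d18o = -13.0 + 0.58 * t_true + rng.normal(0, 1.0, 200) # hypothetical proxy relation

      # Causal model: d18O = a + b*T, inverted afterwards to predict T
      b, a = np.polyfit(t_true, d18o, 1)
      t_inverted = (d18o - a) / b

      # Predictive model: T = c + d*d18O, used directly
      d, c = np.polyfit(d18o, t_true, 1)
      t_transposed = c + d * d18o

      print("MAE inverted fit:  ", np.mean(np.abs(t_inverted - t_true)))
      print("MAE transposed fit:", np.mean(np.abs(t_transposed - t_true)))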

  9. Responses to applied forces and the Jarzynski equality in classical oscillator systems coupled to finite baths: an exactly solvable nondissipative nonergodic model.

    PubMed

    Hasegawa, Hideo

    2011-07-01

    Responses of small open oscillator systems to applied external forces have been studied with the use of an exactly solvable classical Caldeira-Leggett model in which a harmonic oscillator (system) is coupled to finite N-body oscillators (bath) with an identical frequency (ω(n) = ω(o) for n = 1 to N). We have derived exact expressions for positions, momenta, and energy of the system in nonequilibrium states and for work performed by applied forces. A detailed study has been made on an analytical method for canonical averages of physical quantities over the initial equilibrium state, which is much superior to numerical averages commonly adopted in simulations of small systems. The calculated energy of the system which is strongly coupled to a finite bath is fluctuating but nondissipative. It has been shown that the Jarzynski equality is valid in nondissipative nonergodic open oscillator systems regardless of the rate of applied ramp force.

  10. Multisubject Learning for Common Spatial Patterns in Motor-Imagery BCI

    PubMed Central

    Devlaminck, Dieter; Wyns, Bart; Grosse-Wentrup, Moritz; Otte, Georges; Santens, Patrick

    2011-01-01

    Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern filter (CSP) as preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time consuming to collect. In order to reduce the amount of calibration data that is needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on its own data and that of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low. PMID:22007194
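
    For orientation, the sketch below computes ordinary single-subject CSP filters via a generalized eigenproblem on synthetic trials; it does not reproduce the multisubject extension proposed in the paper, and all array shapes, filter counts and data are assumptions.

      # Minimal sketch of ordinary (single-subject) CSP filters, the preprocessing
      # step the paper extends to multiple subjects.  Synthetic data, hypothetical shapes.
      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_filters=6):
          """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
          def mean_cov(trials):
              covs = []
              for x in trials:
                  c = x @ x.T
                  covs.append(c / np.trace(c))       # normalise per-trial covariance
              return np.mean(covs, axis=0)

          ca, cb = mean_cov(trials_a), mean_cov(trials_b)
          # Generalised eigenproblem: ca w = lambda (ca + cb) w
          vals, vecs = eigh(ca, ca + cb)
          order = np.argsort(vals)                   # extreme eigenvalues discriminate the classes
          pick = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
          return vecs[:, pick].T                     # (n_filters, n_channels)

      rng = np.random.default_rng(2)
      trials_a = rng.normal(size=(30, 16, 250))
      trials_b = rng.normal(size=(30, 16, 250))
      W = csp_filters(trials_a, trials_b)
      features = np.log(np.var(W @ trials_a[0], axis=1))  # typical log-variance features
      print(W.shape, features.shape)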

  11. RF kicker cavity to increase control in common transport lines

    DOEpatents

    Douglas, David R.; Ament, Lucas J. P.

    2017-04-18

    A method of controlling e-beam transport in which electron bunches with different characteristics travel through the same beam pipe. An RF kicker cavity is added at the beginning of the common transport pipe or at various locations along the common transport path to achieve independent control of different bunch types. RF energy applied by the kicker cavity kicks some portion of the electron bunches, separating the bunches in phase space to allow independent control via optics, or separating bunches into different beam pipes. The RF kicker cavity is operated at a specific frequency to enable kicking of different types of bunches in different directions. The phase of the cavity is set such that the selected type of bunch passes through the cavity when the RF field is at a node, leaving that type of bunch unaffected. Beam optics may be added downstream of the kicker cavity to cause a further separation in phase space.

  12. [Identification of common medicinal snakes in medicated liquor of Guangdong by COI barcode sequence].

    PubMed

    Liao, Jing; Chao, Zhi; Zhang, Liang

    2013-11-01

    To identify the common snakes in medicated liquor of Guangdong using the COI barcode sequence, and to test the feasibility of this approach. The COI barcode sequences of collected medicinal snakes were amplified and sequenced. The sequences, combined with data from GenBank, were analyzed for divergence and used to build a neighbor-joining (NJ) tree with MEGA 5.0. The analyses demonstrated that there were 241 variable sites in these species, and that the average (A + T) content of 56.2% was higher than the average (G + C) content of 43.7%. The maximum interspecific genetic distance was 0.2568, and the minimum was 0.1519. In the NJ tree, each species formed a monophyletic clade with bootstrap supports of 100%. The DNA barcoding identification method based on the COI sequence is accurate and can be applied to identify the common medicinal snakes.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timonen, Hilkka; Cubison, Mike; Aurela, Minna

    The applicability, methods and limitations of constrained peak fitting on mass spectra of low mass resolving power (m/Δm ≈ 50-500) recorded with a time-of-flight aerosol chemical speciation monitor (ToF-ACSM) are explored. Calibration measurements as well as ambient data are used to exemplify the methods that should be applied to maximise data quality and assess confidence in peak-fitting results. Sensitivity analyses and basic peak fit metrics such as normalised ion separation are employed to demonstrate which peak-fitting analyses commonly performed in high-resolution aerosol mass spectrometry are appropriate to perform on spectra of this resolving power. Information on aerosol sulfate, nitrate, sodium chloride, methanesulfonic acid as well as semi-volatile metal species retrieved from these methods is evaluated. The constants in a commonly used formula for the estimation of the mass concentration of hydrocarbon-like organic aerosol may be refined based on peak-fitting results. Lastly, application of a recently published parameterisation for the estimation of carbon oxidation state to ToF-ACSM spectra is validated for a range of organic standards and its use demonstrated for ambient urban data.

  14. A common and optimized age scale for Antarctic ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, F.; Veres, D.; Landais, A.; Bazin, L.; Lemieux-Dudon, B.; Toye Mahamadou Kele, H.; Wolff, E.; Martinerie, P.

    2012-04-01

    Dating ice cores is a complex problem because 1) there is an age shift between the gas bubbles and the surrounding ice, 2) there are many different ice cores which can be synchronized with various proxies, and 3) there are many methods to date the ice and the gas bubbles, each with advantages and drawbacks. These methods fall into the following categories: 1) Ice flow (for the ice) and firn densification modelling (for the gas bubbles); 2) Comparison of ice core proxies with insolation variations (so-called orbital tuning methods); 3) Comparison of ice core proxies with other well dated archives; 4) Identification of well-dated horizons, such as tephra layers or geomagnetic anomalies. Recently, a new dating tool has been developed (DATICE; Lemieux-Dudon et al., 2010) to take all the different dating information into account and produce a common and optimal chronology for ice cores with estimated confidence intervals. In this talk we will review the different dating information for Antarctic ice cores and show how the DATICE tool can be applied.

  15. Qualitative Research in Emergency Care Part I: Research Principles and Common Applications.

    PubMed

    Choo, Esther K; Garro, Aris C; Ranney, Megan L; Meisel, Zachary F; Morrow Guthrie, Kate

    2015-09-01

    Qualitative methods are increasingly being used in emergency care research. Rigorous qualitative methods can play a critical role in advancing the emergency care research agenda by allowing investigators to generate hypotheses, gain an in-depth understanding of health problems or specific populations, create expert consensus, and develop new intervention and dissemination strategies. This article, Part I of a two-article series, provides an introduction to general principles of applied qualitative health research and examples of its common use in emergency care research, describing study designs and data collection methods most relevant to our field, including observation, individual interviews, and focus groups. In Part II of this series, we will outline the specific steps necessary to conduct a valid and reliable qualitative research project, with a focus on interview-based studies. These elements include building the research team, preparing data collection guides, defining and obtaining an adequate sample, collecting and organizing qualitative data, and coding and analyzing the data. We also discuss potential ethical considerations unique to qualitative research as it relates to emergency care research. © 2015 by the Society for Academic Emergency Medicine.

  16. Signal Detection and Monitoring Based on Longitudinal Healthcare Data

    PubMed Central

    Suling, Marc; Pigeot, Iris

    2012-01-01

    Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentistic methods like the proportional reporting rate or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald’s sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced. PMID:24300373
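
    As a small worked example of the simplest frequentist signal-detection measures named above, the sketch below computes the proportional reporting ratio (PRR) and the reporting odds ratio (ROR) from a hypothetical 2x2 table of spontaneous reports; the counts are made up for illustration.

      # Minimal sketch: PRR and ROR from a 2x2 table of spontaneous reports
      # (hypothetical counts, not real pharmacovigilance data).
      import math

      a = 25     # reports: drug of interest AND event of interest
      b = 475    # drug of interest, other events
      c = 300    # other drugs, event of interest
      d = 99200  # other drugs, other events

      prr = (a / (a + b)) / (c / (c + d))
      ror = (a * d) / (b * c)

      # Approximate 95% CI for the ROR on the log scale
      se_log_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)
      ci = (math.exp(math.log(ror) - 1.96 * se_log_ror),
            math.exp(math.log(ror) + 1.96 * se_log_ror))
      print(f"PRR = {prr:.2f}, ROR = {ror:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")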

  17. Internal consistency of the self-reporting questionnaire-20 in occupational groups

    PubMed Central

    Santos, Kionna Oliveira Bernardes; Carvalho, Fernando Martins; de Araújo, Tânia Maria

    2016-01-01

    ABSTRACT OBJECTIVE To assess the internal consistency of the measurements of the Self-Reporting Questionnaire (SRQ-20) in different occupational groups. METHODS A validation study was conducted with data from four surveys with groups of workers, using similar methods. A total of 9,959 workers were studied. In all surveys, the common mental disorders were assessed via the SRQ-20. The internal consistency considered the items belonging to dimensions extracted by tetrachoric factor analysis for each study. Item homogeneity assessment compared estimates of Cronbach’s alpha (KR-20), the alpha applied to a tetrachoric correlation matrix and stratified Cronbach’s alpha. RESULTS The SRQ-20 dimensions showed adequate values, considering the reference parameters. The internal consistency of the instrument items, assessed by stratified Cronbach’s alpha, was high (> 0.80) in the four studies. CONCLUSIONS The SRQ-20 showed good internal consistency in the professional categories evaluated. However, there is still a need for studies using alternative methods and additional information able to refine the accuracy of latent variable measurement instruments, as in the case of common mental disorders.
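
    For readers unfamiliar with the statistic, the sketch below computes ordinary Cronbach's alpha on a simulated respondents-by-items matrix of dichotomous answers (where it coincides with KR-20). The sample size and data-generating model are illustrative assumptions, not the surveys analyzed in the study.

      # Minimal sketch: Cronbach's alpha for a respondents-by-items matrix of
      # 0/1 answers (for dichotomous items this equals KR-20).  Hypothetical data.
      import numpy as np

      def cronbach_alpha(items):
          """items: array of shape (n_respondents, n_items)."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(500, 1))                      # shared trait
      answers = (latent + rng.normal(size=(500, 20)) > 0).astype(int)
      print(f"alpha = {cronbach_alpha(answers):.3f}")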

  18. Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.

    PubMed

    Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai

    2015-12-01

    The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance to identify similarities of protein structures and infer their functions. Many properties of a protein correspond to the low-frequency signals within the sequence. Low frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method on protein comparison is more efficient than commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant feature for specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis on protein sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.
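
    As a rough sketch of the DFT-based, RRM-style comparison that the paper benchmarks against, the code below encodes two short sequences numerically, forms their power spectra, and multiplies them to locate shared low-frequency peaks. The numeric mapping is a placeholder assumption (the RRM uses published EIIP values per amino acid), and the sequences are made up.

      # Minimal sketch of a DFT-based, RRM-style comparison: encode two sequences
      # numerically, take power spectra, and multiply them to find shared frequencies.
      # The numeric mapping below is a placeholder, not the real EIIP table.
      import numpy as np

      encoding = {aa: i / 20.0 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}  # placeholder

      def spectrum(seq, length):
          x = np.array([encoding[a] for a in seq], dtype=float)
          x = x - x.mean()                           # remove the DC component
          return np.abs(np.fft.rfft(x, n=length)) ** 2

      seq1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"        # made-up sequences
      seq2 = "MKSAYIAKQRQISFVKSHFARQLEERLGLIEVQ"
      n = max(len(seq1), len(seq2))
      cross = spectrum(seq1, n) * spectrum(seq2, n)  # consensus spectrum
      peak = np.argmax(cross[1:]) + 1                # ignore the zero-frequency bin
      print(f"strongest common frequency bin: {peak} of {len(cross) - 1}")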

  19. Novel vehicle detection system based on stacked DoG kernel and AdaBoost

    PubMed Central

    Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun

    2018-01-01

    This paper proposes a novel vehicle detection system that can overcome some limitations of typical vehicle detection systems using AdaBoost-based methods. The performance of the AdaBoost-based vehicle detection system is dependent on its training data. Thus, its performance decreases when the shape of a target differs from its training data, or the pattern of a preceding vehicle is not visible in the image due to the light conditions. A stacked Difference of Gaussian (DoG)-based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles under various conditions, such as the shadow and rear wheels beneath them. The common characteristics of vehicles are extracted by applying the stacked DoG shaped kernel obtained from the 3D plot of an image through a convolution method and investigating only certain regions that have a similar pattern. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments are provided to demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
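
    A minimal sketch of the basic building block only: a 2-D difference-of-Gaussians kernel convolved with an image. The stacked-kernel construction and the AdaBoost stage of the paper are not reproduced, and the kernel sizes and image are assumptions.

      # Minimal sketch: build a 2-D difference-of-Gaussians (DoG) kernel and
      # convolve it with an image to emphasise blob/shadow-like structures.
      import numpy as np
      from scipy.signal import convolve2d

      def gaussian_kernel(size, sigma):
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
          return g / g.sum()

      def dog_kernel(size=21, sigma1=2.0, sigma2=4.0):
          return gaussian_kernel(size, sigma1) - gaussian_kernel(size, sigma2)

      rng = np.random.default_rng(4)
      image = rng.random((120, 160))                 # stand-in for a road-scene frame
      response = convolve2d(image, dog_kernel(), mode="same", boundary="symm")
      print(response.shape, response.min(), response.max())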

  20. Investigating Correlation between Protein Sequence Similarity and Semantic Similarity Using Gene Ontology Annotations.

    PubMed

    Ikram, Najmul; Qadir, Muhammad Abdul; Afzal, Muhammad Tanvir

    2018-01-01

    Sequence similarity is a commonly used measure to compare proteins. With the increasing use of ontologies, semantic (function) similarity is gaining importance. The correlation between these measures has been applied in the evaluation of new semantic similarity methods, and in protein function prediction. In this research, we investigate the relationship between the two similarity methods. The results suggest the absence of a strong correlation between sequence and semantic similarities. There is a large number of proteins with low sequence similarity and high semantic similarity. We observe that Pearson's correlation coefficient is not sufficient to explain the nature of this relationship. Interestingly, term semantic similarity values above 0 and below 1 do not seem to play a role in improving the correlation. That is, the correlation coefficient depends only on the number of common GO terms in the proteins under comparison, and the semantic similarity measurement method does not influence it. Semantic similarity and sequence similarity exhibit distinct behaviors. These findings have significant implications for future work on protein comparison and will help in better understanding the semantic similarity between proteins.

  1. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  2. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-06-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  3. Similarity of markers identified from cancer gene expression studies: observations from GEO.

    PubMed

    Shi, Xingjie; Shen, Shihao; Liu, Jin; Huang, Jian; Zhou, Yong; Ma, Shuangge

    2014-09-01

    Gene expression profiling has been extensively conducted in cancer research. The analysis of multiple independent cancer gene expression datasets may provide additional information and complement single-dataset analysis. In this study, we conduct multi-dataset analysis and are interested in evaluating the similarity of cancer-associated genes identified from different datasets. The first objective of this study is to briefly review some statistical methods that can be used for such evaluation. Both marginal analysis and joint analysis methods are reviewed. The second objective is to apply those methods to 26 Gene Expression Omnibus (GEO) datasets on five types of cancers. Our analysis suggests that for the same cancer, the marker identification results may vary significantly across datasets, and different datasets share few common genes. In addition, datasets on different cancers share few common genes. The shared genetic basis of datasets on the same or different cancers, which has been suggested in the literature, is not observed in the analysis of GEO data. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
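
    As a small illustration of the kind of overlap comparison described above, the sketch below computes pairwise shared-gene counts and Jaccard indices for hypothetical marker lists; the dataset names and genes are made up.

      # Minimal sketch: quantify how many identified marker genes two datasets
      # share, using intersection counts and the Jaccard index (hypothetical lists).
      from itertools import combinations

      markers = {                                    # hypothetical per-dataset marker sets
          "GSE_A": {"TP53", "BRCA1", "EGFR", "MYC"},
          "GSE_B": {"EGFR", "KRAS", "MYC", "PTEN"},
          "GSE_C": {"CDK4", "RB1", "PTEN"},
      }

      for (name1, set1), (name2, set2) in combinations(markers.items(), 2):
          shared = set1 & set2
          jaccard = len(shared) / len(set1 | set2)
          print(f"{name1} vs {name2}: {len(shared)} shared genes, Jaccard = {jaccard:.2f}")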

  4. Qualitative Research in Emergency Care Part I: Research Principles and Common Applications

    PubMed Central

    Choo, Esther K.; Garro, Aris; Ranney, Megan L.; Meisel, Zachary; Guthrie, Kate Morrow

    2015-01-01

    Qualitative methods are increasingly being used in emergency care research. Rigorous qualitative methods can play a critical role in advancing the emergency care research agenda by allowing investigators to generate hypotheses, gain an in-depth understanding of health problems or specific populations, create expert consensus, and develop new intervention and dissemination strategies. This article, Part I of a two-article series, provides an introduction to general principles of applied qualitative health research and examples of its common use in emergency care research, describing study designs and data collection methods most relevant to our field, including observation, individual interviews, and focus groups. In Part II of this series, we will outline the specific steps necessary to conduct a valid and reliable qualitative research project, with a focus on interview-based studies. These elements include building the research team, preparing data collection guides, defining and obtaining an adequate sample, collecting and organizing qualitative data, and coding and analyzing the data. We also discuss potential ethical considerations unique to qualitative research as it relates to emergency care research. PMID:26284696

  5. 32 CFR Appendix D to Part 37 - What Common National Policy Requirements May Apply and Need To Be Included in TIAs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Apply and Need To Be Included in TIAs? D Appendix D to Part 37 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pt. 37, App. D Appendix D to Part 37—What Common National Policy Requirements May Apply and Need To...

  6. Common Structure in Different Physical Properties: Electrical Conductivity and Surface Waves Phase Velocity

    NASA Astrophysics Data System (ADS)

    Mandolesi, E.; Jones, A. G.; Roux, E.; Lebedev, S.

    2009-12-01

    Recently, different studies have been undertaken on the correlation between diverse geophysical datasets. Magnetotelluric (MT) data are used to map the electrical conductivity structure beneath the Earth's surface, but one of the problems of the MT method is its lack of resolution when mapping zones beneath a region of high conductivity. Joint inversion of different datasets in which a common structure is recognizable reduces non-uniqueness and may improve the quality of interpretation when the different datasets are sensitive to different physical properties that share an underlying common structure. A common structure is recognized if the changes of physical properties occur at the same spatial locations. Common structure may be recognized in 1D inversion of seismic and MT datasets, and numerous authors have shown that a 2D common structure may also lead to an improvement of inversion quality when the datasets are jointly inverted. In this presentation, a tool to constrain 2D MT inversion with phase velocities of surface wave seismic data (SW) is proposed; it is being developed and tested on synthetic data. The results obtained suggest that a joint inversion scheme could be applied successfully along a profile for which the data are compatible with a 2D MT model.

  7. Predicting missing links via correlation between nodes

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhang, Yi-Cheng

    2015-10-01

    As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
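
    A minimal sketch contrasting the common-neighbour score with a Pearson-correlation score on a small hypothetical graph; it is not the authors' fused method, and the edge list is an assumption for illustration.

      # Minimal sketch: common-neighbour scores versus Pearson-correlation scores
      # for unconnected node pairs of a small undirected graph (hypothetical edges).
      import numpy as np

      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
      n = 6
      A = np.zeros((n, n))
      for i, j in edges:
          A[i, j] = A[j, i] = 1

      cn = A @ A                                    # (i, j) entry = number of common neighbours
      corr = np.corrcoef(A)                         # Pearson correlation between neighbourhood rows

      for i in range(n):
          for j in range(i + 1, n):
              if A[i, j] == 0:                      # score only currently missing links
                  print(f"pair ({i},{j}): common neighbours = {int(cn[i, j])}, "
                        f"correlation = {corr[i, j]:+.2f}")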

  8. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  9. Estimating intervention effects of prevention programs: Accounting for noncompliance

    PubMed Central

    Stuart, Elizabeth A.; Perry, Deborah F.; Le, Huynh-Nhu; Ialongo, Nicholas S.

    2010-01-01

    Individuals not fully complying with their assigned treatments is a common problem encountered in randomized evaluations of behavioral interventions. Treatment group members rarely attend all sessions or do all “required” activities; control group members sometimes find ways to participate in aspects of the intervention. As a result, there is often interest in estimating both the effect of being assigned to participate in the intervention, as well as the impact of actually participating and doing all of the required activities. Methods known broadly as “complier average causal effects” (CACE) or “instrumental variables” (IV) methods have been developed to estimate this latter effect, but they are more commonly applied in medical and treatment research. Since the use of these statistical techniques in prevention trials has been less widespread, many prevention scientists may not be familiar with the underlying assumptions and limitations of CACE and IV approaches. This paper provides an introduction to these methods, described in the context of randomized controlled trials of two preventive interventions: one for perinatal depression among at-risk women and the other for aggressive disruptive behavior in children. Through these case studies, the underlying assumptions and limitations of these methods are highlighted. PMID:18843535
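
    For concreteness, the sketch below computes the intention-to-treat effect and the simple Wald/IV estimate of the CACE under one-sided noncompliance, using synthetic data rather than the trials described in the paper; the compliance rate and effect size are assumptions.

      # Minimal sketch: intention-to-treat (ITT) effect and the Wald/IV estimate of
      # the complier average causal effect (CACE), assuming one-sided noncompliance.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 5000
      assigned = rng.integers(0, 2, n)                  # randomised assignment
      complier = rng.random(n) < 0.6                    # 60% would attend if assigned
      took_treatment = assigned & complier              # controls cannot access the programme

      true_effect = 2.0                                 # effect among compliers only
      outcome = 10 + true_effect * took_treatment + rng.normal(0, 3, n)

      itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
      compliance_rate = took_treatment[assigned == 1].mean()
      cace = itt / compliance_rate                      # Wald / IV estimator
      print(f"ITT = {itt:.2f}, compliance = {compliance_rate:.2f}, CACE = {cace:.2f}")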

  10. A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.

    PubMed

    Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping

    2017-01-30

    Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State of the art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations for samples. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach which we call unambiguous components. We use this to estimate the image component with a constrained variability, which is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Common Characteristics of Improvisational Approaches in Music Therapy for Children with Autism Spectrum Disorder: Developing Treatment Guidelines.

    PubMed

    Geretsegger, Monika; Holck, Ulla; Carpente, John A; Elefant, Cochavit; Kim, Jinah; Gold, Christian

    2015-01-01

    Improvisational methods of music therapy have been increasingly applied in the treatment of individuals with autism spectrum disorder (ASD) over the past decades in many countries worldwide. This study aimed at developing treatment guidelines based on the most important common characteristics of improvisational music therapy (IMT) with children affected by ASD as applied across various countries and theoretical backgrounds. After initial development of treatment principle items, a survey among music therapy professionals in 10 countries and focus group workshops with experienced clinicians in three countries were conducted to evaluate the items and formulate revised treatment guidelines. To check usability, a treatment fidelity assessment tool was subsequently used to rate therapy excerpts. Survey findings and feedback from the focus groups corroborated most of the initial principles for IMT in the context of children with ASD. Unique and essential principles include facilitating musical and emotional attunement, musically scaffolding the flow of interaction, and tapping into the shared history of musical interaction between child and therapist. Raters successfully used the tool to evaluate treatment adherence and competence. Summarizing an international consensus about core principles of improvisational approaches in music therapy for children with ASD, these treatment guidelines may be applied in diverse theoretical models of music therapy. They can be used to assess treatment fidelity, and may be applied to facilitate future research, clinical practice, and training. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Applying Common Core Standards to Students with Disabilities in Music

    ERIC Educational Resources Information Center

    Darrow, Alice-Ann

    2014-01-01

    The following article includes general information on the Common Core State Standards, how the standards apply to the music and academic education of students with disabilities, and web resources that will be helpful to music educators teaching students with and without disabilities.

  13. Two Paradoxes in Linear Regression Analysis

    PubMed Central

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  14. Native sulfur/chlorine SAD phasing for serial femtosecond crystallography.

    PubMed

    Nakane, Takanori; Song, Changyong; Suzuki, Mamoru; Nango, Eriko; Kobayashi, Jun; Masuda, Tetsuya; Inoue, Shigeyuki; Mizohata, Eiichi; Nakatsu, Toru; Tanaka, Tomoyuki; Tanaka, Rie; Shimamura, Tatsuro; Tono, Kensuke; Joti, Yasumasa; Kameshima, Takashi; Hatsui, Takaki; Yabashi, Makina; Nureki, Osamu; Iwata, So; Sugahara, Michihiro

    2015-12-01

    Serial femtosecond crystallography (SFX) allows structures to be determined with minimal radiation damage. However, phasing native crystals in SFX is not very common. Here, the structure determination of native lysozyme from single-wavelength anomalous diffraction (SAD) by utilizing the anomalous signal of sulfur and chlorine at a wavelength of 1.77 Å is successfully demonstrated. This sulfur SAD method can be applied to a wide range of proteins, which will improve the determination of native crystal structures.

  15. Molecular identification of Gd A- and Gd B- G6PD deficient variants by ARMS-PCR in a Tunisian population.

    PubMed

    Haloui, Sabrine; Laouini, Naouel; Sahli, Chaima Abdelhafidh; Daboubi, Rim; Becher, Mariem; Jouini, Latifa; Kazdaghli, Kalthoum; Tinsa, Faten; Cherif, Semia; Khemiri, Monia; Fredj, Sondess Hadj; Othmani, Rim; Ouali, Faida; Siala, Hajer; Toumi, Nour El Houda; Barsaoui, Sihem; Bibi, Amina; Messaoud, Taieb

    2016-01-01

    Glucose-6-phosphate dehydrogenase (G6PD) deficiency is the most common enzymopathy. More than 200 mutations in the G6PD gene have been described. In Tunisia, the A-African and the B-Mediterranean mutations predominate in the mutational spectrum. The purpose of this study was to apply the amplification refractory mutation system (ARMS-PCR) to the identification of Gd A+, Gd A- and Gd B- variants in a cohort of deficient individuals and to establish a phenotype/genotype association. 90 subjects were screened for enzymatic deficiency by spectrophotometric assay. The molecular analyses were performed in a group of 50 unrelated patients. Of the 54 altered chromosomes examined, 60% had the Gd A- mutation, 18% showed the Gd B- mutation, and in 20% of cases no mutations were identified. The ARMS-PCR showed complete concordance with the endonuclease cleavage reference method and agreed perfectly with previous Tunisian studies in which Gd A- and Gd B- were the most frequently encountered. Also, similarities in mutation spectra with North African and Mediterranean countries suggest gene migration from Africa to Europe through Spain. In conclusion, ARMS has been introduced in this study for the identification of common G6PD alleles in Tunisia. It gives some advantages compared to the traditional endonuclease digestion method since it is more convenient and time-saving and also offers the possibility to be applied in mass screening surveys.

  16. SegAuth: A Segment-based Approach to Behavioral Biometric Authentication

    PubMed Central

    Li, Yanyan; Xie, Mengjun; Bian, Jiang

    2016-01-01

    Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective—behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user’s distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user’s authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets. PMID:28573214
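
    A rough sketch of the segment-and-tokenize idea on a one-dimensional stand-in gesture signal: fixed-length segments are quantized by mean level and slope into string tokens. This is a generic discretization chosen for illustration, not the authors' exact tokenization or genuine-score computation.

      # Minimal sketch: cut a 1-D gesture signal into fixed-length segments and map
      # each segment to a string token by quantising its mean and slope.
      import numpy as np

      def tokenize(signal, seg_len=20, bins=(-0.5, 0.5)):
          tokens = []
          for start in range(0, len(signal) - seg_len + 1, seg_len):
              seg = signal[start:start + seg_len]
              seg = (seg - signal.mean()) / (signal.std() + 1e-9)   # z-normalise
              level = np.digitize(seg.mean(), bins)                 # 0, 1 or 2
              slope = np.digitize(np.polyfit(np.arange(seg_len), seg, 1)[0], bins)
              tokens.append(f"L{level}S{slope}")
          return tokens

      rng = np.random.default_rng(6)
      t = np.linspace(0, 4 * np.pi, 200)
      gesture = np.sin(t) + 0.1 * rng.normal(size=t.size)           # stand-in gesture trace
      print(tokenize(gesture))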

  17. Lesion size estimator of cardiac radiofrequency ablation at different common locations with different tip temperatures.

    PubMed

    Lai, Yu-Chi; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G

    2004-10-01

    Finite element method (FEM) analysis has become a common method to analyze the lesion formation during temperature-controlled radiofrequency (RF) cardiac ablation. We present a process of FEM modeling a system including blood, myocardium, and an ablation catheter with a thermistor embedded at the tip. The simulation used a simple proportional-integral (PI) controller to control the entire process operated in temperature-controlled mode. Several factors affect the lesion size such as target temperature, blood flow rate, and application time. We simulated the time response of RF ablation at different locations by using different target temperatures. The applied sites were divided into two groups each with a different convective heat transfer coefficient. The first group was high-flow such as the atrioventricular (AV) node and the atrial aspect of the AV annulus, and the other was low-flow such as beneath the valve or inside the coronary sinus. Results showed the change of lesion depth and lesion width with time, under different conditions. We collected data for all conditions and used it to create a database. We implemented a user-interface, the lesion size estimator, where the user enters set temperature and location. Based on the database, the software estimated lesion dimensions during different applied durations. This software could be used as a first-step predictor to help the electrophysiologist choose treatment parameters.
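
    As a stand-in for the temperature-controlled mode described above, the sketch below runs a PI controller against a crude first-order thermal model instead of the FEM model; the gains, time constant, plant gain and power limit are assumptions for illustration.

      # Minimal sketch: a PI controller holding a tip temperature at a set point,
      # with a simple first-order thermal model standing in for the FEM tissue model.
      import numpy as np

      set_temp = 65.0          # deg C target
      temp = 37.0              # start at body temperature
      kp, ki = 2.0, 0.5        # hypothetical controller gains
      dt, tau, gain = 0.1, 5.0, 0.8
      integral = 0.0

      for step in range(600):                       # 60 s of simulated ablation
          error = set_temp - temp
          integral += error * dt
          power = np.clip(kp * error + ki * integral, 0.0, 50.0)   # applied RF power, W
          # first-order plant: temperature relaxes toward baseline plus power*gain
          temp += dt / tau * (37.0 + gain * power - temp)
          if step % 100 == 0:
              print(f"t = {step * dt:4.1f} s  power = {power:5.1f} W  temp = {temp:5.1f} C")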

  18. SegAuth: A Segment-based Approach to Behavioral Biometric Authentication.

    PubMed

    Li, Yanyan; Xie, Mengjun; Bian, Jiang

    2016-10-01

    Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective-behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user's distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user's authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets.

  19. Method of testing gear wheels in impact bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tikhonov, A.K.; Palagin, Y.M.

    1995-05-01

    Chemicothermal treatment processes are widely used in engineering to improve the working lives of important components, of which the most common is nitrocementation. That process has been applied at the Volga Automobile Plant mainly to sprockets in gear transmissions, which need high hardness and wear resistance in the surfaces with relatively ductile cores. Although various forms of chemicothermal treatment are widely used, there has been no universal method of evaluating the strengths of gear wheels. Standard methods of estimating strength (σ_u, σ_t, σ_b, and hardness) have a major shortcoming: they can determine only the characteristics of the cores for case-hardened materials. Here we consider a method of impact bending test, which enables one to evaluate the actual strength of gear teeth.

  20. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
