Sample records for identifying number model

  1. 26 CFR 301.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 18 2011-04-01 2011-04-01 false Identifying numbers. 301.6109-1 Section 301... numbers. (a) In general—(1) Taxpayer identifying numbers—(i) Principal types. There are several types of taxpayer identifying numbers that include the following: social security numbers, Internal Revenue Service...

  2. 26 CFR 301.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 18 2013-04-01 2013-04-01 false Identifying numbers. 301.6109-1 Section 301... numbers. (a) In general—(1) Taxpayer identifying numbers—(i) Principal types. There are several types of taxpayer identifying numbers that include the following: social security numbers, Internal Revenue Service...

  3. 26 CFR 301.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 18 2012-04-01 2012-04-01 false Identifying numbers. 301.6109-1 Section 301... numbers. (a) In general—(1) Taxpayer identifying numbers—(i) Principal types. There are several types of taxpayer identifying numbers that include the following: social security numbers, Internal Revenue Service...

  4. 26 CFR 301.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 18 2014-04-01 2014-04-01 false Identifying numbers. 301.6109-1 Section 301... numbers. (a) In general—(1) Taxpayer identifying numbers—(i) Principal types. There are several types of taxpayer identifying numbers that include the following: social security numbers, Internal Revenue Service...

  5. 26 CFR 1.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Identifying numbers. 1.6109-1 Section 1.6109-1...) INCOME TAXES Miscellaneous Provisions § 1.6109-1 Identifying numbers. (a) Information to be furnished after April 15, 1974. For provisions concerning the requesting and furnishing of identifying numbers...

  6. 26 CFR 41.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 16 2012-04-01 2012-04-01 false Identifying numbers. 41.6109-1 Section 41.6109... Application to Tax On Use of Certain Highway Motor Vehicles § 41.6109-1 Identifying numbers. Every person required under § 41.6011(a)-1 to make a return must provide the identifying number required by the...

  7. 26 CFR 41.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 16 2010-04-01 2010-04-01 true Identifying numbers. 41.6109-1 Section 41.6109-1... Application to Tax On Use of Certain Highway Motor Vehicles § 41.6109-1 Identifying numbers. Every person required under § 41.6011(a)-1 to make a return must provide the identifying number required by the...

  8. 26 CFR 41.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 16 2011-04-01 2011-04-01 false Identifying numbers. 41.6109-1 Section 41.6109... Application to Tax On Use of Certain Highway Motor Vehicles § 41.6109-1 Identifying numbers. Every person required under § 41.6011(a)-1 to make a return must provide the identifying number required by the...

  9. 26 CFR 41.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 16 2013-04-01 2013-04-01 false Identifying numbers. 41.6109-1 Section 41.6109... Application to Tax On Use of Certain Highway Motor Vehicles § 41.6109-1 Identifying numbers. Every person required under § 41.6011(a)-1 to make a return must provide the identifying number required by the...

  10. 26 CFR 1.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 13 2011-04-01 2011-04-01 false Identifying numbers. 1.6109-1 Section 1.6109-1...) INCOME TAXES (CONTINUED) Miscellaneous Provisions § 1.6109-1 Identifying numbers. (a) Information to be... numbers with respect to returns, statements, and other documents which must be filed after April 15, 1974...

  11. 26 CFR 1.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 13 2012-04-01 2012-04-01 false Identifying numbers. 1.6109-1 Section 1.6109-1...) INCOME TAXES (CONTINUED) Miscellaneous Provisions § 1.6109-1 Identifying numbers. (a) Information to be... numbers with respect to returns, statements, and other documents which must be filed after April 15, 1974...

  12. 26 CFR 1.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 13 2014-04-01 2014-04-01 false Identifying numbers. 1.6109-1 Section 1.6109-1...) INCOME TAXES (CONTINUED) Miscellaneous Provisions § 1.6109-1 Identifying numbers. (a) Information to be... numbers with respect to returns, statements, and other documents which must be filed after April 15, 1974...

  13. 26 CFR 1.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 13 2013-04-01 2013-04-01 false Identifying numbers. 1.6109-1 Section 1.6109-1...) INCOME TAXES (CONTINUED) Miscellaneous Provisions § 1.6109-1 Identifying numbers. (a) Information to be... numbers with respect to returns, statements, and other documents which must be filed after April 15, 1974...

  14. 12 CFR 210.27 - Reliance on identifying number.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 2 2012-01-01 2012-01-01 false Reliance on identifying number. 210.27 Section... J) Funds Transfers Through Fedwire § 210.27 Reliance on identifying number. (a) Reliance by a Federal Reserve Bank on number to identify an intermediary bank or beneficiary's bank. A Federal Reserve...

  15. 26 CFR 31.6109-1 - Supplying of identifying numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 15 2010-04-01 2010-04-01 false Supplying of identifying numbers. 31.6109-1... Subtitle F, Internal Revenue Code of 1954) § 31.6109-1 Supplying of identifying numbers. (a) In general... such identifying numbers as are required by each return, statement, or document and its related...

  16. 26 CFR 31.6109-1 - Supplying of identifying numbers.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 15 2014-04-01 2014-04-01 false Supplying of identifying numbers. 31.6109-1... Subtitle F, Internal Revenue Code of 1954) § 31.6109-1 Supplying of identifying numbers. (a) In general... such identifying numbers as are required by each return, statement, or document and its related...

  17. 26 CFR 31.6109-1 - Supplying of identifying numbers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 15 2013-04-01 2013-04-01 false Supplying of identifying numbers. 31.6109-1... Subtitle F, Internal Revenue Code of 1954) § 31.6109-1 Supplying of identifying numbers. (a) In general... such identifying numbers as are required by each return, statement, or document and its related...

  18. Identifying Fractions on a Number Line

    ERIC Educational Resources Information Center

    Wong, Monica

    2013-01-01

    Fractions are generally introduced to students using the part--whole model. Yet the number line is another important representation which can be used to build fraction concepts (Australian Curriculum Assessment and Reporting Authority [ACARA], 2012). Number lines are recognised as key in students' number development not only of fractions, but…

  19. Structural and Practical Identifiability Analysis of Zika Epidemiological Models.

    PubMed

    Tuncer, Necibe; Martcheva, Maia; LaBarre, Brian; Payoute, Sabrina

    2018-06-13

    The Zika virus (ZIKV) epidemic has caused an ongoing threat to global health security and spurred new investigations of the virus. Use of epidemiological models for arbovirus diseases can be a powerful tool to assist in prevention and control of the emerging disease. In this article, we introduce six models of ZIKV, beginning with a general vector-borne model and gradually including different transmission routes of ZIKV. These epidemiological models use various combinations of disease transmission (vector and direct) and infectious classes (asymptomatic and pregnant), in addition to loss of immunity being included. The disease-induced death rate is omitted from the models. We test the structural and practical identifiability of the models to find whether unknown model parameters can uniquely be determined. The models were fit to time-series data of cumulative incidences and pregnant infections from the Florida Department of Health Daily Zika Update Reports. The average relative estimation errors (AREs) were computed from the Monte Carlo simulations to further analyze the identifiability of the models. We show that direct transmission rates are not practically identifiable; however, fixed recovery rates improve identifiability overall. We found that the ARE is low for each model (only slightly higher for those that account for a pregnant class), which helps to confirm a reproduction number greater than one at the start of the Florida epidemic. The basic reproduction number, R0, is an epidemiologically important threshold value which gives the number of secondary cases generated by one infected individual in a totally susceptible population during the period of infectiousness. Elasticity of the reproduction numbers suggests that the mosquito-to-human ratio, mosquito life span and biting rate have the greatest potential for reducing the reproduction number of Zika, and that corresponding control measures should therefore focus on these factors.
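
    The average relative estimation error (ARE) mentioned above is straightforward to compute once Monte Carlo re-fits are available. The sketch below is not the authors' code; the parameter values, the average_relative_errors helper, and the synthetic "fits" are illustrative stand-ins for estimates obtained by refitting a model to noisy replicates.

    ```python
    # Minimal sketch (not the authors' code): average relative estimation error (ARE)
    # from Monte Carlo re-fits, as commonly defined in practical identifiability studies:
    # ARE_j = (100/M) * sum_i |p_j_true - p_j_hat_i| / |p_j_true| over M noisy replicates.
    import numpy as np

    def average_relative_errors(true_params, fitted_params):
        """true_params: (P,) array; fitted_params: (M, P) array of Monte Carlo estimates."""
        true_params = np.asarray(true_params, dtype=float)
        fitted_params = np.asarray(fitted_params, dtype=float)
        rel_err = np.abs(fitted_params - true_params) / np.abs(true_params)
        return 100.0 * rel_err.mean(axis=0)  # one ARE (in %) per parameter

    # Hypothetical example: a precisely estimated parameter vs. a poorly estimated one.
    rng = np.random.default_rng(0)
    true = np.array([0.5, 2.0])
    fits = true + rng.normal(0, [0.01, 1.0], size=(1000, 2))  # stand-in for re-fitted estimates
    print(average_relative_errors(true, fits))  # low ARE suggests practical identifiability
    ```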

  20. Nowcasting sunshine number using logistic modeling

    NASA Astrophysics Data System (ADS)

    Brabec, Marek; Badescu, Viorel; Paulescu, Marius

    2013-04-01

    In this paper, we present a formalized approach to statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on a Markov chain and logistic regression and yields fully specified probability models that are relatively easily identified (and their unknown parameters estimated) from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data and compare its results to the classical ARIMA approach as a competitor. Since the model parameters have a clear interpretation, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook on future developments aimed at models that allow a practically desirable smooth transition between data observed at different frequencies, and with a short discussion of technical problems that such a goal brings.
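
    As a concrete illustration of the kind of model described (a first-order Markov chain for a binary series expressed through a logistic link), here is a minimal sketch. The covariate, the synthetic series, and all coefficient values are assumptions for illustration only; they are not the model or data of the paper.

    ```python
    # Minimal sketch (not the authors' model): a first-order Markov chain for the binary
    # sunshine number expressed as a logistic regression, i.e. P(s_t = 1 | s_{t-1}, x_t)
    # modelled with a logit link. The hour-of-day covariate is illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    hours = np.tile(np.arange(24), 50)              # synthetic hourly index
    s = np.zeros(hours.size, dtype=int)             # synthetic sunshine number series
    for t in range(1, s.size):
        p = 1 / (1 + np.exp(-(1.5 * s[t - 1] - 0.5 + 0.3 * np.sin(2 * np.pi * hours[t] / 24))))
        s[t] = rng.random() < p

    X = np.column_stack([s[:-1], np.sin(2 * np.pi * hours[1:] / 24)])  # lagged state + covariate
    y = s[1:]
    model = LogisticRegression().fit(X, y)
    print(model.coef_, model.intercept_)            # persistence and covariate effects
    print(model.predict_proba(X[-1:]))              # one-step-ahead "nowcast" probability
    ```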

  1. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  2. RUBIC identifies driver genes by detecting recurrent DNA copy number breaks

    PubMed Central

    van Dyk, Ewald; Hoogstraat, Marlous; ten Hoeve, Jelle; Reinders, Marcel J. T.; Wessels, Lodewyk F. A.

    2016-01-01

    The frequent recurrence of copy number aberrations across tumour samples is a reliable hallmark of certain cancer driver genes. However, state-of-the-art algorithms for detecting recurrent aberrations fail to detect several known drivers. In this study, we propose RUBIC, an approach that detects recurrent copy number breaks, rather than recurrently amplified or deleted regions. This change of perspective allows for a simplified approach as recursive peak splitting procedures and repeated re-estimation of the background model are avoided. Furthermore, we control the false discovery rate on the level of called regions, rather than at the probe level, as in competing algorithms. We benchmark RUBIC against GISTIC2 (a state-of-the-art approach) and RAIG (a recently proposed approach) on simulated copy number data and on three SNP6 and NGS copy number data sets from TCGA. We show that RUBIC calls more focal recurrent regions and identifies a much larger fraction of known cancer genes. PMID:27396759

  3. Ericksen number and Deborah number cascade predictions of a model for liquid crystalline polymers for simple shear flow

    NASA Astrophysics Data System (ADS)

    Klein, D. Harley; Leal, L. Gary; García-Cervera, Carlos J.; Ceniceros, Hector D.

    2007-02-01

    We consider the behavior of the Doi-Marrucci-Greco (DMG) model for nematic liquid crystalline polymers in planar shear flow. We found the DMG model to exhibit dynamics in both qualitative and quantitative agreement with experimental observations reported by Larson and Mead [Liq. Cryst. 15, 151 (1993)] for the Ericksen number and Deborah number cascades. For increasing shear rates within the Ericksen number cascade, the DMG model displays three distinct regimes: stable simple shear, stable roll cells, and irregular structure accompanied by disclination formation. In accordance with experimental observations, the model predicts both ±1 and ±1/2 disclinations. Although ±1 defects form via the ridge-splitting mechanism first identified by Feng, Tao, and Leal [J. Fluid Mech. 449, 179 (2001)], a new mechanism is identified for the formation of ±1/2 defects. Within the Deborah number cascade, with increasing Deborah number, the DMG model exhibits a streamwise banded texture, in the absence of disclinations and roll cells, followed by a monodomain wherein the mean orientation lies within the shear plane throughout the domain.
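
    For readers unfamiliar with the two dimensionless groups that organize these cascades, the standard textbook forms are collected below; the exact nondimensionalisation used in the DMG model may differ, so treat these as orienting definitions rather than the paper's equations.

    ```latex
    % Standard textbook forms (the paper's exact nondimensionalisation may differ):
    % with shear rate $\dot\gamma$, gap size $L$, characteristic viscosity $\eta$,
    % Frank elastic constant $K$, and polymer relaxation time $\lambda$,
    \[
      \mathrm{Er} = \frac{\eta\,\dot\gamma\,L^{2}}{K},
      \qquad
      \mathrm{De} = \lambda\,\dot\gamma .
    \]
    % Er compares viscous torques to Frank elasticity (texture / roll-cell regime);
    % De compares the flow time scale to molecular relaxation (monodomain regime).
    ```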

  4. The Relationship between Race and Students' Identified Career Role Models and Perceived Role Model Influence

    ERIC Educational Resources Information Center

    Karunanayake, Danesh; Nauta, Margaret M.

    2004-01-01

    The authors examined whether college students' race was related to the modal race of their identified career role models, the number of identified career role models, and their perceived influence from such models. Consistent with A. Bandura's (1977, 1986) social learning theory, students tended to have role models whose race was the same as…

  5. Global identifiability of linear compartmental models--a computer algebra algorithm.

    PubMed

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is however difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability) is presented, which combines the topological transfer function method with the Buchberger algorithm, to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general structure compartmental models from general multi input-multi output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.

  6. Structural identifiability analysis of a cardiovascular system model.

    PubMed

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

    The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) are not identifiable from any output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is usual. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Identifying a Superfluid Reynolds Number via Dynamical Similarity.

    PubMed

    Reeves, M T; Billam, T P; Anderson, B P; Bradley, A S

    2015-04-17

    The Reynolds number provides a characterization of the transition to turbulent flow, with wide application in classical fluid dynamics. Identifying such a parameter in superfluid systems is challenging due to their fundamentally inviscid nature. Performing a systematic study of superfluid cylinder wakes in two dimensions, we observe dynamical similarity of the frequency of vortex shedding by a cylindrical obstacle. The universality of the turbulent wake dynamics is revealed by expressing shedding frequencies in terms of an appropriately defined superfluid Reynolds number, Re(s), that accounts for the breakdown of superfluid flow through quantum vortex shedding. For large obstacles, the dimensionless shedding frequency exhibits a universal form that is well-fitted by a classical empirical relation. In this regime the transition to turbulence occurs at Re(s)≈0.7, irrespective of obstacle width.

  8. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. 78 FR 58608 - Proposed Collection; Comment Request: Furnishing Identifying Number of Tax Return Preparer

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ...: Furnishing Identifying Number of Tax Return Preparer AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... IRS is soliciting comments concerning furnishing identifying number of tax return preparer. DATES.... ADDRESSES: Direct all written comments to Yvette Lawrence, Internal Revenue Service, Room 6129, 1111...

  10. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually partially observed in experiment, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts the Wright's path coefficient method to generate identifiability equations in forms of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrix). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and

  11. Rare copy number variations in congenital heart disease patients identify unique genes in left-right patterning

    PubMed Central

    Fakhro, Khalid A.; Choi, Murim; Ware, Stephanie M.; Belmont, John W.; Towbin, Jeffrey A.; Lifton, Richard P.; Khokha, Mustafa K.; Brueckner, Martina

    2011-01-01

    Dominant human genetic diseases that impair reproductive fitness and have high locus heterogeneity constitute a problem for gene discovery because the usual criterion of finding more mutations in specific genes than expected by chance may require extremely large populations. Heterotaxy (Htx), a congenital heart disease resulting from abnormalities in left-right (LR) body patterning, has features suggesting that many cases fall into this category. In this setting, appropriate model systems may provide a means to support implication of specific genes. By high-resolution genotyping of 262 Htx subjects and 991 controls, we identify a twofold excess of subjects with rare genic copy number variations in Htx (14.5% vs. 7.4%, P = 1.5 × 10−4). Although 7 of 45 Htx copy number variations were large chromosomal abnormalities, 38 smaller copy number variations altered a total of 61 genes, 22 of which had Xenopus orthologs. In situ hybridization identified 7 of these 22 genes with expression in the ciliated LR organizer (gastrocoel roof plate), a marked enrichment compared with 40 of 845 previously studied genes (sevenfold enrichment, P < 10−6). Morpholino knockdown in Xenopus of Htx candidates demonstrated that five (NEK2, ROCK2, TGFBR2, GALNT11, and NUP188) strongly disrupted both morphological LR development and expression of pitx2, a molecular marker of LR patterning. These effects were specific, because 0 of 13 control genes from rare Htx or control copy number variations produced significant LR abnormalities (P = 0.001). These findings identify genes not previously implicated in LR patterning. PMID:21282601

  12. Rare copy number variations in congenital heart disease patients identify unique genes in left-right patterning.

    PubMed

    Fakhro, Khalid A; Choi, Murim; Ware, Stephanie M; Belmont, John W; Towbin, Jeffrey A; Lifton, Richard P; Khokha, Mustafa K; Brueckner, Martina

    2011-02-15

    Dominant human genetic diseases that impair reproductive fitness and have high locus heterogeneity constitute a problem for gene discovery because the usual criterion of finding more mutations in specific genes than expected by chance may require extremely large populations. Heterotaxy (Htx), a congenital heart disease resulting from abnormalities in left-right (LR) body patterning, has features suggesting that many cases fall into this category. In this setting, appropriate model systems may provide a means to support implication of specific genes. By high-resolution genotyping of 262 Htx subjects and 991 controls, we identify a twofold excess of subjects with rare genic copy number variations in Htx (14.5% vs. 7.4%, P = 1.5 × 10(-4)). Although 7 of 45 Htx copy number variations were large chromosomal abnormalities, 38 smaller copy number variations altered a total of 61 genes, 22 of which had Xenopus orthologs. In situ hybridization identified 7 of these 22 genes with expression in the ciliated LR organizer (gastrocoel roof plate), a marked enrichment compared with 40 of 845 previously studied genes (sevenfold enrichment, P < 10(-6)). Morpholino knockdown in Xenopus of Htx candidates demonstrated that five (NEK2, ROCK2, TGFBR2, GALNT11, and NUP188) strongly disrupted both morphological LR development and expression of pitx2, a molecular marker of LR patterning. These effects were specific, because 0 of 13 control genes from rare Htx or control copy number variations produced significant LR abnormalities (P = 0.001). These findings identify genes not previously implicated in LR patterning.

  13. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
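
    The effect described (a simpler model beating the data-generating model when data are few or noisy) can be reproduced with a much simpler stand-in for the paper's Bayesian model selection. The sketch below uses a plain BIC comparison on a synthetic two-predictor regression; the predictors, noise levels, and sample size are illustrative assumptions, not the P300 models of the study.

    ```python
    # Minimal sketch (a simplified stand-in for the paper's Bayesian model selection):
    # generate "amplitudes" from a two-predictor model, then compare a one-predictor
    # (simpler) model and the data-generating model by BIC as measurement error grows.
    import numpy as np

    def bic(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        n, k = X.shape
        return n * np.log(rss / n) + k * np.log(n)

    rng = np.random.default_rng(2)
    n = 60                                           # few data points, as in the low-N case
    x1, x2 = rng.normal(size=(2, n))
    for sigma in (0.1, 1.0, 5.0):                    # increasing measurement error
        y = 1.0 * x1 + 0.5 * x2 + rng.normal(0, sigma, n)   # data-generating model uses both
        X_simple = np.column_stack([np.ones(n), x1])
        X_true = np.column_stack([np.ones(n), x1, x2])
        winner = "simple" if bic(y, X_simple) < bic(y, X_true) else "true"
        print(f"sigma={sigma}: winner = {winner}")   # at high noise the simpler model can win
    ```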

  14. Identifiability of PBPK Models with Applications to ...

    EPA Pesticide Factsheets

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology. We consider statistical analy

  15. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is made by derivation of the output sensitivity matrix and analysis of the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogenous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial condition and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is popular for linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
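
    The core step of the proposed method, deriving an output sensitivity matrix and examining linear dependencies among its columns, can be sketched numerically as follows. The two-compartment model, its rate constants, and the finite-difference step are illustrative assumptions, not the authors' examples; near-zero singular values of the column-normalised sensitivity matrix would indicate correlated, hence poorly identifiable, parameter directions.

    ```python
    # Minimal sketch (not the authors' code): build the output sensitivity matrix of a
    # small linear two-compartment model by finite differences and inspect (near-)linear
    # dependence among its columns via its singular values.
    import numpy as np
    from scipy.integrate import solve_ivp

    def simulate(p, t_eval, x0=(1.0, 0.0)):
        k01, k12, k21 = p                            # elimination and exchange rates
        def rhs(t, x):
            return [-(k01 + k12) * x[0] + k21 * x[1], k12 * x[0] - k21 * x[1]]
        sol = solve_ivp(rhs, (0, t_eval[-1]), x0, t_eval=t_eval, rtol=1e-8, atol=1e-10)
        return sol.y[0]                              # only compartment 1 is observed

    t = np.linspace(0.1, 10, 40)
    p0 = np.array([0.3, 0.5, 0.2])
    y0, h = simulate(p0, t), 1e-6
    S = np.column_stack([(simulate(p0 + h * e, t) - y0) / h for e in np.eye(3)])
    S /= np.linalg.norm(S, axis=0)                   # normalise columns before comparing
    print(np.linalg.svd(S, compute_uv=False))        # tiny singular values flag correlated parameters
    ```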

  16. Structural Identifiability of Dynamic Systems Biology Models

    PubMed Central

    Villaverde, Alejandro F.

    2016-01-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
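
    The underlying idea, extending observability analysis with Lie derivatives to the parameters, can be sketched symbolically. The toy model below (a single state with two parameters entering only as a product) is an assumption for illustration and is not part of the STRIKE-GOLDD toolbox; it simply shows how a rank deficiency of the observability-identifiability matrix exposes an unidentifiable parameter combination.

    ```python
    # Minimal sketch (not the STRIKE-GOLDD toolbox) of the underlying idea: treat the
    # parameters as constant extra states, stack Lie derivatives of the output, and
    # check the symbolic rank of the resulting observability-identifiability matrix.
    import sympy as sp

    x, p1, p2 = sp.symbols("x p1 p2", positive=True)
    states = sp.Matrix([x, p1, p2])                  # state augmented with the parameters
    f = sp.Matrix([-p1 * p2 * x, 0, 0])              # dx/dt = -p1*p2*x; parameters constant
    h = sp.Matrix([x])                               # measured output y = x

    rows, Lh = [], h
    for _ in range(len(states)):                     # output plus successive Lie derivatives
        rows.append(Lh.jacobian(states))
        Lh = Lh.jacobian(states) * f
    O = sp.Matrix.vstack(*rows)
    print(O.rank())  # rank < 3 here: only the product p1*p2 is identifiable, not p1 and p2 separately
    ```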

  17. Identifying fMRI Model Violations with Lagrange Multiplier Tests

    PubMed Central

    Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor

    2013-01-01

    The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665

  18. 48 CFR 204.7107 - Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI). 204.7107 Section 204.7107 Federal... ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7107 Contract accounting classification...

  19. 48 CFR 204.7107 - Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI). 204.7107 Section 204.7107 Federal... ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7107 Contract accounting classification...

  20. Large-scale integrative network-based analysis identifies common pathways disrupted by copy number alterations across cancers

    PubMed Central

    2013-01-01

    Background: Many large-scale studies analyzed high-throughput genomic data to identify altered pathways essential to the development and progression of specific types of cancer. However, no previous study has been extended to provide a comprehensive analysis of pathways disrupted by copy number alterations across different human cancers. Towards this goal, we propose a network-based method to integrate copy number alteration data with human protein-protein interaction networks and pathway databases to identify pathways that are commonly disrupted in many different types of cancer. Results: We applied our approach to a data set of 2,172 cancer patients across 16 different types of cancers, and discovered a set of commonly disrupted pathways, which are likely essential for tumor formation in the majority of the cancers. We also identified pathways that are only disrupted in specific cancer types, providing molecular markers for different human cancers. Analysis with independent microarray gene expression datasets confirms that the commonly disrupted pathways can be used to identify patient subgroups with significantly different survival outcomes. We also provide a network view of disrupted pathways to explain how copy number alterations affect pathways that regulate cell growth, cycle, and differentiation for tumorigenesis. Conclusions: In this work, we demonstrated that the network-based integrative analysis can help to identify pathways disrupted by copy number alterations across 16 types of human cancers, which are not readily identifiable by conventional overrepresentation-based and other pathway-based methods. All the results and source code are available at http://compbio.cs.umn.edu/NetPathID/. PMID:23822816

  1. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    PubMed

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus René M; Mulder, Max; Bülthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
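
    The modified criterion can be summarized compactly. The formulas below are an illustrative reading of the abstract, with the extra complexity weight written as a single factor α; the exact weighting used in the paper is determined through the authors' simulations and may take a different form.

    ```latex
    % Classical BIC for an ARX model with k parameters fitted to N samples with
    % residual variance $\hat\sigma^2$, and a modified criterion with an extra
    % complexity weight $\alpha > 1$ (illustrative form; the paper tunes the weighting
    % by simulation and its exact expression may differ):
    \[
      \mathrm{BIC} = N \ln \hat\sigma^{2} + k \ln N,
      \qquad
      \mathrm{BIC}_{\alpha} = N \ln \hat\sigma^{2} + \alpha\, k \ln N .
    \]
    % Increasing $\alpha$ raises the cost of the extra feedforward parameters, which
    % suppresses false-positive detection of a feedforward path in pure-feedback data.
    ```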

  2. Baryon number, lepton number, and operator dimension in the Standard Model

    DOE PAGES

    Kobach, Andrew

    2016-05-19

    In this study, we prove that for a given operator in the Standard Model (SM) with baryon number ΔB and lepton number ΔL, the operator's dimension is even (odd) if (ΔB - ΔL)/2 is even (odd). Consequently, this establishes the veracity of statements that were long observed or expected to be true, but not proven, e.g., operators with ΔB - ΔL = 0 are of even dimension, ΔB - ΔL must be an even number, etc. These results remain true even if the SM is augmented by any number of right-handed neutrinos with ΔL = 1.
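
    Written compactly, and checked against two standard operators (a worked example consistent with the stated result, not taken from the paper):

    ```latex
    % The stated result: for an SM effective operator of mass dimension $d$ carrying
    % baryon number $\Delta B$ and lepton number $\Delta L$,
    \[
      d \;\equiv\; \frac{\Delta B - \Delta L}{2} \pmod{2}.
    \]
    % Consistency checks: the dimension-5 Weinberg operator $(LH)(LH)$ has
    % $\Delta B = 0$, $\Delta L = 2$, so $(\Delta B-\Delta L)/2 = -1$ (odd, matching the
    % odd dimension); the dimension-6 proton-decay operators of the form $QQQL$ have
    % $\Delta B = \Delta L = 1$, so $(\Delta B-\Delta L)/2 = 0$ (even, matching).
    ```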

  3. Model selection for identifying power-law scaling.

    PubMed

    Ton, Robert; Daffertshofer, Andreas

    2016-08-01

    Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. Copyright © 2016 Elsevier Inc. All rights reserved.
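
    The conventional DFA step that the proposed model comparison builds on can be sketched briefly. The window sizes, detrending order, and white-noise test signal below are illustrative assumptions; the paper's contribution, approximating the distribution of the per-window mean squared fluctuations for likelihood-based comparison, is not reproduced here.

    ```python
    # Minimal sketch (not the authors' algorithm): the conventional DFA step their method
    # builds on -- mean-squared fluctuations of the detrended profile in windows of size n,
    # with the scaling exponent read off from the log-log slope.
    import numpy as np

    def dfa_fluctuations(signal, window_sizes, order=1):
        profile = np.cumsum(signal - np.mean(signal))      # integrated, mean-removed signal
        F = []
        for n in window_sizes:
            n_windows = len(profile) // n
            segs = profile[: n_windows * n].reshape(n_windows, n)
            t = np.arange(n)
            ms = []
            for seg in segs:                               # detrend each window with a polynomial fit
                coeffs = np.polyfit(t, seg, order)
                ms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            F.append(np.sqrt(np.mean(ms)))
        return np.array(F)

    rng = np.random.default_rng(3)
    white = rng.normal(size=4096)                          # white noise: expected exponent near 0.5
    sizes = np.unique(np.logspace(1.2, 3, 15).astype(int))
    F = dfa_fluctuations(white, sizes)
    alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
    print(round(alpha, 2))
    ```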

  4. Identifiability Results for Several Classes of Linear Compartment Models.

    PubMed

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  5. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach

    PubMed Central

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research. PMID:27706185

  6. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    PubMed

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  7. Advanced Daily Prediction Model for National Suicide Numbers with Social Media Data.

    PubMed

    Lee, Kyung Sang; Lee, Hyewon; Myung, Woojae; Song, Gil-Young; Lee, Kihwang; Kim, Ho; Carroll, Bernard J; Kim, Doh Kwan

    2018-04-01

    Suicide is a significant public health concern worldwide. Social media data have a potential role in identifying high suicide risk individuals and also in predicting suicide rate at the population level. In this study, we report an advanced daily suicide prediction model using social media data combined with economic/meteorological variables along with observed suicide data lagged by 1 week. The social media data were drawn from weblog posts. We examined a total of 10,035 social media keywords for suicide prediction. We made predictions of national suicide numbers 7 days in advance daily for 2 years, based on a daily moving 5-year prediction modeling period. Our model predicted the likely range of daily national suicide numbers with 82.9% accuracy. Among the social media variables, words denoting economic issues and mood status showed high predictive strength. Observed number of suicides one week previously, recent celebrity suicide, and day of week followed by stock index, consumer price index, and sunlight duration 7 days before the target date were notable predictors along with the social media variables. These results strengthen the case for social media data to supplement classical social/economic/climatic data in forecasting national suicide events.

  8. Identify Fractions and Decimals on a Number Line

    ERIC Educational Resources Information Center

    Shaughnessy, Meghan M.

    2011-01-01

    Tasks that ask students to label rational number points on a number line are common not only in curricula in the upper elementary school grades but also on state assessments. Such tasks target foundational rational number concepts: A fraction (or a decimal) is more than a shaded part of an area, a part of a pizza, or a representation using…

  9. Advanced Daily Prediction Model for National Suicide Numbers with Social Media Data

    PubMed Central

    Lee, Kyung Sang; Lee, Hyewon; Myung, Woojae; Song, Gil-Young; Lee, Kihwang; Kim, Ho; Carroll, Bernard J.; Kim, Doh Kwan

    2018-01-01

    Objective: Suicide is a significant public health concern worldwide. Social media data have a potential role in identifying high suicide risk individuals and also in predicting suicide rate at the population level. In this study, we report an advanced daily suicide prediction model using social media data combined with economic/meteorological variables along with observed suicide data lagged by 1 week. Methods: The social media data were drawn from weblog posts. We examined a total of 10,035 social media keywords for suicide prediction. We made predictions of national suicide numbers 7 days in advance daily for 2 years, based on a daily moving 5-year prediction modeling period. Results: Our model predicted the likely range of daily national suicide numbers with 82.9% accuracy. Among the social media variables, words denoting economic issues and mood status showed high predictive strength. Observed number of suicides one week previously, recent celebrity suicide, and day of week followed by stock index, consumer price index, and sunlight duration 7 days before the target date were notable predictors along with the social media variables. Conclusion: These results strengthen the case for social media data to supplement classical social/economic/climatic data in forecasting national suicide events. PMID:29614852

  10. InterPred: A pipeline to identify and model protein-protein interactions.

    PubMed

    Mirabello, Claudio; Wallner, Björn

    2017-06-01

    Protein-protein interactions (PPI) are crucial for protein function. There exist many techniques to identify PPIs experimentally, but determining the interactions in molecular detail is still difficult and very time-consuming. The fact that the number of PPIs is vastly larger than the number of individual proteins makes it practically impossible to characterize all interactions experimentally. Computational approaches that can bridge this gap and predict PPIs and model the interactions in molecular detail are greatly needed. Here we present InterPred, a fully automated pipeline that predicts and models PPIs from sequence using structural modeling combined with massive structural comparisons and molecular docking. A key component of the method is the use of a novel random forest classifier that integrates several structural features to distinguish correct from incorrect protein-protein interaction models. We show that InterPred represents a major improvement in protein-protein interaction detection with a performance comparable to or better than experimental high-throughput techniques. We also show that our full-atom protein-protein complex modeling pipeline performs better than state-of-the-art protein docking methods on a standard benchmark set. In addition, InterPred was also one of the top predictors in the latest CAPRI37 experiment. InterPred source code can be downloaded from http://wallnerlab.org/InterPred Proteins 2017; 85:1159-1170. © 2017 Wiley Periodicals, Inc.
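
    As an illustration of the kind of classifier described, the sketch below trains a random forest on a few synthetic per-model features to separate correct from incorrect interaction models. The feature names, their distributions, and the labels are hypothetical and are not InterPred's actual inputs or training data.

    ```python
    # Minimal sketch (not InterPred itself): a random-forest classifier over a few
    # structural features that separates correct from incorrect interaction models.
    # The features and synthetic data are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n = 500
    # hypothetical per-model features: interface size, template similarity, docking score
    correct = np.column_stack([rng.normal(1200, 200, n), rng.normal(0.7, 0.1, n), rng.normal(-30, 5, n)])
    incorrect = np.column_stack([rng.normal(600, 200, n), rng.normal(0.4, 0.1, n), rng.normal(-15, 5, n)])
    X = np.vstack([correct, incorrect])
    y = np.array([1] * n + [0] * n)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())         # held-out accuracy on the toy data
    ```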

  11. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  12. Quantitative analysis of bristle number in Drosophila mutants identifies genes involved in neural development

    NASA Technical Reports Server (NTRS)

    Norga, Koenraad K.; Gurganus, Marjorie C.; Dilda, Christy L.; Yamamoto, Akihiko; Lyman, Richard F.; Patel, Prajal H.; Rubin, Gerald M.; Hoskins, Roger A.; Mackay, Trudy F.; Bellen, Hugo J.

    2003-01-01

    BACKGROUND: The identification of the function of all genes that contribute to specific biological processes and complex traits is one of the major challenges in the postgenomic era. One approach is to employ forward genetic screens in genetically tractable model organisms. In Drosophila melanogaster, P element-mediated insertional mutagenesis is a versatile tool for the dissection of molecular pathways, and there is an ongoing effort to tag every gene with a P element insertion. However, the vast majority of P element insertion lines are viable and fertile as homozygotes and do not exhibit obvious phenotypic defects, perhaps because of the tendency for P elements to insert 5' of transcription units. Quantitative genetic analysis of subtle effects of P element mutations that have been induced in an isogenic background may be a highly efficient method for functional genome annotation. RESULTS: Here, we have tested the efficacy of this strategy by assessing the extent to which screening for quantitative effects of P elements on sensory bristle number can identify genes affecting neural development. We find that such quantitative screens uncover an unusually large number of genes that are known to function in neural development, as well as genes with yet uncharacterized effects on neural development, and novel loci. CONCLUSIONS: Our findings establish the use of quantitative trait analysis for functional genome annotation through forward genetics. Similar analyses of quantitative effects of P element insertions will facilitate our understanding of the genes affecting many other complex traits in Drosophila.

  13. Genome-wide significant localization for working and spatial memory: Identifying genes for psychosis using models of cognition.

    PubMed

    Knowles, Emma E M; Carless, Melanie A; de Almeida, Marcio A A; Curran, Joanne E; McKay, D Reese; Sprooten, Emma; Dyer, Thomas D; Göring, Harald H; Olvera, Rene; Fox, Peter; Almasy, Laura; Duggirala, Ravi; Kent, Jack W; Blangero, John; Glahn, David C

    2014-01-01

    It is well established that risk for developing psychosis is largely mediated by the influence of genes, but identifying precisely which genes underlie that risk has been problematic. Focusing on endophenotypes, rather than illness risk, is one solution to this problem. Impaired cognition is a well-established endophenotype of psychosis. Here we aimed to characterize the genetic architecture of cognition using phenotypically detailed models as opposed to relying on general IQ or individual neuropsychological measures. In so doing we hoped to identify genes that mediate cognitive ability, which might also contribute to psychosis risk. Hierarchical factor models of genetically clustered cognitive traits were subjected to linkage analysis followed by QTL region-specific association analyses in a sample of 1,269 Mexican American individuals from extended pedigrees. We identified four genome wide significant QTLs, two for working and two for spatial memory, and a number of plausible and interesting candidate genes. The creation of detailed models of cognition seemingly enhanced the power to detect genetic effects on cognition and provided a number of possible candidate genes for psychosis. © 2013 Wiley Periodicals, Inc.

  14. Practical identifiability analysis of a minimal cardiovascular system model.

    PubMed

    Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas

    2017-01-17

    Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
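
    As an illustration of the profile-likelihood step described above, the following minimal sketch (not the authors' code) fixes one parameter at a grid of values and re-optimizes the remaining parameters; a flat profile signals practical non-identifiability. The model, data and parameter names are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def profile_likelihood(objective, theta_hat, index, grid, bounds):
        """Profile one parameter: fix theta[index] at each grid value and
        re-optimize the remaining parameters. A flat profile indicates
        practical non-identifiability."""
        free = [i for i in range(len(theta_hat)) if i != index]
        profile = []
        for value in grid:
            def reduced(x):
                theta = np.array(theta_hat, dtype=float)
                theta[free] = x
                theta[index] = value
                return objective(theta)
            res = minimize(reduced, np.asarray(theta_hat, dtype=float)[free],
                           bounds=[bounds[i] for i in free], method="L-BFGS-B")
            profile.append(res.fun)
        return np.array(profile)

    # Toy stand-in for the model-vs-data residual: two parameters enter only
    # as a product, so neither is identifiable on its own.
    t_obs = np.linspace(0.0, 1.0, 20)
    y_obs = 2.0 * np.exp(-1.5 * t_obs)

    def sse(theta):
        a, b, c = theta
        return np.sum((a * b * np.exp(-c * t_obs) - y_obs) ** 2)

    theta_hat = np.array([1.0, 2.0, 1.5])
    grid = np.linspace(0.5, 2.0, 16)
    print(profile_likelihood(sse, theta_hat, 0, grid, [(0.1, 5.0)] * 3))
    # A nearly constant profile for parameter 0 mirrors the flat
    # profile-likelihood curves reported for non-identifiable parameters.
    ```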

  15. A Computational Framework for Identifiability and Ill-Conditioning Analysis of Lithium-Ion Battery Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López C, Diana C.; Wozny, Günter; Flores-Tlacuahuac, Antonio

    2016-03-23

    The lack of informative experimental data and the complexity of first-principles battery models make the recovery of kinetic, transport, and thermodynamic parameters complicated. We present a computational framework that combines sensitivity, singular value, and Monte Carlo analysis to explore how different sources of experimental data affect parameter structural ill conditioning and identifiability. Our study is conducted on a modified version of the Doyle-Fuller-Newman model. We demonstrate that the use of voltage discharge curves only enables the identification of a small parameter subset, regardless of the number of experiments considered. Furthermore, we show that the inclusion of a single electrolyte concentration measurement significantly aids identifiability and mitigates ill-conditioning.
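
    A minimal sketch of the sensitivity/singular-value idea described above (not the authors' framework): build a finite-difference sensitivity matrix of a surrogate model output with respect to the parameters and inspect its singular values; near-zero singular values flag ill-conditioned, practically non-identifiable parameter directions. The surrogate model and parameter names are hypothetical.

    ```python
    import numpy as np

    def sensitivity_matrix(simulate, theta, rel_step=1e-4):
        """Finite-difference sensitivities of the model output with respect
        to each parameter, scaled by the parameter value."""
        y0 = simulate(theta)
        S = np.empty((y0.size, theta.size))
        for j in range(theta.size):
            step = rel_step * theta[j]
            theta_p = theta.copy()
            theta_p[j] += step
            S[:, j] = (simulate(theta_p) - y0) / step * theta[j]
        return S

    # Hypothetical stand-in for a voltage-discharge simulation in which two
    # kinetic parameters enter only through their sum (hence are confounded).
    t = np.linspace(0.0, 1.0, 50)
    def simulate(theta):
        k1, k2, d = theta
        return (k1 + k2) * np.exp(-d * t)

    theta = np.array([0.8, 0.5, 2.0])
    S = sensitivity_matrix(simulate, theta)
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    print("singular values:", s)
    print("condition number:", s[0] / s[-1])
    # The row of Vt paired with the smallest singular value shows which
    # parameters span the ill-conditioned (non-identifiable) direction.
    print("weakest direction:", Vt[-1])
    ```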

  16. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
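
    The Pareto-dominance rule described above is simple to implement. The sketch below (illustrative only, with hypothetical data) keeps an input set if no other set fits every calibration target at least as well and at least one target strictly better, with lower values meaning better fit.

    ```python
    import numpy as np

    def pareto_frontier(errors):
        """Indices of input sets on the Pareto frontier.
        errors[i, j] = misfit of candidate input set i against target j
        (lower is better)."""
        n = errors.shape[0]
        frontier = []
        for i in range(n):
            dominated = False
            for j in range(n):
                if j == i:
                    continue
                # j dominates i: no worse on every target, strictly better on one.
                if np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i]):
                    dominated = True
                    break
            if not dominated:
                frontier.append(i)
        return frontier

    rng = np.random.default_rng(0)
    errors = rng.random((200, 3))   # 200 candidate input sets, 3 calibration targets
    print(pareto_frontier(errors))
    ```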

  17. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456

  18. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
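
    To give the flavor of the Gröbner-basis approach (a toy illustration, not the paper's algorithm), suppose the observable input-output coefficients of a model are c1 = p1 + p2 and c2 = p3·(p1 + p2). Fixing a hypothetical "true" parameter point and asking which alternative parameters reproduce the same coefficients exposes the identifiable combinations:

    ```python
    import sympy as sp

    q1, q2, q3 = sp.symbols('q1 q2 q3')

    # True parameters (p1, p2, p3) = (1, 2, 5) give coefficients c1 = 3, c2 = 15.
    # Which alternative parameters (q1, q2, q3) produce the same coefficients?
    ideal = [q1 + q2 - 3, q3 * (q1 + q2) - 15]

    G = sp.groebner(ideal, q1, q2, q3, order='lex')
    print(G.exprs)   # expected: [q1 + q2 - 3, q3 - 5]
    # q3 is pinned to its true value (identifiable on its own), while q1 and
    # q2 are constrained only through the combination q1 + q2.
    ```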

  19. Lepton number violation in theories with a large number of standard model copies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-03-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  20. Constrained minimization problems for the reproduction number in meta-population models.

    PubMed

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R_0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R_0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (R_v), which consists of partial derivatives of R_v with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions are obtained in the case of n = 2 sub-populations, and bounds for the optimal solutions are derived for n > 2 sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R_0 and R_v are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
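
    As a worked illustration of the reproduction-number machinery referenced above (a sketch under assumed parameter values, not the paper's model), the effective reproduction number for a two-group model with proportionate mixing can be computed as the spectral radius of a next-generation matrix, and its gradient with respect to the proportions immune approximated by finite differences. Contact rates, group sizes and transmission parameters below are hypothetical.

    ```python
    import numpy as np

    def effective_R(p, a, n, beta=1.0, gamma=1.0):
        """Spectral radius of a next-generation matrix for a multi-group
        SIR-type model with proportionate mixing.
        p: proportions immune, a: per-capita contact rates, n: group fractions."""
        p, a, n = (np.asarray(v, dtype=float) for v in (p, a, n))
        mix = a * n / np.sum(a * n)            # proportionate mixing weights
        K = (beta / gamma) * np.outer(a * (1.0 - p), mix)
        return np.max(np.abs(np.linalg.eigvals(K)))

    a = np.array([10.0, 2.0])      # contact rates (hypothetical)
    n = np.array([0.3, 0.7])       # group population fractions
    p = np.array([0.5, 0.2])       # proportions immune

    print("R_v =", effective_R(p, a, n))

    # Finite-difference gradient of R_v with respect to the proportions immune,
    # the quantity the gradient-based vaccination approach is built on.
    eps = 1e-6
    grad = [(effective_R(p + eps * np.eye(2)[i], a, n) - effective_R(p, a, n)) / eps
            for i in range(2)]
    print("dR_v/dp =", grad)
    ```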

  1. Modeling number of claims and prediction of total claim amount

    NASA Astrophysics Data System (ADS)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
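
    A minimal sketch of the model-comparison step described above, assuming synthetic claim counts and hypothetical covariates: fit candidate count models, predict on held-out data, and compare RMSE and MAE. Plain Poisson and negative binomial GLMs are shown; the zero-inflated variants mentioned in the abstract would be slotted into the same loop.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Synthetic "annual number of claims" with excess zeros (hypothetical data).
    n = 2000
    age = rng.uniform(20, 70, n)
    X = sm.add_constant(np.column_stack([age / 10.0]))
    claims = rng.poisson(np.exp(-1.0 + 0.03 * age)) * rng.binomial(1, 0.7, n)

    train, test = slice(0, 1500), slice(1500, n)

    def rmse_mae(y, yhat):
        return np.sqrt(np.mean((y - yhat) ** 2)), np.mean(np.abs(y - yhat))

    for name, family in [("Poisson", sm.families.Poisson()),
                         ("NegBin", sm.families.NegativeBinomial(alpha=1.0))]:
        fit = sm.GLM(claims[train], X[train], family=family).fit()
        pred = fit.predict(X[test])
        print(name, "RMSE/MAE:", rmse_mae(claims[test], pred))
    # Zero-inflated Poisson / negative binomial fits would be compared on the
    # same held-out RMSE and MAE criteria.
    ```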

  2. MADGiC: a model-based approach for identifying driver genes in cancer

    PubMed Central

    Korthauer, Keegan D.; Kendziorski, Christina

    2015-01-01

    Motivation: Identifying and prioritizing somatic mutations is an important and challenging area of cancer research that can provide new insights into gene function as well as new targets for drug development. Most methods for prioritizing mutations rely primarily on frequency-based criteria, where a gene is identified as having a driver mutation if it is altered in significantly more samples than expected according to a background model. Although useful, frequency-based methods are limited in that all mutations are treated equally. It is well known, however, that some mutations have no functional consequence, while others may have a major deleterious impact. The spatial pattern of mutations within a gene provides further insight into their functional consequence. Properly accounting for these factors improves both the power and accuracy of inference. Also important is an accurate background model. Results: Here, we develop a Model-based Approach for identifying Driver Genes in Cancer (termed MADGiC) that incorporates both frequency and functional impact criteria and accommodates a number of factors to improve the background model. Simulation studies demonstrate advantages of the approach, including a substantial increase in power over competing methods. Further advantages are illustrated in an analysis of ovarian and lung cancer data from The Cancer Genome Atlas (TCGA) project. Availability and implementation: R code to implement this method is available at http://www.biostat.wisc.edu/ kendzior/MADGiC/. Contact: kendzior@biostat.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25573922

  3. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  4. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

    Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model Mh. The problem of identifiability is obvious if the population includes individuals with pi = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for pi, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  5. Critical flavor number of the Thirring model in three dimensions

    NASA Astrophysics Data System (ADS)

    Wellegehausen, Björn H.; Schmidt, Daniel; Wipf, Andreas

    2017-11-01

    The Thirring model is a four-fermion theory with a current-current interaction and U(2N) chiral symmetry. It is closely related to three-dimensional QED and other models used to describe properties of graphene. In addition, it serves as a toy model to study chiral symmetry breaking. In the limit of flavor number N → 1/2 it is equivalent to the Gross-Neveu model, which shows a parity-breaking discrete phase transition. The model was already studied with different methods, including Dyson-Schwinger equations, functional renormalization group methods, and lattice simulations. Most studies agree that there is a phase transition from a symmetric phase to a spontaneously broken phase for a small number of fermion flavors, but no symmetry breaking for large N. But there is no consensus on the critical flavor number N_cr above which there is no phase transition anymore and on further details of the critical behavior. Values of N_cr found in the literature vary between 2 and 7. All earlier lattice studies were performed with staggered fermions. Thus it is questionable if in the continuum limit the lattice model recovers the internal symmetries of the continuum model. We present new results from lattice Monte Carlo simulations of the Thirring model with SLAC fermions which exactly implement all internal symmetries of the continuum model even at finite lattice spacing. If we reformulate the model in an irreducible representation of the Clifford algebra, we find, in contradiction to earlier results, that the behavior for even and odd flavor numbers is very different: for even flavor numbers, chiral and parity symmetry are always unbroken; for odd flavor numbers, parity symmetry is spontaneously broken below the critical flavor number N_cr^ir = 9, while chiral symmetry is still unbroken.

  6. Identifying genetic loci affecting antidepressant drug response in depression using drug–gene interaction models

    PubMed Central

    Noordam, Raymond; Avery, Christy L; Visser, Loes E; Stricker, Bruno H

    2016-01-01

    Antidepressants are often only moderately successful in decreasing the severity of depressive symptoms. In part, antidepressant treatment response in patients with depression is genetically determined. However, although a large number of studies have been conducted aiming to identify genetic variants associated with antidepressant drug response in depression, only a few variants have been repeatedly identified. Within the present review, we will discuss the methodological challenges and limitations of the studies that have been conducted on this topic to date (e.g., ‘treated-only design’, statistical power), and we will discuss how drug–gene interaction models in particular can be used to better identify genetic variants associated with antidepressant drug response in depression. PMID:27248517

  7. Stochastic modeling of sunshine number data

    NASA Astrophysics Data System (ADS)

    Brabec, Marek; Paulescu, Marius; Badescu, Viorel

    2013-11-01

    In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and, since then, it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects and it also enables us to use formalized statistical inference theory to obtain not only the point estimates of transition probabilities and their functions of interest, but also the related uncertainties, as well as to test various hypotheses of practical interest, etc. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using a generalized additive model (GAM) approach, we can fit and compare models of various complexity that insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high-resolution SSN data from the Solar
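
    A minimal sketch of the logistic Markov idea (not the authors' code, with simulated data standing in for real SSN series): the probability that SSN equals 1 at time t is modeled by logistic regression on the previous state (first-order Markov dependence) plus an external covariate, and fitted by maximum likelihood.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    # Simulated binary sunshine-number series with persistence plus a covariate
    # standing in for, e.g., solar elevation (all values hypothetical).
    T = 5000
    elev = np.sin(np.linspace(0, 60 * np.pi, T)) ** 2
    ssn = np.zeros(T, dtype=int)
    for t in range(1, T):
        eta = -1.0 + 2.5 * ssn[t - 1] + 1.5 * elev[t]
        ssn[t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

    # First-order logistic Markov model: regress SSN_t on SSN_{t-1} and the
    # covariate; the fitted coefficients define the transition probabilities.
    X = sm.add_constant(np.column_stack([ssn[:-1], elev[1:]]))
    model = sm.Logit(ssn[1:], X).fit(disp=False)
    print(model.params)

    # Transition probabilities P(SSN_t = 1 | previous state, covariate):
    new = np.column_stack([[0, 1], [0.8, 0.8]])        # from state 0 and state 1
    print(model.predict(sm.add_constant(new, has_constant='add')))
    ```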

  8. Stochastic modeling of sunshine number data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Marek, E-mail: mbrabec@cs.cas.cz; Paulescu, Marius; Badescu, Viorel

    2013-11-13

    In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and, since then, it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects and it also enables us to use formalized statistical inference theory to obtain not only the point estimates of transition probabilities and their functions of interest, but also the related uncertainties, as well as to test various hypotheses of practical interest, etc. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using a generalized additive model (GAM) approach, we can fit and compare models of various complexity that insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high-resolution SSN data from the

  9. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in
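
    The Monte Carlo practical-identifiability procedure mentioned above can be sketched generically (this is not the authors' code; the model and noise levels are hypothetical stand-ins): perturb synthetic data with noise, refit repeatedly, and report the average relative error (ARE) of each parameter estimate. Parameters whose ARE blows up relative to the noise level are declared practically non-identifiable.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, a, b):
        """Toy observable standing in for simulated viremia / case counts."""
        return a * np.exp(-b * t)

    t = np.linspace(0.0, 10.0, 25)
    theta_true = np.array([5.0, 0.4])
    y_true = model(t, *theta_true)

    def monte_carlo_are(noise_level, n_runs=200, seed=0):
        rng = np.random.default_rng(seed)
        estimates = []
        for _ in range(n_runs):
            y_noisy = y_true * (1.0 + noise_level * rng.standard_normal(t.size))
            try:
                theta_hat, _ = curve_fit(model, t, y_noisy, p0=[1.0, 1.0], maxfev=5000)
                estimates.append(theta_hat)
            except RuntimeError:
                continue   # skip fits that fail to converge
        estimates = np.array(estimates)
        # Average relative error per parameter, in percent.
        return 100.0 * np.mean(np.abs(estimates - theta_true) / np.abs(theta_true), axis=0)

    for sigma in (0.01, 0.05, 0.10):
        print(f"noise {sigma:.0%} -> ARE per parameter:", monte_carlo_are(sigma))
    ```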

  10. 26 CFR 301.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... §§ 1.671-4(b)(4) of this chapter. (6) Effective date. Paragraphs (a)(3), (4), and (5) of this section...) of this chapter, any college or university that is an educational organization as defined in § 1.501... must have an employer identification number for use in any communication with the Internal Revenue...

  11. A Low Mach Number Model for Moist Atmospheric Flows

    DOE PAGES

    Duarte, Max; Almgren, Ann S.; Bell, John B.

    2015-04-01

    A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. Here, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.

  12. A Low Mach Number Model for Moist Atmospheric Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Max; Almgren, Ann S.; Bell, John B.

    A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. Here, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.

  13. Meta-analysis identifies 29 additional ulcerative colitis risk loci, increasing the number of confirmed associations to 47.

    PubMed

    Anderson, Carl A; Boucher, Gabrielle; Lees, Charlie W; Franke, Andre; D'Amato, Mauro; Taylor, Kent D; Lee, James C; Goyette, Philippe; Imielinski, Marcin; Latiano, Anna; Lagacé, Caroline; Scott, Regan; Amininejad, Leila; Bumpstead, Suzannah; Baidoo, Leonard; Baldassano, Robert N; Barclay, Murray; Bayless, Theodore M; Brand, Stephan; Büning, Carsten; Colombel, Jean-Frédéric; Denson, Lee A; De Vos, Martine; Dubinsky, Marla; Edwards, Cathryn; Ellinghaus, David; Fehrmann, Rudolf S N; Floyd, James A B; Florin, Timothy; Franchimont, Denis; Franke, Lude; Georges, Michel; Glas, Jürgen; Glazer, Nicole L; Guthery, Stephen L; Haritunians, Talin; Hayward, Nicholas K; Hugot, Jean-Pierre; Jobin, Gilles; Laukens, Debby; Lawrance, Ian; Lémann, Marc; Levine, Arie; Libioulle, Cecile; Louis, Edouard; McGovern, Dermot P; Milla, Monica; Montgomery, Grant W; Morley, Katherine I; Mowat, Craig; Ng, Aylwin; Newman, William; Ophoff, Roel A; Papi, Laura; Palmieri, Orazio; Peyrin-Biroulet, Laurent; Panés, Julián; Phillips, Anne; Prescott, Natalie J; Proctor, Deborah D; Roberts, Rebecca; Russell, Richard; Rutgeerts, Paul; Sanderson, Jeremy; Sans, Miquel; Schumm, Philip; Seibold, Frank; Sharma, Yashoda; Simms, Lisa A; Seielstad, Mark; Steinhart, A Hillary; Targan, Stephan R; van den Berg, Leonard H; Vatn, Morten; Verspaget, Hein; Walters, Thomas; Wijmenga, Cisca; Wilson, David C; Westra, Harm-Jan; Xavier, Ramnik J; Zhao, Zhen Z; Ponsioen, Cyriel Y; Andersen, Vibeke; Torkvist, Leif; Gazouli, Maria; Anagnou, Nicholas P; Karlsen, Tom H; Kupcinskas, Limas; Sventoraityte, Jurgita; Mansfield, John C; Kugathasan, Subra; Silverberg, Mark S; Halfvarson, Jonas; Rotter, Jerome I; Mathew, Christopher G; Griffiths, Anne M; Gearry, Richard; Ahmad, Tariq; Brant, Steven R; Chamaillard, Mathias; Satsangi, Jack; Cho, Judy H; Schreiber, Stefan; Daly, Mark J; Barrett, Jeffrey C; Parkes, Miles; Annese, Vito; Hakonarson, Hakon; Radford-Smith, Graham; Duerr, Richard H; Vermeire, Séverine; Weersma, Rinse K; Rioux, John D

    2011-03-01

    Genome-wide association studies and candidate gene studies in ulcerative colitis have identified 18 susceptibility loci. We conducted a meta-analysis of six ulcerative colitis genome-wide association study datasets, comprising 6,687 cases and 19,718 controls, and followed up the top association signals in 9,628 cases and 12,917 controls. We identified 29 additional risk loci (P < 5 × 10(-8)), increasing the number of ulcerative colitis-associated loci to 47. After annotating associated regions using GRAIL, expression quantitative trait loci data and correlations with non-synonymous SNPs, we identified many candidate genes that provide potentially important insights into disease pathogenesis, including IL1R2, IL8RA-IL8RB, IL7R, IL12B, DAP, PRDM1, JAK2, IRF5, GNA12 and LSP1. The total number of confirmed inflammatory bowel disease risk loci is now 99, including a minimum of 28 shared association signals between Crohn's disease and ulcerative colitis.

  14. Identifying Potential Regions of Copy Number Variation for Bipolar Disorder

    PubMed Central

    Chen, Yi-Hsuan; Lu, Ru-Band; Hung, Hung; Kuo, Po-Hsiu

    2014-01-01

    Bipolar disorder is a complex psychiatric disorder with high heritability, but its genetic determinants are still largely unknown. Copy number variation (CNV) is one source that may explain part of the heritability. However, it is a challenge to estimate discrete values of the copy numbers using continuous signals called from a set of markers, and to simultaneously perform association testing between CNVs and phenotypic outcomes. The goal of the present study is to perform a series of data filtering and analysis procedures using a DNA pooling strategy to identify potential CNV regions that are related to bipolar disorder. A total of 200 normal controls and 200 clinically diagnosed bipolar patients were recruited in this study, and were randomly divided into eight control and eight case pools. Genome-wide genotyping was performed using the Illumina Human Omni1-Quad array, with approximately one million markers, for CNV calling. We aimed to set a series of criteria to filter out the signal noise of marker data and to reduce the chance of false-positive findings for CNV regions. We first defined CNV regions for each pool. Potential CNV regions were reported based on the different patterns of CNV status between cases and controls. Genes that were mapped into the potential CNV regions were examined with association testing, Gene Ontology enrichment analysis, and checked against the existing literature for their associations with bipolar disorder. We reported several CNV regions that are related to bipolar disorder. Two CNV regions on chromosomes 11 and 22 showed significant signal differences between cases and controls (p < 0.05). Another five CNV regions on chromosomes 6, 9, and 19 overlapped with results from previous CNV studies. Experimental validation of two CNV regions lent some support to our reported findings. Further experimental and replication studies could be designed for these selected regions. PMID:27605030

  15. Turbulence Model Behavior in Low Reynolds Number Regions of Aerodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Spalart, Philippe R.

    2008-01-01

    The behaviors of the widely-used Spalart-Allmaras (SA) and Menter shear-stress transport (SST) turbulence models at low Reynolds numbers and under conditions conducive to relaminarization are documented. The flows used in the investigation include 2-D zero pressure gradient flow over a flat plate from subsonic to hypersonic Mach numbers, 2-D airfoil flow from subsonic to supersonic Mach numbers, 2-D subsonic sink-flow, and 3-D subsonic flow over an infinite swept wing (particularly its leading-edge region). Both models exhibit a range over which they behave transitionally in the sense that the flow is neither laminar nor fully turbulent, but these behaviors are different: the SST model typically has a well-defined transition location, whereas the SA model does not. Both models are predisposed to delayed activation of turbulence with increasing freestream Mach number. Also, both models can be made to achieve earlier activation of turbulence by increasing their freestream levels, but too high a level can disturb the turbulent solution behavior. The technique of maintaining freestream levels of turbulence without decay in the SST model, introduced elsewhere, is shown here to be useful in reducing grid-dependence of the model's transitional behavior. Both models are demonstrated to be incapable of predicting relaminarization; eddy viscosities remain weakly turbulent in accelerating or laterally-strained boundary layers for which experiment and direct simulations indicate turbulence suppression. The main conclusion is that these models are intended for fully turbulent high Reynolds number computations, and using them for transitional (e.g., low Reynolds number) or relaminarizing flows is not appropriate.

  16. Turbulence Model Behavior in Low Reynolds Number Regions of Aerodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Spalart, Philippe R.

    2008-01-01

    The behaviors of the widely-used Spalart-Allmaras (SA) and Menter shear-stress transport (SST) turbulence models at low Reynolds numbers and under conditions conducive to relaminarization are documented. The flows used in the investigation include 2-D zero pressure gradient flow over a flat plate from subsonic to hypersonic Mach numbers, 2-D airfoil flow from subsonic to supersonic Mach numbers, 2-D subsonic sink-flow, and 3-D subsonic flow over an infinite swept wing (particularly its leading-edge region). Both models exhibit a range over which they behave 'transitionally' in the sense that the flow is neither laminar nor fully turbulent, but these behaviors are different: the SST model typically has a well-defined transition location, whereas the SA model does not. Both models are predisposed to delayed activation of turbulence with increasing freestream Mach number. Also, both models can be made to achieve earlier activation of turbulence by increasing their freestream levels, but too high a level can disturb the turbulent solution behavior. The technique of maintaining freestream levels of turbulence without decay in the SST model, introduced elsewhere, is shown here to be useful in reducing grid-dependence of the model's transitional behavior. Both models are demonstrated to be incapable of predicting relaminarization; eddy viscosities remain weakly turbulent in accelerating or laterally-strained boundary layers for which experiment and direct simulations indicate turbulence suppression. The main conclusion is that these models are intended for fully turbulent high Reynolds number computations, and using them for transitional (e.g., low Reynolds number) or relaminarizing flows is not appropriate.

  17. Integrated genome-wide DNA copy number and expression analysis identifies distinct mechanisms of primary chemoresistance in ovarian carcinomas.

    PubMed

    Etemadmoghadam, Dariush; deFazio, Anna; Beroukhim, Rameen; Mermel, Craig; George, Joshy; Getz, Gad; Tothill, Richard; Okamoto, Aikou; Raeder, Maria B; Harnett, Paul; Lade, Stephen; Akslen, Lars A; Tinker, Anna V; Locandro, Bianca; Alsop, Kathryn; Chiew, Yoke-Eng; Traficante, Nadia; Fereday, Sian; Johnson, Daryl; Fox, Stephen; Sellers, William; Urashima, Mitsuyoshi; Salvesen, Helga B; Meyerson, Matthew; Bowtell, David

    2009-02-15

    A significant number of women with serous ovarian cancer are intrinsically refractory to platinum-based treatment. We analyzed somatic DNA copy number variation and gene expression data to identify key mechanisms associated with primary resistance in advanced-stage serous cancers. Genome-wide copy number variation was measured in 118 ovarian tumors using high-resolution oligonucleotide microarrays. A well-defined subset of 85 advanced-stage serous tumors was then used to relate copy number variation to primary resistance to treatment. The discovery-based approach was complemented by quantitative-PCR copy number analysis of 12 candidate genes as independent validation of previously reported associations with clinical outcome. Likely copy number variation targets and tumor molecular subtypes were further characterized by gene expression profiling. Amplification of 19q12, containing cyclin E (CCNE1), and 20q11.22-q13.12, mapping immediately adjacent to the steroid receptor coactivator NCOA3, was significantly associated with poor response to primary treatment. Other genes previously associated with copy number variation and clinical outcome in ovarian cancer were not associated with primary treatment resistance. Chemoresistant tumors with high CCNE1 copy number and protein expression were associated with increased cellular proliferation but so too was a subset of treatment-responsive patients, suggesting a cell-cycle independent role for CCNE1 in modulating chemoresponse. Patients with a poor clinical outcome without CCNE1 amplification overexpressed genes involved in extracellular matrix deposition. We have identified two distinct mechanisms of primary treatment failure in serous ovarian cancer, involving CCNE1 amplification and enhanced extracellular matrix deposition. CCNE1 copy number is validated as a dominant marker of patient outcome in ovarian cancer.

  18. ON IDENTIFIABILITY OF NONLINEAR ODE MODELS AND APPLICATIONS IN VIRAL DYNAMICS

    PubMed Central

    MIAO, HONGYU; XIA, XIAOHUA; PERELSON, ALAN S.; WU, HULIN

    2011-01-01

    Ordinary differential equations (ODE) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last two decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models, and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV, influenza and hepatitis viruses are given to illustrate how to apply these identifiability analysis methods in practice. PMID:21785515

  19. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models previously not

  20. Evaluating predictive models for solar energy growth in the US states and identifying the key drivers

    NASA Astrophysics Data System (ADS)

    Chakraborty, Joheen; Banerji, Sugata

    2018-03-01

    Driven by a desire to control climate change and reduce the dependence on fossil fuels, governments around the world are increasing the adoption of renewable energy sources. However, among the US states, we observe a wide disparity in renewable penetration. In this study, we have identified and cleaned over a dozen datasets representing solar energy penetration in each US state, and the potentially relevant socioeconomic and other factors that may be driving the growth in solar. We have applied a number of predictive modeling approaches - including machine learning and regression - to these datasets over a 17-year period and evaluated the relative performance of the models. Our goals were to: (1) identify the most important factors that are driving the growth in solar, (2) choose the most effective predictive modeling technique for solar growth, and (3) develop a model for predicting next year's solar growth using this year's data. We obtained very promising results with random forests (about 90% efficacy) and varying degrees of success with support vector machines and regression techniques (linear, polynomial, ridge). We also identified states with solar growth slower than expected, which represent a potential for stronger growth in the future.
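
    A minimal sketch of the prediction setup described above, with entirely hypothetical features and data: each row holds one state-year's indicators, the target is the next year's solar growth, and a random forest is scored on held-out data alongside its feature importances (one view of the "key drivers").

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)

    # Hypothetical state-year feature matrix (e.g. income, electricity price,
    # insolation, installed capacity) -- all values made up for illustration.
    n = 850                       # roughly 50 states x 17 years
    X = rng.random((n, 4))
    # Next-year solar growth as an unknown nonlinear function of this year's data.
    y = 2.0 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_train, y_train)

    print("held-out R^2:", r2_score(y_test, rf.predict(X_test)))
    print("feature importances:", rf.feature_importances_)
    ```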

  1. Refined open intersection numbers and the Kontsevich-Penner matrix model

    NASA Astrophysics Data System (ADS)

    Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.

    2017-03-01

    A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.

  2. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    NASA Astrophysics Data System (ADS)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but inclusion of individual

  3. Identifying Opportunities for Grade One Children to Acquire Foundational Number Sense: Developing a Framework for Cross Cultural Classroom Analyses

    ERIC Educational Resources Information Center

    Andrews, Paul; Sayers, Judy

    2015-01-01

    It is known that an appropriately developed foundational number sense (FONS), or the ability to operate flexibly with number and quantity, is a powerful predictor of young children's later mathematical achievement. However, until now not only has FONS been definitionally elusive but instruments for identifying opportunities for children to acquire…

  4. Turbulence Model Selection for Low Reynolds Number Flows

    PubMed Central

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. The NACA4415 airfoil is commonly used in wind turbine and UAV applications. Its stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω and, finally, the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure used to establish the accuracy of the simulation, in accord with previous experimental results, is discussed in detail. PMID:27104354

  5. Turbulence Model Selection for Low Reynolds Number Flows.

    PubMed

    Aftab, S M A; Mohd Rafie, A S; Razak, N A; Ahmad, K A

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. The NACA4415 airfoil is commonly used in wind turbine and UAV applications. Its stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω and, finally, the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure used to establish the accuracy of the simulation, in accord with previous experimental results, is discussed in detail.

  6. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could allow healthcare

  7. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

    Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show four of five of the positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
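
    The decision rule behind such screens is a likelihood ratio test (LRT) comparing a codon model that allows a class of sites with dN/dS > 1 against a null model that does not. A sketch of the LRT arithmetic is below; the log-likelihoods are hypothetical placeholders for values that would in practice come from a codon-model fit (e.g. PAML/codeml).

    ```python
    from scipy.stats import chi2

    # Hypothetical log-likelihoods for one sequence pair under a null model
    # (no positively selected site class) and an alternative model (extra
    # class with dN/dS > 1).
    lnL_null = -2471.8
    lnL_alt = -2465.2

    lrt = 2.0 * (lnL_alt - lnL_null)
    df = 2                      # extra free parameters in the alternative model
    p_value = chi2.sf(lrt, df)
    print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")
    # Genes with significant LRTs are flagged as showing a signature of positive
    # selection, subject to multiple-testing correction across all pairs tested.
    ```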

  8. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Using risk-adjustment models to identify high-cost risks.

    PubMed

    Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J

    2003-11-01

    We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
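    The discrimination comparison reported above (AUC at a prespecified "high-cost" prevalence) can be sketched as follows; the expense data are simulated and the threshold logic is a simplified stand-in for the study's risk-adjustment pipeline.

```python
# Minimal sketch: flag the top 1% of actual annual expense as "high-cost" and
# evaluate a forecast's ability to discriminate them via the AUC.
# Data are simulated, not the multi-HMO administrative data of the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
actual = rng.lognormal(mean=7.0, sigma=1.2, size=100_000)                  # hypothetical expenses
forecast = actual * rng.lognormal(mean=0.0, sigma=0.8, size=actual.size)  # noisy forecast

threshold = np.quantile(actual, 0.99)            # 1% high-cost prevalence target
high_cost = (actual >= threshold).astype(int)

auc = roc_auc_score(high_cost, forecast)
flagged = forecast >= np.quantile(forecast, 0.99)  # 1% prediction threshold
share = actual[flagged & (high_cost == 1)].sum() / actual[high_cost == 1].sum()
print(f"AUC = {auc:.3f}, share of high-cost dollars captured = {share:.1%}")
```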

  10. Image analysis-based modelling for flower number estimation in grapevine.

    PubMed

    Millan, Borja; Aquino, Arturo; Diago, Maria P; Tardaguila, Javier

    2017-02-01

    Grapevine flower number per inflorescence provides valuable information that can be used for assessing yield. Considerable research has been conducted toward developing a technological tool based on image analysis and predictive modelling. However, the behaviour of variety-independent predictive models and yield prediction capabilities on a wide set of varieties has never been evaluated. Inflorescence images from 11 grapevine Vitis vinifera L. varieties were acquired under field conditions. The flower number per inflorescence and the flower number visible in the images were calculated manually, and automatically using an image analysis algorithm. These datasets were used to calibrate and evaluate the behaviour of two linear (single-variable and multivariable) and a nonlinear variety-independent model. As a result, the integrated tool composed of the image analysis algorithm and the nonlinear approach showed the highest performance and robustness (RPD = 8.32, RMSE = 37.1). The yield estimation capabilities of the flower number in conjunction with fruit set rate (R² = 0.79) and average berry weight (R² = 0.91) were also tested. This study proves the accuracy of flower number per inflorescence estimation using an image analysis algorithm and a nonlinear model that is generally applicable to different grapevine varieties. This provides a fast, non-invasive and reliable tool for estimation of yield at harvest. © 2016 Society of Chemical Industry.

  11. The performance of discrete models of low Reynolds number swimmers.

    PubMed

    Wang, Qixuan; Othmer, Hans G

    2015-12-01

    Swimming by shape changes at low Reynolds number is widely used in biology and understanding how the performance of movement depends on the geometric pattern of shape changes is important to understand swimming of microorganisms and in designing low Reynolds number swimming models. The simplest models of shape changes are those that comprise a series of linked spheres that can change their separation and/or their size. Herein we compare the performance of three models in which these modes are used in different ways.

  12. Flow through collapsible tubes at low Reynolds numbers. Applicability of the waterfall model.

    PubMed

    Lyon, C K; Scott, J B; Wang, C Y

    1980-07-01

    The applicability of the waterfall model was tested using the Starling resistor and different viscosities of fluids to vary the Reynolds number. The waterfall model proved adequate to describe flow in the Starling resistor model only at very low Reynolds numbers (Reynolds number less than 1). Blood flow characterized by such low Reynolds numbers occurs only in the microvasculature. Thus, it is inappropriate to apply the waterfall model indiscriminately to flow through large collapsible veins.

  13. [Projection of prisoner numbers].

    PubMed

    Metz, Rainer; Sohn, Werner

    2015-01-01

    The past and future development of occupancy rates in prisons is of crucial importance for the judicial administration of every country. Basic factors for planning the required penal facilities are seasonal fluctuations; minimum, maximum and average occupancy; and the present situation and potential development of certain imprisonment categories. As the prisoner number of a country is determined by a complex set of interdependent conditions, it has proven difficult to provide theoretical explanations. The idea, long accepted in criminology, that prisoner numbers are interdependent with criminal policy must be regarded as having failed. Statistical and time series analyses may help, however, to identify the factors that have influenced the development of prisoner numbers in the past. The analyses presented here first describe such influencing factors from a criminological perspective and then deal with their statistical identification and modelling. Using the development of prisoner numbers in Hesse as an example, it has been found that modelling methods in which the independent variables predict the dependent variable with a time lag are particularly helpful. A potential complication, however, is that the different dynamics of German and foreign prisoner numbers require the development of further models for prediction.

  14. TAGCNA: A Method to Identify Significant Consensus Events of Copy Number Alterations in Cancer

    PubMed Central

    Yuan, Xiguo; Zhang, Junying; Yang, Liying; Zhang, Shengli; Chen, Baodi; Geng, Yaojun; Wang, Yue

    2012-01-01

    Somatic copy number alteration (CNA) is a common phenomenon in cancer genome. Distinguishing significant consensus events (SCEs) from random background CNAs in a set of subjects has been proven to be a valuable tool to study cancer. In order to identify SCEs with an acceptable type I error rate, better computational approaches should be developed based on reasonable statistics and null distributions. In this article, we propose a new approach named TAGCNA for identifying SCEs in somatic CNAs that may encompass cancer driver genes. TAGCNA employs a peel-off permutation scheme to generate a reasonable null distribution based on a prior step of selecting tag CNA markers from the genome being considered. We demonstrate the statistical power of TAGCNA on simulated ground truth data, and validate its applicability using two publicly available cancer datasets: lung and prostate adenocarcinoma. TAGCNA identifies SCEs that are known to be involved with proto-oncogenes (e.g. EGFR, CDK4) and tumor suppressor genes (e.g. CDKN2A, CDKN2B), and provides many additional SCEs with potential biological relevance in these data. TAGCNA can be used to analyze the significance of CNAs in various cancers. It is implemented in R and is freely available at http://tagcna.sourceforge.net/. PMID:22815924

  15. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  16. Physical Samples and Persistent Identifiers: The Implementation of the International Geo Sample Number (IGSN) Registration Service in CSIRO, Australia

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan

    2016-04-01

    Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential to locate samples unambiguously and to share their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible, and therefore may be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation. The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the
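    A client interaction with a registration web service of this kind might look like the sketch below; the endpoint URL, payload fields and authentication scheme are illustrative assumptions only, not the actual CSIRO IGSN service interface.

```python
# Hypothetical sketch of registering a sample with an IGSN allocating agent's
# RESTful service. The URL, fields and auth below are illustrative assumptions,
# not the real CSIRO API.
import requests

BASE_URL = "https://igsn.example.org/api"          # hypothetical endpoint
payload = {
    "subnamespace": "CSXYZ",                       # hypothetical sub-namespace
    "samples": [
        {
            "name": "Drill core section 12-A",
            "sampleType": "core",
            "collector": "J. Smith",
            "location": {"lat": -31.95, "lon": 115.86},
        }
    ],
}

resp = requests.post(f"{BASE_URL}/samples", json=payload,
                     auth=("user", "token"), timeout=30)
resp.raise_for_status()
print(resp.json())   # expected to contain the minted IGSNs for the samples
```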

  17. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.

  18. On the Reproduction Number of a Gut Microbiota Model.

    PubMed

    Barril, Carles; Calsina, Àngel; Ripoll, Jordi

    2017-11-01

    A spatially structured linear model of the growth of intestinal bacteria is analysed from two generational viewpoints. Firstly, the basic reproduction number associated with the bacterial population, i.e. the expected number of daughter cells per bacterium, is given explicitly in terms of biological parameters. Secondly, an alternative quantity is introduced based on the number of bacteria produced within the intestine by one bacterium originally in the external media. The latter depends on the parameters in a simpler way and provides more biological insight than the standard reproduction number, allowing the design of experimental procedures. Both quantities coincide and are equal to one at the extinction threshold, below which the bacterial population becomes extinct. Optimal values of both reproduction numbers are derived assuming parameter trade-offs.

  19. The Landscape of Somatic Chromosomal Copy Number Aberrations in GEM Models of Prostate Carcinoma

    PubMed Central

    Bianchi-Frias, Daniella; Hernandez, Susana A.; Coleman, Roger; Wu, Hong; Nelson, Peter S.

    2015-01-01

    Human prostate cancer (PCa) is known to harbor recurrent genomic aberrations consisting of chromosomal losses, gains, rearrangements and mutations that involve oncogenes and tumor suppressors. Genetically engineered mouse (GEM) models have been constructed to assess the causal role of these putative oncogenic events and provide molecular insight into disease pathogenesis. While GEM models generally initiate neoplasia by manipulating a single gene, expression profiles of GEM tumors typically comprise hundreds of transcript alterations. It is unclear whether these transcriptional changes represent the pleiotropic effects of single oncogenes, and/or cooperating genomic or epigenomic events. Therefore, it was determined if structural chromosomal alterations occur in GEM models of PCa and whether the changes are concordant with human carcinomas. Whole genome array-based comparative genomic hybridization (CGH) was used to identify somatic chromosomal copy number aberrations (SCNAs) in the widely used TRAMP, Hi-Myc, Pten-null and LADY GEM models. Interestingly, very few SCNAs were identified and the genomic architecture of Hi-Myc, Pten-null and LADY tumors were essentially identical to the germline. TRAMP neuroendocrine carcinomas contained SCNAs, which comprised three recurrent aberrations including a single copy loss of chromosome 19 (encoding Pten). In contrast, cell lines derived from the TRAMP, Hi-Myc, and Pten-null tumors were notable for numerous SCNAs that included copy gains of chromosome 15 (encoding Myc) and losses of chromosome 11 (encoding p53). PMID:25298407

  20. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts we are continuing refinement of the model coefficients derived for the case of a variable Turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  1. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we compute the Bayesian Information Criterion. We find that the computational model achieves a better balance than a conventional logistic model. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.
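    The fit-versus-complexity comparison via the Bayesian Information Criterion can be written down directly; the log-likelihoods, parameter counts and sample size below are placeholders, not values from the nociceptive model.

```python
# Minimal sketch of a BIC comparison between two fitted models.
# Log-likelihoods, parameter counts, and sample size are hypothetical.
import math

n = 600                 # number of yes-no detection responses (hypothetical)

def bic(log_likelihood, n_params, n_obs):
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

bic_computational = bic(log_likelihood=-310.2, n_params=6, n_obs=n)
bic_logistic = bic(log_likelihood=-322.9, n_params=2, n_obs=n)
print(f"BIC computational = {bic_computational:.1f}, BIC logistic = {bic_logistic:.1f}")
# The model with the lower BIC offers the better fit-complexity balance.
```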

  2. Rodent Models of Experimental Endometriosis: Identifying Mechanisms of Disease and Therapeutic Targets

    PubMed Central

    Bruner-Tran, Kaylon L.; Mokshagundam, Shilpa; Herington, Jennifer L.; Ding, Tianbing; Osteen, Kevin G.

    2018-01-01

    Background: Although it has been more than a century since endometriosis was initially described in the literature, understanding the etiology and natural history of the disease has been challenging. However, the broad utility of murine and rat models of experimental endometriosis has enabled the elucidation of a number of potentially targetable processes which may otherwise promote this disease. Objective: To review a variety of studies utilizing rodent models of endometriosis to illustrate their utility in examining mechanisms associated with development and progression of this disease. Results: Use of rodent models of endometriosis has provided a much broader understanding of the risk factors for the initial development of endometriosis, the cellular pathology of the disease and the identification of potential therapeutic targets. Conclusion: Although there are limitations with any animal model, the variety of experimental endometriosis models that have been developed has enabled investigation into numerous aspects of this disease. Thanks to these models, our understanding of the early processes of disease development, the role of steroid responsiveness, inflammatory processes and the peritoneal environment has been advanced. More recent models have begun to shed light on how epigenetic alterations contribute to the molecular basis of this disease as well as the multiple comorbidities which plague many patients. Continued developments of animal models which aid in unraveling the mechanisms of endometriosis development provide the best opportunity to identify therapeutic strategies to prevent or regress this enigmatic disease.

  3. Number-Knower Levels in Young Children: Insights from Bayesian Modeling

    ERIC Educational Resources Information Center

    Lee, Michael D.; Sarnecka, Barbara W.

    2011-01-01

    Lee and Sarnecka (2010) developed a Bayesian model of young children's behavior on the Give-N test of number knowledge. This paper presents two new extensions of the model, and applies the model to new data. In the first extension, the model is used to evaluate competing theories about the conceptual knowledge underlying children's behavior. One,…

  4. An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.

  5. Modeling the number of car theft using Poisson regression

    NASA Astrophysics Data System (ADS)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is one of the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons and number of unemployed persons are the factors that most influence car theft cases. This information is useful for law enforcement departments, insurance companies and car owners seeking to reduce and limit car theft cases in Peninsular Malaysia.
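    A Poisson regression of a count outcome on district-level covariates can be fitted in a few lines; the sketch below uses simulated data and generic covariate names, not the Peninsular Malaysia dataset.

```python
# Minimal sketch of a Poisson regression for count data (e.g., thefts per district).
# Data and covariates are simulated stand-ins, not the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 80                                             # hypothetical number of districts
X = np.column_stack([
    rng.normal(size=n),                            # e.g., standardized unemployment
    rng.normal(size=n),                            # e.g., standardized migration
])
X = sm.add_constant(X)
true_beta = np.array([1.5, 0.4, -0.2])
y = rng.poisson(np.exp(X @ true_beta))             # simulated theft counts

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.summary())                             # coefficients are log rate ratios
```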

  6. Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?

    PubMed

    Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P

    2018-04-01

    What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published

  7. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA1

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  8. Modeling and identifying the sources of radiocesium contamination in separate sewerage systems.

    PubMed

    Pratama, Mochamad Adhiraga; Yoneda, Minoru; Yamashiki, Yosuke; Shimada, Yoko; Matsui, Yasuto

    2018-05-01

    The Fukushima Dai-ichi nuclear power plant accident released radiocesium in large amounts. The released radionuclides contaminated much of the surrounding environment, including sewers in urban areas of Fukushima prefecture. In this study we attempted to identify and quantify the sources of radiocesium contamination in separate sewerage systems and developed a compartment model based on the Radionuclide Migration in Urban Environments and Drainage Systems (MUD) model. Measurements of the time-dependent radiocesium concentration in sewer sludge combined with meteorological, demographic, and radiocesium dietary intake data indicated that rainfall-derived inflow and infiltration (RDII) and human excretion were the chief contributors of radiocesium contamination in a separate sewerage system. The quantities of contamination derived from RDII and human excretion were calculated and used in the modified MUD model to simulate radiocesium contamination in sewers in three urban areas in Fukushima prefecture: Fukushima, Koriyama, and Nihonmatsu Cities. The Nash efficiency coefficient (0.88-0.92) and determination coefficient (0.89-0.93) calculated in an evaluation of our compartment model indicated that the model produced satisfactory results. We also used the model to estimate the total volume of sludge with radiocesium concentrations in excess of the clearance level, based on the number of months elapsed after the accident. Estimations by our model suggested that wastewater treatment plants (WWTPs) in Fukushima, Koriyama, and Nihonmatsu generated about 1,750,000 m³ of radioactive sludge in total, a level in good agreement with the real data. Copyright © 2017 Elsevier B.V. All rights reserved.
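    The Nash efficiency and determination coefficients used to evaluate the compartment model are straightforward to compute; the observed and simulated series below are hypothetical placeholders.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE) and coefficient of determination
# for observed vs. simulated radiocesium concentrations. Series are hypothetical.
import numpy as np

obs = np.array([12.0, 9.5, 7.8, 6.1, 5.0, 4.2])   # hypothetical observations
sim = np.array([11.4, 9.9, 7.5, 6.4, 4.8, 4.4])   # hypothetical model output

nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(f"NSE = {nse:.2f}, R^2 = {r2:.2f}")
```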

  9. Turbulence Model Comparisons and Reynolds Number Effects Over a High-Speed Aircraft at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    Rivers, Melissa B.; Wahls, Richard A.

    1999-01-01

    This paper gives the results of a grid study, a turbulence model study, and a Reynolds number effect study for transonic flows over a high-speed aircraft using the thin-layer, upwind, Navier-Stokes CFL3D code. The four turbulence models evaluated are the algebraic Baldwin-Lomax model with the Degani-Schiff modifications, the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, and Menter's two-equation Shear-Stress-Transport (SST) model. The flow conditions, which correspond to tests performed in the NASA Langley National Transonic Facility (NTF), are a Mach number of 0.90 and a Reynolds number of 30 million based on chord for a range of angles of attack (1 to 10 degrees). For the Reynolds number effect study, Reynolds numbers of 10 and 80 million based on chord were also evaluated. Computed forces and surface pressures compare reasonably well with the experimental data for all four of the turbulence models. The Baldwin-Lomax model with the Degani-Schiff modifications and the one-equation Baldwin-Barth model show the best agreement with experiment overall. The Reynolds number effects are evaluated using the Baldwin-Lomax model with the Degani-Schiff modifications and the Baldwin-Barth turbulence models. Five angles of attack were evaluated for the Reynolds number effect study at three different Reynolds numbers. More work is needed to determine the ability of CFL3D to accurately predict Reynolds number effects.

  10. On Models for Binomial Data with Random Numbers of Trials

    PubMed Central

    Comulada, W. Scott; Weiss, Robert E.

    2010-01-01

    Summary A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. The n are random variables not fixed by design in many studies. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514

  11. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.

  12. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
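    For a linear model such as a rating curve, analytical Cook's distance diagnostics are readily available; the sketch below contrasts Cook's distance with a brute-force case-deletion check on simulated data. It is a generic illustration only, not the GR4J setup, which would require re-calibrating the hydrological model for every deleted point.

```python
# Minimal sketch: Cook's distance vs. case-deletion influence for a linear model.
# Data are simulated; one influential point is planted deliberately.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=50)
y[10] += 8.0                                        # plant an influential point

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
cooks_d = fit.get_influence().cooks_distance[0]     # analytical diagnostic

# brute-force case deletion: change in mean prediction when point i is dropped
full_mean = fit.predict(X).mean()
delta_mean = np.array([
    abs(sm.OLS(np.delete(y, i), np.delete(X, i, axis=0)).fit()
        .predict(X).mean() - full_mean)
    for i in range(len(y))
])

print("Most influential by Cook's distance:", int(np.argmax(cooks_d)))
print("Most influential by case deletion:  ", int(np.argmax(delta_mean)))
```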

  13. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  14. Identified state-space prediction model for aero-optical wavefronts

    NASA Astrophysics Data System (ADS)

    Faghihi, Azin; Tesch, Jonathan; Gibson, Steve

    2013-07-01

    A state-space disturbance model and associated prediction filter for aero-optical wavefronts are described. The model is computed by system identification from a sequence of wavefronts measured in an airborne laboratory. Estimates of the statistics and flow velocity of the wavefront data are shown and can be computed from the matrices in the state-space model without returning to the original data. Numerical results compare velocity values and power spectra computed from the identified state-space model with those computed from the aero-optical data.

  15. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the parameters intrinsic growth rate of predators (s) and prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values are similar to those with the estimated values. Numerical simulations are presented to substantiate the analytical findings.

  16. Identifying differentially expressed genes in cancer patients using a non-parameter Ising model.

    PubMed

    Li, Xumeng; Feltus, Frank A; Sun, Xiaoqian; Wang, James Z; Luo, Feng

    2011-10-01

    Identification of genes and pathways involved in diseases and physiological conditions is a major task in systems biology. In this study, we developed a novel non-parameter Ising model to integrate protein-protein interaction network and microarray data for identifying differentially expressed (DE) genes. We also proposed a simulated annealing algorithm to find the optimal configuration of the Ising model. The Ising model was applied to two breast cancer microarray data sets. The results showed that more cancer-related DE sub-networks and genes were identified by the Ising model than those by the Markov random field model. Furthermore, cross-validation experiments showed that DE genes identified by Ising model can improve classification performance compared with DE genes identified by Markov random field model. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
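    The simulated-annealing search for an optimal configuration can be illustrated generically. The sketch below anneals +/-1 labels on a small random graph under a simple Ising-style energy; it is a schematic stand-in only, not the paper's non-parameter Ising model or its coupling to expression data.

```python
# Minimal sketch: simulated annealing of +/-1 labels on a small random graph
# under an Ising-style energy. Generic illustration; the network and field
# terms are made up, not the paper's formulation.
import math
import random

random.seed(4)
n_nodes = 30
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
         if random.random() < 0.1]                     # hypothetical interaction network
field = [random.gauss(0, 1) for _ in range(n_nodes)]   # stand-in for expression evidence

def energy(spins):
    # neighbouring genes prefer the same label; the field term pulls each gene
    # toward its own evidence of differential expression
    e = -sum(spins[i] * spins[j] for i, j in edges)
    e -= sum(field[i] * spins[i] for i in range(n_nodes))
    return e

spins = [random.choice([-1, 1]) for _ in range(n_nodes)]
temperature = 5.0
for step in range(20000):
    i = random.randrange(n_nodes)
    old_e = energy(spins)
    spins[i] *= -1                                     # propose a flip
    delta = energy(spins) - old_e
    if delta > 0 and random.random() >= math.exp(-delta / temperature):
        spins[i] *= -1                                 # reject: undo the flip
    temperature *= 0.9995                              # geometric cooling schedule

print("final energy:", energy(spins))
```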

  17. A heuristic approach to determine an appropriate number of topics in topic modeling

    PubMed Central

    2015-01-01

    Background Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents, and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide number of technical fields. However, model development can be arduous and tedious, and requires burdensome and systematic sensitivity studies in order to find the best set of model parameters. Often, time-consuming subjective evaluations are needed to compare models. Currently, research has yielded no easy way to choose the proper number of topics in a model beyond a major iterative approach. Methods and results Based on analysis of variation of statistical perplexity during topic modelling, a heuristic approach is proposed in this study to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of numbers of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method for three markedly different types of grounded-truth datasets: Salmonella next generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed. Conclusion The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments of widely different data types, and for databases of very different sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with number of topics as a parameter. We understand that additional investigation is needed to substantiate the method's theoretical basis, and to establish its generalizability in terms of dataset characteristics. PMID:26424364
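    The rate of perplexity change amounts to a finite difference over candidate topic counts; the sketch below assumes a list of perplexity values has already been obtained (e.g., from LDA fits at each setting), so the numbers are placeholders.

```python
# Minimal sketch: rate of perplexity change (RPC) over candidate topic numbers.
# Perplexity values are hypothetical placeholders for scores from fitted LDA models.
import numpy as np

topics = np.array([5, 10, 20, 40, 80, 160])
perplexity = np.array([1500.0, 1180.0, 1015.0, 940.0, 912.0, 905.0])

rpc = np.abs(np.diff(perplexity) / np.diff(topics))   # |change in perplexity per topic|
for k, r in zip(topics[1:], rpc):
    print(f"up to {k:>3d} topics: RPC = {r:.2f}")
# The paper's heuristic selects the topic number around which the RPC curve
# changes; here that selection is left to inspection of the printed values.
```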

  18. A cognitive model for multidigit number reading: Inferences from individuals with selective impairments.

    PubMed

    Dotan, Dror; Friedmann, Naama

    2018-04-01

    We propose a detailed cognitive model of multi-digit number reading. The model postulates separate processes for visual analysis of the digit string and for oral production of the verbal number. Within visual analysis, separate sub-processes encode the digit identities and the digit order, and additional sub-processes encode the number's decimal structure: its length, the positions of 0, and the way it is parsed into triplets (e.g., 314987 → 314,987). Verbal production consists of a process that generates the verbal structure of the number, and another process that retrieves the phonological forms of each number word. The verbal number structure is first encoded in a tree-like structure, similarly to syntactic trees of sentences, and then linearized to a sequence of number-word specifiers. This model is based on an investigation of the number processing abilities of seven individuals with different selective deficits in number reading. We report participants with impairment in specific sub-processes of the visual analysis of digit strings - in encoding the digit order, in encoding the number length, or in parsing the digit string to triplets. Other participants were impaired in verbal production, making errors in the number structure (shifts of digits to another decimal position, e.g., 3,040 → 30,004). Their selective deficits yielded several dissociations: first, we found a double dissociation between visual analysis deficits and verbal production deficits. Second, several dissociations were found within visual analysis: a double dissociation between errors in digit order and errors in the number length; a dissociation between order/length errors and errors in parsing the digit string into triplets; and a dissociation between the processing of different digits - impaired order encoding of the digits 2-9, without errors in the 0 position. Third, within verbal production, a dissociation was found between digit shifts and substitutions of number words. A
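    The triplet-parsing sub-process mentioned above (e.g., 314987 → 314,987) is easy to make concrete; the snippet below is only a toy illustration of that single visual-analysis step, not a model of number reading.

```python
# Toy illustration of parsing a digit string into triplets (314987 -> "314,987"),
# the visual-analysis sub-process discussed above. Not a cognitive model.
def parse_into_triplets(digits: str) -> str:
    groups = []
    while digits:
        groups.append(digits[-3:])   # peel off the rightmost triplet
        digits = digits[:-3]
    return ",".join(reversed(groups))

print(parse_into_triplets("314987"))   # 314,987
print(parse_into_triplets("3040"))     # 3,040
```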

  19. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
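    The core idea of using dual-type numbers to obtain exact derivatives can be shown with a tiny first-order dual-number class; this is a simplified illustration of the principle only, not the Craig-Bampton ROM parameterization itself (hyper-dual numbers carry additional non-real parts so that exact second derivatives are also available).

```python
# Minimal sketch: first-order dual numbers give exact first derivatives without
# finite-difference truncation error. Hyper-dual numbers extend the same idea to
# exact second derivatives; this toy class only illustrates the principle.
class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # the product rule is encoded in the epsilon part
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __radd__ = __add__
    __rmul__ = __mul__

def stiffness(k):
    # hypothetical parameterized quantity, e.g. an entry of a reduced-order matrix
    return 3.0 * k * k + 2.0 * k + 1.0

x = Dual(2.0, 1.0)             # seed the derivative direction
out = stiffness(x)
print("value =", out.real)     # 17.0
print("d/dk  =", out.eps)      # exact derivative: 6*k + 2 = 14.0
```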

  20. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
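    The Fourier-synthesis idea of constructing a noise sequence from a prescribed amplitude spectrum and random phases can be sketched as follows; the flat amplitude spectrum and normalization are illustrative choices, not the report's exact procedure.

```python
# Minimal sketch: synthesize an approximately Gaussian white-noise sequence from
# a flat amplitude spectrum and uniformly random phases. Illustrative only; the
# report's exact amplitude/phase prescription may differ.
import numpy as np

rng = np.random.default_rng(5)
n = 601                                   # sequence length, as in the report's sequences
n_freq = n // 2 + 1

amplitude = np.ones(n_freq)               # flat (white) amplitude spectrum
phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)
spectrum = amplitude * np.exp(1j * phase)
spectrum[0] = 0.0                         # force zero mean

x = np.fft.irfft(spectrum, n=n)           # inverse FFT gives the time sequence
x = (x - x.mean()) / x.std()              # normalize to zero mean, unit variance
print(f"mean = {x.mean():.3e}, std = {x.std():.3f}")
```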

  1. A recellularized human colon model identifies cancer driver genes

    PubMed Central

    Chen, Huanhuan Joyce; Wei, Zhubo; Sun, Jian; Bhattacharya, Asmita; Savage, David J; Serda, Rita; Mackeyev, Yuri; Curley, Steven A.; Bu, Pengcheng; Wang, Lihua; Chen, Shuibing; Cohen-Gould, Leona; Huang, Emina; Shen, Xiling; Lipkin, Steven M.; Copeland, Neal G.; Jenkins, Nancy A.; Shuler, Michael L.

    2016-01-01

    Refined cancer models are needed to bridge the gap between cell-line, animal and clinical research. Here we describe the engineering of an organotypic colon cancer model by recellularization of a native human matrix that contains cell-populated mucosa and an intact muscularis mucosa layer. This ex vivo system recapitulates the pathophysiological progression from APC-mutant neoplasia to submucosal invasive tumor. We used it to perform a Sleeping Beauty transposon mutagenesis screen to identify genes that cooperate with mutant APC in driving invasive neoplasia. 38 candidate invasion driver genes were identified, 17 of which have been previously implicated in colorectal cancer progression, including TCF7L2, TWIST2, MSH2, DCC and EPHB1/2. Six invasion driver genes that to our knowledge have not been previously described were validated in vitro using cell proliferation, migration and invasion assays, and ex vivo using recellularized human colon. These results demonstrate the utility of our organoid model for studying cancer biology. PMID:27398792

  2. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which indicate the amounts of images in the corresponding lens systems.

  3. Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure

    EPA Science Inventory

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...

  4. Dynamic Modeling for Development and Education: From Concepts to Numbers

    ERIC Educational Resources Information Center

    Van Geert, Paul

    2014-01-01

    The general aim of the article is to teach the reader how to transform conceptual models of change, development, and learning into mathematical expressions and how to use these equations to build dynamic models by means of the widely used spreadsheet program Excel. The explanation is supported by a number of Excel files, which the reader can…

  5. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in the clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
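    The identifiability assessment based on the singular value decomposition of a normalized sensitivity matrix can be sketched generically; the toy exponential model, nominal parameters and finite-difference sensitivities below are illustrative assumptions, not the neuromuscular blockade or depth-of-hypnosis models of the paper.

```python
# Minimal sketch: local identifiability screening via the SVD of a normalized
# sensitivity matrix. The toy model and parameters are illustrative assumptions.
import numpy as np

t = np.linspace(0.5, 10.0, 40)
theta = np.array([2.0, 0.5])                       # nominal parameter values

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

# finite-difference sensitivities dy/dtheta_j, normalized by theta_j / y
eps = 1e-6
y0 = model(theta, t)
S = np.empty((t.size, theta.size))
for j in range(theta.size):
    dtheta = theta.copy()
    dtheta[j] += eps
    S[:, j] = (model(dtheta, t) - y0) / eps * theta[j] / y0

singular_values = np.linalg.svd(S, compute_uv=False)
print("singular values:", singular_values)
print("condition number:", singular_values[0] / singular_values[-1])
# Very small singular values (a large condition number) flag directions in
# parameter space that the chosen input/output data cannot resolve.
```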

  6. Identifying Breast Cancer Oncogenes

    DTIC Science & Technology

    2009-10-01

    A study by Boehm et al. (2007) identified IKBKE as a breast cancer oncogene that cooperates with HMLE-MEKDD to replace the function of myr-AKT in … [Report metadata: Principal Investigator: Yashaswi Shrestha; Contract Number: W81XWH-08-1-0767; Grant Number: BC083061 (PreDoc).]

  7. Two-dimensional Ising model on random lattices with constant coordination number

    NASA Astrophysics Data System (ADS)

    Schrauth, Manuel; Richter, Julian A. J.; Portela, Jefferson S. E.

    2018-02-01

    We study the two-dimensional Ising model on networks with quenched topological (connectivity) disorder. In particular, we construct random lattices of constant coordination number and perform large-scale Monte Carlo simulations in order to obtain critical exponents using finite-size scaling relations. We find disorder-dependent effective critical exponents, similar to diluted models, showing thus no clear universal behavior. Considering the very recent results for the two-dimensional Ising model on proximity graphs and the coordination number correlation analysis suggested by Barghathi and Vojta [Phys. Rev. Lett. 113, 120602 (2014), 10.1103/PhysRevLett.113.120602], our results indicate that the planarity and connectedness of the lattice play an important role on deciding whether the phase transition is stable against quenched topological disorder.

  8. Distributed watershed modeling of design storms to identify nonpoint source loading areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endreny, T.A.; Wood, E.F.

    1999-03-01

    Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km² agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h⁻¹ provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modelling should be incorporated into NPS management models.

  9. A confidence building exercise in data and identifiability: Modeling cancer chemotherapy as a case study.

    PubMed

    Eisenberg, Marisa C; Jain, Harsh V

    2017-10-27

    Mathematical modeling has a long history in the field of cancer therapeutics, and there is increasing recognition that it can help uncover the mechanisms that underlie tumor response to treatment. However, making quantitative predictions with such models often requires parameter estimation from data, raising questions of parameter identifiability and estimability. Even in the case of structural (theoretical) identifiability, imperfect data and the resulting practical unidentifiability of model parameters can make it difficult to infer the desired information, and in some cases, to yield biologically correct inferences and predictions. Here, we examine parameter identifiability and estimability using a case study of two compartmental, ordinary differential equation models of cancer treatment with drugs that are cell cycle-specific (taxol) as well as non-specific (oxaliplatin). We proceed through model building, structural identifiability analysis, parameter estimation, practical identifiability analysis and its biological implications, as well as alternative data collection protocols and experimental designs that render the model identifiable. We use the differential algebra/input-output relationship approach for structural identifiability, and primarily the profile likelihood approach for practical identifiability. Despite the models being structurally identifiable, we show that without consideration of practical identifiability, incorrect cell cycle distributions can be inferred, that would result in suboptimal therapeutic choices. We illustrate the usefulness of estimating practically identifiable combinations (in addition to the more typically considered structurally identifiable combinations) in generating biologically meaningful insights. We also use simulated data to evaluate how the practical identifiability of the model would change under alternative experimental designs. These results highlight the importance of understanding the underlying mechanisms

  10. A vector space model approach to identify genetically related diseases.

    PubMed

    Sarkar, Indra Neil

    2012-01-01

    The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene-disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi Syndrome. In the case of both Alzheimer Disease and Prader-Willi Syndrome, a set of plausible diseases was identified that may warrant further exploration. This study furthers seminal work by Swanson et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined towards identification of putative correlations of interest. To this end, the relevance of the diseases predicted in this study using the vector space modeling approach was validated based on supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
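
    As a rough sketch of the vector space idea, each disease can be encoded as a vector over a shared gene vocabulary and compared with cosine similarity. The disease labels and gene sets below are invented placeholders; they are not drawn from OMIM, GenBank, or Medline, and the study's actual weighting scheme may differ.

      # Illustrative sketch: diseases as gene-association vectors ranked by cosine similarity.
      import numpy as np

      disease_genes = {
          "disease_A": {"APP", "PSEN1", "APOE"},
          "disease_B": {"APOE", "LDLR"},
          "disease_C": {"NDN", "SNRPN"},
      }
      genes = sorted(set().union(*disease_genes.values()))

      def to_vector(gene_set):
          """Binary term vector over the shared gene vocabulary."""
          return np.array([1.0 if g in gene_set else 0.0 for g in genes])

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      target = to_vector(disease_genes["disease_A"])
      for name, gene_set in disease_genes.items():
          if name != "disease_A":
              print(name, round(cosine(target, to_vector(gene_set)), 3))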

  11. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.

  12. Modeling particle number concentrations along Interstate 10 in El Paso, Texas

    PubMed Central

    Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias

    2014-01-01

    Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10³ particles/cc and 1.3 × 10⁵ particles/cc, and averaged 2.5 × 10⁴ particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10¹⁴ particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economic aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294

  13. A renormalization group approach to identifying the local quantum numbers in a many-body localized system

    NASA Astrophysics Data System (ADS)

    Pekker, David; Clark, Bryan K.; Oganesyan, Vadim; Refael, Gil; Tian, Binbin

    Many-body localization is a dynamical phase of matter that is characterized by the absence of thermalization. One of the key characteristics of many-body localized systems is the emergence of a large (possibly maximal) number of local integrals of motion (local quantum numbers) and corresponding conserved quantities. We formulate a robust algorithm for identifying these conserved quantities, based on Wegner's flow equations - a form of the renormalization group that works by disentangling the degrees of freedom of the system as opposed to integrating them out. We test our algorithm by explicit numerical comparison with more engineering-based algorithms - Jacobi rotations and bi-partite matching. We find that the Wegner flow algorithm indeed produces more local conserved quantities and is therefore preferable. A preliminary analysis of the conserved quantities produced by the Wegner flow algorithm reveals the existence of at least two different localization lengthscales. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.

  14. High Reynolds number turbulence model of rotating shear flows

    NASA Astrophysics Data System (ADS)

    Masuda, S.; Ariga, I.; Koyama, H. S.

    1983-09-01

    A Reynolds stress closure model for rotating turbulent shear flows is developed. Special attention is paid to keeping the model constants independent of rotation. First, general forms of the model of a Reynolds stress equation and a dissipation rate equation are derived, the only restrictions of which are high Reynolds number and incompressibility. The model equations are then applied to two-dimensional equilibrium boundary layers and the effects of Coriolis acceleration on turbulence structures are discussed. Comparisons with the experimental data and with previous results in other external force fields show that there exists a very close analogy between centrifugal, buoyancy and Coriolis force fields. Finally, the model is applied to predict the two-dimensional boundary layers on rotating plane walls. Comparisons with existing data confirmed its capability of predicting mean and turbulent quantities without employing any empirical relations in rotating fields.

  15. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    PubMed

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  16. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation

    PubMed Central

    2017-01-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability. PMID:29186132

  17. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation.

    PubMed

    Villaverde, Alejandro F; Banga, Julio R

    2017-11-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability.

  18. A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data

    PubMed Central

    DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus

    2014-01-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706

  19. A model-based approach for identifying signatures of ancient balancing selection in genetic data.

    PubMed

    DeGiorgio, Michael; Lohmueller, Kirk E; Nielsen, Rasmus

    2014-08-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.

  20. Identifying and Selecting Plants for the Landscape. Volume 23, Number 5.

    ERIC Educational Resources Information Center

    Rodekohr, Sherie; Harris, Clark Richard

    This handbook on identifying and selecting landscape plants can be used as a reference in landscaping courses or on an individual basis. The first of two sections, Identifying Plants for the Landscape, contains the following tables: shade tree identification; flowering tree identification; evergreen tree identification; flowering shrub…

  1. Correlation between Reynolds number and eccentricity effect in stenosed artery models.

    PubMed

    Javadzadegan, Ashkan; Shimizu, Yasutomo; Behnia, Masud; Ohta, Makoto

    2013-01-01

    Flow recirculation and shear strain are physiological processes within coronary arteries which are associated with pathogenic biological pathways. Quite apart from coronary stenosis severity, lesion eccentricity can cause flow recirculation and affect shear strain levels within human coronary arteries. The aim of this study is to analyse the effect of lesion eccentricity on the transient flow behaviour in a model of a coronary artery and also to investigate the correlation between Reynolds number (Re) and the eccentricity effect on flow behaviour. A transient particle image velocimetry (PIV) experiment was implemented in two silicone-based models with 70% diameter stenosis, one with eccentric stenosis and one with concentric stenosis. At different times throughout the flow cycle, the eccentric model was always associated with a greater recirculation zone length, maximum shear strain rate and maximum axial velocity; however, the highest and lowest impacts of eccentricity were on the recirculation zone length and maximum shear strain rate, respectively. Analysis of the results revealed a negative correlation between the Reynolds number (Re) and the eccentricity effect on maximum axial velocity, maximum shear strain rate and recirculation zone length. As Re increases, the eccentricity effect on the flow behaviour becomes negligible.

  2. Nimesulide, a COX-2 inhibitor, does not reduce lesion size or number in a nude mouse model of endometriosis.

    PubMed

    Hull, M L; Prentice, A; Wang, D Y; Butt, R P; Phillips, S C; Smith, S K; Charnock-Jones, D S

    2005-02-01

    Women with endometriosis have elevated levels of cyclooxygenase-2 (COX-2) in peritoneal macrophages and endometriotic tissue. Inhibition of COX-2 has been shown to reduce inflammation, angiogenesis and cellular proliferation. It may also downregulate aromatase activity in ectopic endometrial lesions. Ectopic endometrial establishment and growth are therefore likely to be suppressed in the presence of COX-2 inhibitors. We hypothesized that COX-2 inhibition would reduce the size and number of ectopic human endometrial lesions in a nude mouse model of endometriosis. The selective COX-2 inhibitor, nimesulide, was administered to estrogen-supplemented nude mice implanted with human endometrial tissue. Ten days after implantation, the number and size of ectopic endometrial lesions were evaluated and compared with lesions from a control group. Immunohistochemical assessment of vascular development and macrophage and myofibroblast infiltration in control and treated lesions was performed. There was no difference in the number or size of ectopic endometrial lesions in control and nimesulide-treated nude mice. Nimesulide did not induce a visually identifiable difference in blood vessel development or macrophage or myofibroblast infiltration in nude mouse explants. The hypothesized biological properties of COX-2 inhibition did not influence lesion number or size in the nude mouse model of endometriosis.

  3. Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Hassan, H. A.; Baurle, R. A.

    2006-01-01

    A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability of avoiding the use of assumed or evolution Probability Distribution Functions (PDF's) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need for consideration of turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.

  4. The Influence of Investor Number on a Microscopic Market Model

    NASA Astrophysics Data System (ADS)

    Hellthaler, T.

    The stock market model of Levy, Persky, Solomon is simulated for much larger numbers of investors. While small markets can lead to realistic-looking prices, the resulting prices of large markets oscillate smoothly in a semi-regular fashion.

  5. Identify High-Quality Protein Structural Models by Enhanced K-Means.

    PubMed

    Wu, Hongjie; Li, Haiou; Jiang, Min; Chen, Cheng; Lv, Qiang; Wu, Chuang

    2017-01-01

    Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means.
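
    For readers unfamiliar with the K-means++ seeding mentioned above, the sketch below applies scikit-learn's standard K-means++ initialization to toy "decoy" feature vectors and reports the most populated cluster. The random feature vectors stand in for structural decoys; this is not the SPICKER-seeded SK-means variant from the paper.

      # Illustrative sketch: K-means++ clustering of toy decoy feature vectors.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      decoys = np.vstack([rng.normal(0.0, 0.5, (100, 3)),    # one conformational basin
                          rng.normal(3.0, 0.5, (80, 3))])    # a second basin

      km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=1).fit(decoys)
      # The centroid of the most populated cluster is taken as the representative model.
      sizes = np.bincount(km.labels_)
      largest = int(np.argmax(sizes))
      print("largest cluster size:", sizes[largest])
      print("representative centroid:", km.cluster_centers_[largest])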

  6. Identify High-Quality Protein Structural Models by Enhanced K-Means

    PubMed Central

    Li, Haiou; Chen, Cheng; Lv, Qiang; Wu, Chuang

    2017-01-01

    Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means. PMID:28421198

  7. Some predictions of the attached eddy model for a high Reynolds number boundary layer.

    PubMed

    Nickels, T B; Marusic, I; Hafez, S; Hutchins, N; Chong, M S

    2007-03-15

    Many flows of practical interest occur at high Reynolds number, at which the flow in most of the boundary layer is turbulent, showing apparently random fluctuations in velocity across a wide range of scales. The range of scales over which these fluctuations occur increases with the Reynolds number and hence high Reynolds number flows are difficult to compute or predict. In this paper, we discuss the structure of these flows and describe a physical model, based on the attached eddy hypothesis, which makes predictions for the statistical properties of these flows and their variation with Reynolds number. The predictions are shown to compare well with the results from recent experiments in a new purpose-built high Reynolds number facility. The model is also shown to provide a clear physical explanation for the trends in the data. The limits of applicability of the model are also discussed.

  8. Hamiltonian identifiability assisted by single-probe measurement

    NASA Astrophysics Data System (ADS)

    Sone, Akira; Cappellaro, Paola; Quantum Engineering Group Team

    2017-04-01

    We study the Hamiltonian identifiability of a many-body spin-1/2 system assisted by measurement on a single quantum probe, based on the eigensystem realization algorithm (ERA) approach. We demonstrate a potential application of Gröbner bases to the identifiability test of the Hamiltonian, and provide the necessary experimental resources, such as the lower bound on the number of required sampling points, the upper bound on the total required evolution time, and thus the total measurement time. Focusing on examples of identifiability in the spin-chain model with nearest-neighbor interaction, we classify spin-chain Hamiltonians based on their identifiability, and provide control protocols to engineer a non-identifiable Hamiltonian into an identifiable one.

  9. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to only generate a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
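
    The machine-precision derivatives mentioned above follow from hyper-dual arithmetic: a number carries two infinitesimal parts and their product, so evaluating a function once yields exact first and second derivatives. The class below is a minimal sketch of that arithmetic for a scalar polynomial; a production implementation (and the Taylor-series expansion of system matrices) is considerably larger.

      # Minimal sketch of hyper-dual arithmetic for exact first and second derivatives.
      class HyperDual:
          def __init__(self, real, e1=0.0, e2=0.0, e1e2=0.0):
              self.real, self.e1, self.e2, self.e1e2 = real, e1, e2, e1e2

          def __add__(self, o):
              o = o if isinstance(o, HyperDual) else HyperDual(o)
              return HyperDual(self.real + o.real, self.e1 + o.e1,
                               self.e2 + o.e2, self.e1e2 + o.e1e2)

          def __mul__(self, o):
              o = o if isinstance(o, HyperDual) else HyperDual(o)
              return HyperDual(self.real * o.real,
                               self.real * o.e1 + self.e1 * o.real,
                               self.real * o.e2 + self.e2 * o.real,
                               self.real * o.e1e2 + self.e1 * o.e2
                               + self.e2 * o.e1 + self.e1e2 * o.real)

          __radd__, __rmul__ = __add__, __mul__

      def f(x):
          return x * x * x + 2.0 * x           # f(x) = x^3 + 2x

      x = HyperDual(2.0, 1.0, 1.0, 0.0)        # seed both perturbation directions at x = 2
      y = f(x)
      print(y.e1, y.e1e2)                      # df/dx = 3x^2 + 2 = 14, d2f/dx2 = 6x = 12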

  10. GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models.

    PubMed

    Ligon, Thomas S; Fröhlich, Fabian; Chis, Oana T; Banga, Julio R; Balsa-Canto, Eva; Hasenauer, Jan

    2018-04-15

    Mathematical modeling using ordinary differential equations is used in systems biology to improve the understanding of dynamic biological processes. The parameters of ordinary differential equation models are usually estimated from experimental data. To analyze a priori the uniqueness of the solution of the estimation problem, structural identifiability analysis methods have been developed. We introduce GenSSI 2.0, an advancement of the software toolbox GenSSI (Generating Series for testing Structural Identifiability). GenSSI 2.0 is the first toolbox for structural identifiability analysis to implement Systems Biology Markup Language import, state/parameter transformations and multi-experiment structural identifiability analysis. In addition, GenSSI 2.0 supports a range of MATLAB versions and is computationally more efficient than its previous version, enabling the analysis of more complex models. GenSSI 2.0 is an open-source MATLAB toolbox, available at https://github.com/genssi-developer/GenSSI. Contact: thomas.ligon@physik.uni-muenchen.de or jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  11. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a
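
    The study back-calculated CN values from HSPF-simulated runoff; the sketch below only illustrates the standard NRCS relations involved, namely the forward runoff equation and the usual closed-form inversion of CN from a rainfall/runoff pair. Depths, the lambda = 0.2 initial-abstraction ratio, and the example storm are illustrative, not values from the study.

      # Illustrative sketch of the NRCS curve-number relations (depths in inches).
      from math import sqrt

      def runoff_depth(p_in, cn):
          """Direct runoff Q from storm depth P and curve number CN (SCS method)."""
          s = 1000.0 / cn - 10.0          # potential maximum retention
          ia = 0.2 * s                    # initial abstraction
          return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in - ia + s)

      def back_calculated_cn(p_in, q_in):
          """Invert the runoff equation for CN given an observed P and Q pair."""
          s = 5.0 * (p_in + 2.0 * q_in - sqrt(4.0 * q_in ** 2 + 5.0 * p_in * q_in))
          return 1000.0 / (s + 10.0)

      q = runoff_depth(3.0, 80)           # 1.25 in of runoff from a 3 in storm at CN = 80
      print(round(q, 2), round(back_calculated_cn(3.0, q), 1))   # recovers CN = 80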

  12. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    NASA Astrophysics Data System (ADS)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-03-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.

  13. Using Bar Representations as a Model for Connecting Concepts of Rational Number.

    ERIC Educational Resources Information Center

    Middleton, James A.; van den Heuvel-Panhuizen, Marja; Shew, Julia A.

    1998-01-01

    Examines bar models as graphical representations of rational numbers and presents related real life problems. Concludes that, through pairing the fraction bars with ratio tables and other ways of teaching numbers, numeric strategies become connected with visual strategies that allow students with diverse ways of thinking to share their…

  14. Identifying traits for genotypic adaptation using crop models.

    PubMed

    Ramirez-Villegas, Julian; Watson, James; Challinor, Andrew J

    2015-06-01

    Genotypic adaptation involves the incorporation of novel traits in crop varieties so as to enhance food productivity and stability and is expected to be one of the most important adaptation strategies to future climate change. Simulation modelling can provide the basis for evaluating the biophysical potential of crop traits for genotypic adaptation. This review focuses on the use of models for assessing the potential benefits of genotypic adaptation as a response strategy to projected climate change impacts. Some key crop responses to the environment, as well as the role of models and model ensembles for assessing impacts and adaptation, are first reviewed. Next, the review describes how crop-climate models can help focus the development of future-adapted crop germplasm in breeding programmes. While recently published modelling studies have demonstrated the potential of genotypic adaptation strategies and ideotype design, it is argued that, for model-based studies of genotypic adaptation to be used in crop breeding, it is critical that modelled traits are better grounded in genetic and physiological knowledge. To this aim, two main goals need to be pursued in future studies: (i) a better understanding of plant processes that limit productivity under future climate change; and (ii) a coupling between genetic and crop growth models, perhaps at the expense of the number of traits analysed. Importantly, the latter may imply additional complexity (and likely uncertainty) in crop modelling studies. Hence, appropriately constraining processes and parameters in models and a shift from simply quantifying uncertainty to actually quantifying robustness towards modelling choices are two key aspects that need to be included in future crop model-based analyses of genotypic adaptation. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  15. Time series model for forecasting the number of new admission inpatients.

    PubMed

    Zhou, Lingling; Zhao, Ping; Wu, Dongdong; Cheng, Cheng; Huang, Hao

    2018-06-15

    Hospital crowding is a rising problem; effective prediction and detection management can be helpful in reducing crowding. Our team has successfully proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in earlier forecasting studies of schistosomiasis and of hand, foot, and mouth disease. In this paper, our aim is to explore the application of the hybrid ARIMA-NARNN model to track the trends of new admission inpatients, which provides a methodological basis for reducing crowding. We used the single seasonal ARIMA (SARIMA), NARNN and the hybrid SARIMA-NARNN models to fit and forecast the monthly and daily numbers of new admission inpatients. The root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare forecasting performance among the three models. The monthly modeling time range was from January 2010 to June 2016, with July to October 2016 as the corresponding testing data set. The daily modeling data set was from January 4 to September 4, 2016, while the testing time range was from September 5 to October 2, 2016. For the monthly data, the modeling RMSE and the testing RMSE, MAE and MAPE of the SARIMA-NARNN model were less than those obtained from the single SARIMA or NARNN model, but the MAE and MAPE of the modeling performance of the SARIMA-NARNN model did not improve. For the daily data, all RMSE, MAE and MAPE of the NARNN model were the lowest in both the modeling and testing stages. A hybrid model does not necessarily outperform its constituent models. It is worth attempting to explore reliable models to forecast the number of new admission inpatients from different data.
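
    A minimal sketch of the hybrid idea follows: fit a seasonal ARIMA to the count series, then model its residuals with a small autoregressive neural network and add the two forecasts. The synthetic admission series, model orders, lag count, and network size are placeholders, not the specifications tuned in the study, and the NARNN is approximated here by a generic feed-forward regressor.

      # Sketch of a SARIMA + neural-network-residual hybrid forecast.
      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      months = np.arange(96)
      y = 500 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 96)  # admissions

      sarima = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
      resid = sarima.resid

      lags = 3                                   # feed the last 3 residuals to the network
      X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
      target = resid[lags:]
      nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, target)

      linear_part = sarima.forecast(steps=1)[0]
      nonlinear_part = nn.predict(resid[-lags:].reshape(1, -1))[0]
      print("hybrid one-step forecast:", round(linear_part + nonlinear_part, 1))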

  16. Beyond Natural Numbers: Negative Number Representation in Parietal Cortex

    PubMed Central

    Blair, Kristen P.; Rosenberg-Lee, Miriam; Tsang, Jessica M.; Schwartz, Daniel L.; Menon, Vinod

    2012-01-01

    Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than for positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions were differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representations are less distinct than those of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation. PMID:22363276

  17. Beyond natural numbers: negative number representation in parietal cortex.

    PubMed

    Blair, Kristen P; Rosenberg-Lee, Miriam; Tsang, Jessica M; Schwartz, Daniel L; Menon, Vinod

    2012-01-01

    Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than for positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions were differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representations are less distinct than those of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation.

  18. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    PubMed Central

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  19. Velocity Resolved---Scalar Modeled Simulations of High Schmidt Number Turbulent Transport

    NASA Astrophysics Data System (ADS)

    Verma, Siddhartha

    The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc ≫ 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor

  20. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  1. Revisiting Turbulence Model Validation for High-Mach Number Axisymmetric Compression Corner Flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Rumsey, Christopher L.; Huang, George P.

    2015-01-01

    Two axisymmetric shock-wave/boundary-layer interaction (SWBLI) cases are used to benchmark one- and two-equation Reynolds-averaged Navier-Stokes (RANS) turbulence models. This validation exercise was executed in the philosophy of the NASA Turbulence Modeling Resource and the AIAA Turbulence Model Benchmarking Working Group. Both SWBLI cases are from the experiments of Kussoy and Horstman for axisymmetric compression corner geometries with SWBLI-inducing flares of 20 and 30 degrees, respectively. The freestream Mach number was approximately 7. The RANS closures examined are the Spalart-Allmaras one-equation model and the Menter family of k-omega two-equation models, including the Baseline and Shear Stress Transport formulations. The Wind-US and CFL3D RANS solvers are employed to simulate the SWBLI cases. Comparisons of RANS solutions to experimental data are made for a boundary layer survey plane just upstream of the SWBLI region. In the SWBLI region, comparisons of surface pressure and heat transfer are made. The effects of inflow modeling strategy, grid resolution, grid orthogonality, turbulent Prandtl number, and code-to-code variations are also addressed.

  2. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
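
    The sketch below shows a simplified variant of the Morris elementary-effects calculation on a toy three-parameter function standing in for a climate model run; the actual study screened 27 CAM parameters with the full MOAT sampling design. Grid levels, trajectory count, and the toy response are invented for illustration.

      # Simplified Morris one-at-a-time (MOAT) screening; cost is r * (N + 1) runs, linear in N.
      import numpy as np

      rng = np.random.default_rng(0)
      N, r, levels = 3, 20, 4                       # parameters, trajectories, grid levels
      delta = levels / (2.0 * (levels - 1))

      def model(x):                                 # toy response with a nonlinear interaction
          return 3.0 * x[0] + 0.1 * x[1] + 5.0 * x[0] * x[2]

      effects = [[] for _ in range(N)]
      for _ in range(r):
          x = rng.integers(0, levels - 1, N) / (levels - 1)   # random base point on the grid
          y = model(x)
          for i in rng.permutation(N):              # perturb one factor at a time
              step = delta if x[i] + delta <= 1.0 else -delta
              x[i] += step
              y_new = model(x)
              effects[i].append((y_new - y) / step) # elementary effect of factor i
              y = y_new

      for i, ee in enumerate(effects):              # mu* ranks importance, sigma flags interactions
          print(f"x{i}: mu* = {np.mean(np.abs(ee)):.2f}, sigma = {np.std(ee):.2f}")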

  3. A Compartmental Model for Computing Cell Numbers in CFSE-based Lymphocyte Proliferation Assays

    DTIC Science & Technology

    2012-01-31

    ... of the “expected relative Kullback-Leibler distance” (information loss) when a model is used to describe a data set [23 ... deconvolution of the data into cell numbers, it cannot be used to accurately assess the number of cells in a particular generation. This information could be ... notation is meant to emphasize the dependence of the estimate on the particular data set used to fit the model. It should be noted that, rather ...

  4. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model

    PubMed Central

    2013-01-01

    Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. According to our knowledge, most of the available studies, that addressed the issue of malnutrition among under-five children, considered the categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative of other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data is extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children is analysed using various statistical techniques namely Chi-square test and GPR model. Results The GPR model (as compared to the standard Poisson regression and negative Binomial regression) is found to be justified to study the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identify several significant predictors of the outcome variable namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative of other statistical models to analyse the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
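
    A sketch of how such a count regression might be fit follows, assuming the GeneralizedPoisson class available in recent statsmodels releases. The synthetic predictors loosely mimic, but do not reproduce, the BDHS covariates, and the generated counts are ordinary Poisson rather than the under-dispersed counts reported in the study.

      # Sketch of a generalized Poisson regression on synthetic count data.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.discrete_model import GeneralizedPoisson

      rng = np.random.default_rng(0)
      n = 500
      mother_edu = rng.integers(0, 3, n)            # 0 = none, 1 = primary, 2 = secondary+
      wealth = rng.normal(0, 1, n)
      lam = np.exp(0.3 - 0.25 * mother_edu - 0.15 * wealth)
      y = rng.poisson(lam)                          # number of malnourished children per family

      # In the study the variance fell below the mean (under-dispersion), motivating GPR.
      print("mean =", y.mean().round(2), "variance =", y.var().round(2))

      X = sm.add_constant(np.column_stack([mother_edu, wealth]))
      fit = GeneralizedPoisson(y, X).fit(disp=False)
      print(fit.params.round(3))                    # coefficients plus the dispersion parameter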

  5. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
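
    The FMEA-style prioritization described above can be illustrated with a toy sketch: each potential error receives occurrence, severity, and non-detection scores, and their product (a risk priority number) ranks the errors for intervention. The error descriptions and scores below are invented, not the team's actual ratings.

      # Toy sketch of risk-priority-number ranking in a modified FMEA.
      potential_errors = [
          # (description, occurrence 1-10, severity 1-10, non-detection 1-10)
          ("needle inserted at wrong angle", 4, 7, 3),
          ("insufflation started before placement confirmed", 2, 9, 5),
          ("excessive insertion force applied", 5, 6, 2),
      ]

      ranked = sorted(potential_errors,
                      key=lambda e: e[1] * e[2] * e[3],   # RPN = occurrence x severity x non-detection
                      reverse=True)
      for desc, o, s, d in ranked:
          print(f"RPN {o * s * d:3d}  {desc}")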

  6. Identifying effective connectivity parameters in simulated fMRI: a direct comparison of switching linear dynamic system, stochastic dynamic causal, and multivariate autoregressive models

    PubMed Central

    Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry

    2013-01-01

    The number and variety of connectivity estimation methods are likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions is generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
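
    Of the three estimators compared above, the MAR model is the simplest to illustrate: a first-order multivariate autoregression can be fit by ordinary least squares to multi-region time series. The sketch below uses synthetic "region" signals and a generic least-squares fit; it is not the specific estimator or simulation protocol used in the paper, and DCM and sLDSf require full state-space machinery not reproduced here.

      # Sketch: least-squares fit of a first-order multivariate autoregressive (MAR) model.
      import numpy as np

      rng = np.random.default_rng(0)
      A_true = np.array([[0.5, 0.2, 0.0],
                         [0.0, 0.4, 0.3],
                         [0.1, 0.0, 0.6]])          # ground-truth connectivity matrix
      T, R = 500, 3
      x = np.zeros((T, R))
      for t in range(1, T):
          x[t] = A_true @ x[t - 1] + rng.normal(0, 0.1, R)

      # Solve x[t] ~ A x[t-1] across all time points.
      X_past, X_next = x[:-1], x[1:]
      A_hat, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
      print(np.round(A_hat.T, 2))                   # should approximate A_true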

  7. Computer simulation models as tools for identifying research needs: A black duck population model

    USGS Publications Warehouse

    Ringelman, J.K.; Longcore, J.R.

    1980-01-01

    Existing data on the mortality and production rates of the black duck (Anas rubripes) were used to construct a WATFIV computer simulation model. The yearly cycle was divided into 8 phases: hunting, wintering, reproductive, molt, post-molt, and juvenile dispersal mortality, and production from original and renesting attempts. The program computes population changes for sex and age classes during each phase. After completion of a standard simulation run with all variable default values in effect, a sensitivity analysis was conducted by changing each of 50 input variables, 1 at a time, to assess the responsiveness of the model to changes in each variable. Thirteen variables resulted in a substantial change in population level. Adult mortality factors were important during hunting and wintering phases. All production and mortality associated with original nesting attempts were sensitive, as was juvenile dispersal mortality. By identifying those factors which invoke the greatest population change, and providing an indication of the accuracy required in estimating these factors, the model helps to identify those variables which would be most profitable topics for future research.
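
    A minimal sketch of a phase-structured annual cycle like the one described is given below: survival rates are applied phase by phase, followed by production and juvenile dispersal mortality. The rates are invented placeholders, not the black duck estimates used in the actual model, and the real model tracks sex and age classes separately.

      # Toy phase-based annual-cycle population simulation.
      phases = {"hunting": 0.80, "wintering": 0.92, "reproductive": 0.95,
                "molt": 0.97, "post_molt": 0.98}
      young_per_adult = 1.1                      # net production, original plus renesting attempts
      juvenile_dispersal_survival = 0.6

      adults = 10_000.0
      for year in range(5):
          for survival in phases.values():       # mortality applied phase by phase
              adults *= survival
          recruits = adults * young_per_adult * juvenile_dispersal_survival
          adults += recruits
          print(f"year {year + 1}: {adults:,.0f} birds")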

  8. Identifying biological concepts from a protein-related corpus with a probabilistic topic model

    PubMed Central

    Zheng, Bin; McLean, David C; Lu, Xinghua

    2006-01-01

    Background Biomedical literature, e.g., MEDLINE, contains a wealth of knowledge regarding functions of proteins. Major recurring biological concepts within such text corpora represent the domains of this body of knowledge. The goal of this research is to identify the major biological topics/concepts from a corpus of protein-related MEDLINE© titles and abstracts by applying a probabilistic topic model. Results The latent Dirichlet allocation (LDA) model was applied to the corpus. Based on the Bayesian model selection, 300 major topics were extracted from the corpus. The majority of identified topics/concepts was found to be semantically coherent and most represented biological objects or concepts. The identified topics/concepts were further mapped to the controlled vocabulary of the Gene Ontology (GO) terms based on mutual information. Conclusion The major and recurring biological concepts within a collection of MEDLINE documents can be extracted by the LDA model. The identified topics/concepts provide parsimonious and semantically-enriched representation of the texts in a semantic space with reduced dimensionality and can be used to index text. PMID:16466569
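
    A minimal sketch of fitting an LDA topic model to a toy document collection with scikit-learn is shown below. The three placeholder sentences, the two-topic setting, and the library choice are assumptions for illustration; the study fitted 300 topics to protein-related MEDLINE titles and abstracts and then mapped topics to GO terms.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "kinase phosphorylates substrate in signal transduction pathway",
        "transcription factor binds promoter and regulates gene expression",
        "membrane receptor activates kinase cascade after ligand binding",
    ]  # placeholder texts, not the protein-related MEDLINE corpus

    counts = CountVectorizer(stop_words="english").fit(docs)
    X = counts.transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = counts.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = topic.argsort()[::-1][:5]
        print(f"topic {k}:", ", ".join(terms[i] for i in top))
    ```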

  9. Genome-wide screening identifies a KCNIP1 copy number variant as a genetic predictor for atrial fibrillation

    PubMed Central

    Tsai, Chia-Ti; Hsieh, Chia-Shan; Chang, Sheng-Nan; Chuang, Eric Y.; Ueng, Kwo-Chang; Tsai, Chin-Feng; Lin, Tsung-Hsien; Wu, Cho-Kai; Lee, Jen-Kuang; Lin, Lian-Yu; Wang, Yi-Chih; Yu, Chih-Chieh; Lai, Ling-Ping; Tseng, Chuen-Den; Hwang, Juey-Jen; Chiang, Fu-Tien; Lin, Jiunn-Lee

    2016-01-01

    Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia. Previous genome-wide association studies had identified single-nucleotide polymorphisms in several genomic regions to be associated with AF. In human genome, copy number variations (CNVs) are known to contribute to disease susceptibility. Using a genome-wide multistage approach to identify AF susceptibility CNVs, we here show a common 4,470-bp diallelic CNV in the first intron of potassium interacting channel 1 gene (KCNIP1) is strongly associated with AF in Taiwanese populations (odds ratio=2.27 for insertion allele; P=6.23 × 10−24). KCNIP1 insertion is associated with higher KCNIP1 mRNA expression. KCNIP1-encoded protein potassium interacting channel 1 (KCHIP1) is physically associated with potassium Kv channels and modulates atrial transient outward current in cardiac myocytes. Overexpression of KCNIP1 results in inducible AF in zebrafish. In conclusion, a common CNV in the KCNIP1 gene is a genetic predictor of AF risk, possibly pointing to a functional pathway. PMID:26831368

  10. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases.

    PubMed

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-08-01

    Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Hospital discharges from 11 Mayo Clinic hospitals during January 2008-September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. © Health Research and Educational Trust.
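
    The linkage evaluation reduces to standard sensitivity and positive predictive value computations once candidate links are compared against a gold standard, as in the small helper below; the counts are placeholders chosen only to illustrate the arithmetic, not the study's data.

    ```python
    def sensitivity_ppv(true_positives: int, false_positives: int, false_negatives: int):
        """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
        sensitivity = true_positives / (true_positives + false_negatives)
        ppv = true_positives / (true_positives + false_positives)
        return sensitivity, ppv

    # Placeholder counts for a hypothetical linkage run, not the study's results.
    sens, ppv = sensitivity_ppv(true_positives=980, false_positives=23, false_negatives=20)
    print(f"sensitivity={sens:.1%}  PPV={ppv:.1%}")
    ```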

  11. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases

    PubMed Central

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-01-01

    Objective Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Data Sources/Study Setting Hospital discharges from 11 Mayo Clinic hospitals during January 2008–September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Study Design Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. Data Collection/Extraction Methods De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Principal Findings Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Conclusion Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. PMID:26073819

  12. A model to identify high crash road segments with the dynamic segmentation method.

    PubMed

    Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan

    2014-12-01

    Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important issues. This paper aims at presenting a novel approach, capable of identifying the location as well as the length of high crash road segments. It focuses on the location of accidents occurring along the road and their effective regions. In other words, due to applicability and budget limitations in improving the safety of road segments, it is not possible to address all high crash road segments. Therefore, it is of utmost importance to identify high crash road segments and their real length to be able to prioritize safety improvements on roads. In this paper, after evaluating deficiencies of current road segmentation models, the different kinds of errors caused by these methods are addressed. One of the main deficiencies of these models is that they cannot identify the length of high crash road segments. In this paper, identifying the length of high crash road segments (corresponding to the arrangement of accidents along the road) is achieved by converting accident data to the road response signal of through traffic with a dynamic model based on wavelet theory. The significant advantage of the presented method is multi-scale segmentation. In other words, this model identifies high crash road segments with different lengths, and it can recognize small segments within long segments. Applying the presented model to a real case, identifying the top 10-20 percent of high crash road segments showed an improvement of 25-38 percent relative to existing methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Modeling Success: Using Preenrollment Data to Identify Academically At-Risk Students

    ERIC Educational Resources Information Center

    Gansemer-Topf, Ann M.; Compton, Jonathan; Wohlgemuth, Darin; Forbes, Greg; Ralston, Ekaterina

    2015-01-01

    Improving student success and degree completion is one of the core principles of strategic enrollment management. To address this principle, institutional data were used to develop a statistical model to identify academically at-risk students. The model employs multiple linear regression techniques to predict students at risk of earning below a…

  14. Use of autocorrelation scanning in DNA copy number analysis.

    PubMed

    Zhang, Liangcai; Zhang, Li

    2013-11-01

    Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and that the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called the autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. The autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets. lzhangli@mdanderson.org. Supplementary data are available at Bioinformatics online.
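
    The idea of an autocorrelation scanning profile can be sketched as a lag-1 autocorrelation computed in sliding windows over probe-level log-ratios, as below; the window size and the simulated error structures are assumptions, and the published tool may differ in detail.

    ```python
    import numpy as np

    def autocorr_scan(log_ratios: np.ndarray, window: int = 100) -> np.ndarray:
        """Lag-1 autocorrelation of probe log-ratios in sliding windows."""
        out = np.full(log_ratios.size, np.nan)
        for i in range(log_ratios.size - window):
            seg = log_ratios[i:i + window]
            x, y = seg[:-1] - seg[:-1].mean(), seg[1:] - seg[1:].mean()
            out[i] = (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum())
        return out

    rng = np.random.default_rng(1)
    independent = rng.standard_normal(1000)  # well-behaved probes
    correlated = np.convolve(rng.standard_normal(1000), np.ones(5) / 5, mode="same")  # correlated noise

    print("median autocorrelation, independent errors:", np.nanmedian(autocorr_scan(independent)).round(2))
    print("median autocorrelation, correlated errors :", np.nanmedian(autocorr_scan(correlated)).round(2))
    ```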

  15. Turbulence modeling and combustion simulation in porous media under high Peclet number

    NASA Astrophysics Data System (ADS)

    Moiseev, Andrey A.; Savin, Andrey V.

    2018-05-01

    Turbulence modelling in porous flows and combustion remains incompletely understood. Conventional turbulence models can undoubtedly work well at high Peclet numbers when the porous channel geometry is resolved in detail. Nevertheless, true turbulent mixing takes place at micro-scales only, while dispersion mixing acts at macro-scales almost independently of true turbulence. The dispersion mechanism is characterized by a definite space scale (the scale of the porous structure) and a definite velocity scale (the filtration velocity). The porous structure is usually stochastic, which allows an analogy between true turbulence (stochastic in space and time) and the dispersion flow (stochastic in space only) when porous flow is simulated at the macro-scale level. This analogy also allows well-known turbulent combustion models to be applied to simulations of porous combustion at high Peclet numbers.

  16. Low Reynolds number k-epsilon modelling with the aid of direct simulation data

    NASA Technical Reports Server (NTRS)

    Rodi, W.; Mansour, N. N.

    1993-01-01

    The constant C_mu and the near-wall damping function f_mu in the eddy-viscosity relation of the k-epsilon model are evaluated from direct numerical simulation (DNS) data for developed channel and boundary layer flow at two Reynolds numbers each. Various existing f_mu model functions are compared with the DNS data, and a new function is fitted to the high-Reynolds-number channel flow data. The epsilon-budget is computed for the fully developed channel flow. The relative magnitude of the terms in the epsilon-equation is analyzed with the aid of scaling arguments, and the parameter governing this magnitude is established. Models for the sum of all source and sink terms in the epsilon-equation are tested against the DNS data, and an improved model is proposed.

  17. Baseline recruitment and analyses of nonresponse of the Heinz Nixdorf Recall Study: identifiability of phone numbers as the major determinant of response.

    PubMed

    Stang, A; Moebus, S; Dragano, N; Beck, E M; Möhlenkamp, S; Schmermund, A; Siegrist, J; Erbel, R; Jöckel, K H

    2005-01-01

    The Heinz Nixdorf Recall Study is an ongoing population-based prospective cardiovascular cohort study of the Ruhr area in Germany. This paper focuses on the recruitment strategy and its response results including a comparison of participants of the baseline examination with nonparticipants. Random samples of the general population were drawn from residents' registration offices including men and women aged 45-74 years. We used a multimode contact approach including an invitational letter, a maximum of two reminder letters and phone calls for the recruitment of study subjects. Nonparticipants were asked to fill in a short questionnaire. We calculated proportions of response, contact, cooperation and recruitment efficacy to characterize the participation. Overall, 4487 eligible subjects participated in our study. Although the elderly (65-75 years) had the highest contact proportion, the cooperation proportion was the lowest among both men and women. The recruitment efficacy proportion was highest among subjects aged 55-64 years. The identifiability of the phone number of study subjects was an important determinant of response. The recruitment efficacy proportion among subjects without an identified phone number was 11.4% as compared to 65.3% among subjects with an identified phone number. The majority of subjects agreed to participate after one invitational letter only (52.6%). A second reminding letter contributed only very few participants to the study. Nonparticipants were more often current smokers than participants and less often belonged to the highest social class. Living in a regular relationship with a partner was more often reported among participants than nonparticipants.

  18. Using electroretinograms and multi-model inference to identify spectral classes of photoreceptors and relative opsin expression levels

    PubMed Central

    2017-01-01

    Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike’s information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. The modeling approach presented here will be useful in selecting the most likely

  19. Using electroretinograms and multi-model inference to identify spectral classes of photoreceptors and relative opsin expression levels.

    PubMed

    Lessios, Nicolas

    2017-01-01

    Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike's information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. The modeling approach presented here will be useful in selecting the most
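
    A small helper for the corrected Akaike information criterion used in such model comparisons is sketched below, using the residual-sum-of-squares form that assumes Gaussian errors; the candidate fits and observation count are hypothetical and not taken from the paper.

    ```python
    import numpy as np

    def aicc(rss: float, n_obs: int, n_params: int) -> float:
        """AICc for a least-squares fit with Gaussian errors."""
        aic = n_obs * np.log(rss / n_obs) + 2 * n_params
        return aic + 2 * n_params * (n_params + 1) / (n_obs - n_params - 1)

    # Compare hypothetical fits with 1-3 photoreceptor classes to the same sensitivity data.
    fits = {"1 class": (0.80, 3), "2 classes": (0.21, 5), "3 classes": (0.19, 7)}
    n_obs = 40  # number of wavelengths measured (illustrative)

    scores = {name: aicc(rss, n_obs, k) for name, (rss, k) in fits.items()}
    best = min(scores, key=scores.get)
    print({k: round(v, 1) for k, v in scores.items()}, "-> best:", best)
    ```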

  20. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    PubMed

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for

  1. Drosophila Cancer Models Identify Functional Differences between Ret Fusions.

    PubMed

    Levinson, Sarah; Cagan, Ross L

    2016-09-13

    We generated and compared Drosophila models of RET fusions CCDC6-RET and NCOA4-RET. Both RET fusions directed cells to migrate, delaminate, and undergo EMT, and both resulted in lethality when broadly expressed. In all phenotypes examined, NCOA4-RET was more severe than CCDC6-RET, mirroring their effects on patients. A functional screen against the Drosophila kinome and a library of cancer drugs found that CCDC6-RET and NCOA4-RET acted through different signaling networks and displayed distinct drug sensitivities. Combining data from the kinome and drug screens identified the WEE1 inhibitor AZD1775 plus the multi-kinase inhibitor sorafenib as a synergistic drug combination that is specific for NCOA4-RET. Our work emphasizes the importance of identifying and tailoring a patient's treatment to their specific RET fusion isoform and identifies a multi-targeted therapy that may prove effective against tumors containing the NCOA4-RET fusion. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. A net reproductive number for periodic matrix models.

    PubMed

    Cushing, J M; Ackleh, A S

    2012-01-01

    We give a definition of a net reproductive number R0 for periodic matrix models of the type used to describe the dynamics of a structured population with periodic parameters. The definition is based on the familiar method of studying a periodic map by means of its (period-length) composite. This composite has an additive decomposition that permits a generalization of the Cushing-Zhou definition of R0 in the autonomous case. The value of R0 determines whether the population goes extinct (R0<1) or persists (R0>1). We discuss the biological interpretation of this definition and derive formulas for R0 for two cases: scalar periodic maps of arbitrary period and periodic Leslie models of period 2. We illustrate the use of the definition by means of several examples and by applications to case studies found in the literature. We also make some comparisons of this definition of R0 with another definition given recently by Bacaër.
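
    For the autonomous case mentioned above, the Cushing-Zhou net reproductive number is the spectral radius of F (I - T)^-1, where the projection matrix is split into fertilities F and transitions T; the sketch below computes it for a made-up two-stage model. The periodic case applies the same decomposition to the period-length composite map, which is not shown here.

    ```python
    import numpy as np

    def net_reproductive_number(F: np.ndarray, T: np.ndarray) -> float:
        """Cushing-Zhou R0: spectral radius of F (I - T)^-1 for projection matrix P = F + T."""
        n = F.shape[0]
        Q = F @ np.linalg.inv(np.eye(n) - T)
        return max(abs(np.linalg.eigvals(Q)))

    # Illustrative 2-stage model: juveniles and adults (all values made up).
    F = np.array([[0.0, 1.4],    # adults produce 1.4 juveniles per time step
                  [0.0, 0.0]])
    T = np.array([[0.0, 0.0],
                  [0.5, 0.8]])   # 50% juvenile survival to adulthood, 80% adult survival

    R0 = net_reproductive_number(F, T)
    print(f"R0 = {R0:.2f} ->", "persists" if R0 > 1 else "goes extinct")
    ```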

  3. Identifiability of tree-child phylogenetic networks under a probabilistic recombination-mutation model of evolution.

    PubMed

    Francis, Andrew; Moulton, Vincent

    2018-06-07

    Phylogenetic networks are an extension of phylogenetic trees which are used to represent evolutionary histories in which reticulation events (such as recombination and hybridization) have occurred. A central question for such networks is that of identifiability, which essentially asks under what circumstances can we reliably identify the phylogenetic network that gave rise to the observed data? Recently, identifiability results have appeared for networks relative to a model of sequence evolution that generalizes the standard Markov models used for phylogenetic trees. However, these results are quite limited in terms of the complexity of the networks that are considered. In this paper, by introducing an alternative probabilistic model for evolution along a network that is based on some ground-breaking work by Thatte for pedigrees, we are able to obtain an identifiability result for a much larger class of phylogenetic networks (essentially the class of so-called tree-child networks). To prove our main theorem, we derive some new results for identifying tree-child networks combinatorially, and then adapt some techniques developed by Thatte for pedigrees to show that our combinatorial results imply identifiability in the probabilistic setting. We hope that the introduction of our new model for networks could lead to new approaches to reliably construct phylogenetic networks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. From Connectivity Models to Region Labels: Identifying Foci of a Neurological Disorder

    PubMed Central

    Venkataraman, Archana; Kubicki, Marek; Golland, Polina

    2014-01-01

    We propose a novel approach to identify the foci of a neurological disorder based on anatomical and functional connectivity information. Specifically, we formulate a generative model that characterizes the network of abnormal functional connectivity emanating from the affected foci. This allows us to aggregate pairwise connectivity changes into a region-based representation of the disease. We employ the variational expectation-maximization algorithm to fit the model and subsequently identify both the afflicted regions and the differences in connectivity induced by the disorder. We demonstrate our method on a population study of schizophrenia. PMID:23864168

  5. Type 2 diabetes mellitus disease risk genes identified by genome wide copy number variation scan in normal populations.

    PubMed

    Prabhanjan, Manasa; Suresh, Raviraj V; Murthy, Megha N; Ramachandra, Nallur B

    2016-03-01

    To identify the role of copy number variations (CNVs) on disease risk genes and its effect on disease phenotypes in type 2 diabetes mellitus (T2DM) in 12 random populations using high throughput arrays. CNV analysis was carried out on a total of 1715 individuals from 12 populations, from ArrayExpress Archive of the European Bioinformatics Institute along with our subjects using Affymetrix Genome Wide SNP 6.0 array. CNV effect on T2DM genes were analyzed using several bioinformatics tools and a molecular protein interaction network was constructed to identify the disease mechanism altered by the CNVs. Analysis showed 34.4% of the total population to be under CNV burden for T2DM, with 83 disease causal and associated genes being under CNV influence. Hotspots were identified on chromosomes 22, 12, 6, 19 and 11. Overlap studies with case cohorts revealed significant disease risk genes such as EGFR, E2F1, PPP1R3A, HLA and TSPAN8. CNVs play a significant role in predisposing T2DM in normal cohorts and contribute to the phenotypic effects. Thus, CNVs should be considered as one of the major contributors in predisposition of the disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Fermion number of twisted kinks in the NJL2 model revisited

    NASA Astrophysics Data System (ADS)

    Thies, Michael

    2018-03-01

    As a consequence of axial current conservation, fermions cannot be bound in localized lumps in the massless Nambu-Jona-Lasinio model. In the case of twisted kinks, this manifests itself in a cancellation between the valence fermion density and the fermion density induced in the Dirac sea. To attribute the correct fermion number to these bound states requires an infrared regularization. Recently, this has been achieved by introducing a bare fermion mass, at least in the nonrelativistic regime of small twist angles and fermion numbers. Here, we propose a simpler regularization using a finite box which preserves integrability and can be applied at any twist angle. A consistent and physically plausible assignment of fermion number to all twisted kinks emerges.

  7. Application of latent growth modeling to identify different working life trajectories: the case of the Spanish WORKss cohort.

    PubMed

    Serra, Laura; López Gómez, María Andrée; Sanchez-Niubo, Albert; Delclos, George L; Benavides, Fernando G

    2017-01-01

    Objective The aim of this study was to describe the application of latent class growth analysis (LCGA) to identify different working life trajectories (WLT) using employed working time by year as a repeated measure. Methods Trajectories are estimated using LCGA, which considers all individuals within a trajectory to be homogeneous. The methodology was applied to a subsample of the Spanish WORKing life Social Security (WORKss) cohort, limited to persons born 1956-1965 (N=247 475). The number of days worked per year is used as a repeated measure across 32 time points (1981-2013). Results According to the model-fit results and further guided by expert knowledge, a four-WLT model was selected as the optimal approach: WLT1 or "high labor force participation" (N=99 591; 40.2%); WLT2 or "decreased labor force participation" (N=22 846; 9.2%); WLT3 or "increased labor force participation" (N=59 213; 23.9%); and WLT4 or "low labor force participation" (N=65 827; 26.6%). WLT1 consisted mainly of men with more years of work experience (>19 years) while WLT4 was mainly composed of women with <9 years. The other two trajectories had opposite trends and no sex differences. The occupational category variable had little influence on the trajectories. Conclusions Longitudinal data that are regularly collected by administrative systems can benefit from LCGA approaches to identify different trajectory patterns that may be associated with an outcome of interest. In occupational epidemiology, this study represents a step forward by using this modeling approach to identify different WLT.

  8. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969

  9. Identifying and modeling the structural discontinuities of human interactions

    NASA Astrophysics Data System (ADS)

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders, which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.

  10. Identifying and modeling the structural discontinuities of human interactions

    PubMed Central

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-01-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders, which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales. PMID:28443647

  11. Identifying and modeling the structural discontinuities of human interactions.

    PubMed

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-26

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same line, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders, which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.
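
    A conceptual sketch of the kind of modification the authors argue for, adding a border-damping factor to a standard gravity interaction model, is given below; the functional form, distance exponent, and damping value are illustrative assumptions, not the fitted model from the paper.

    ```python
    def gravity_interaction(pop_i, pop_j, distance_km, same_region: bool,
                            decay: float = 2.0, border_damping: float = 0.5):
        """Expected interaction volume: gravity term, damped when a border is crossed."""
        base = pop_i * pop_j / distance_km ** decay
        return base if same_region else border_damping * base

    # Two city pairs at equal distance; crossing a regional border halves the expected calls.
    print(gravity_interaction(2e5, 3e5, 50.0, same_region=True))
    print(gravity_interaction(2e5, 3e5, 50.0, same_region=False))
    ```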

  12. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
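
    The identifiability metric at the heart of this framework can be illustrated with a Fisher information matrix built from finite-difference output sensitivities, as in the toy sketch below; the two-parameter exponential response stands in for an actual battery model, so the parameter names and values are purely hypothetical.

    ```python
    import numpy as np

    def model_output(theta, t):
        # Stand-in for a battery voltage model: y = theta0 * exp(-theta1 * t)
        return theta[0] * np.exp(-theta[1] * t)

    def fisher_information(theta, t, sigma=0.01, eps=1e-6):
        """FIM = S^T S / sigma^2, with S the finite-difference sensitivity matrix."""
        y0 = model_output(theta, t)
        S = np.empty((t.size, len(theta)))
        for j in range(len(theta)):
            th = np.array(theta, dtype=float)
            th[j] += eps
            S[:, j] = (model_output(th, t) - y0) / eps
        return S.T @ S / sigma**2

    t = np.linspace(0, 5, 50)
    F = fisher_information([1.0, 0.5], t)
    # A larger determinant (D-optimality) indicates a more identifiable input/experiment design.
    print("log det FIM =", np.log(np.linalg.det(F)).round(2))
    ```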

  13. Identifying pleiotropic genes in genome-wide association studies from related subjects using the linear mixed model and Fisher combination function.

    PubMed

    Yang, James J; Williams, L Keoki; Buu, Anne

    2017-08-24

    A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
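
    The second-step statistic is Fisher's combination of per-phenotype p-values, sketched below; the p-values are placeholders, and the study's key refinement, estimating the statistic's null distribution from the correlation of linear mixed-model residuals rather than assuming independence, is only noted in a comment.

    ```python
    import numpy as np
    from scipy import stats

    def fisher_combination(p_values):
        """Fisher's method: X = -2 * sum(log p), chi-square with 2k df under independence."""
        p = np.asarray(p_values, dtype=float)
        x = -2.0 * np.log(p).sum()
        # NOTE: the paper replaces this independence chi-square null with a null distribution
        # estimated from the correlation of linear mixed-model residuals across phenotypes.
        return x, stats.chi2.sf(x, df=2 * p.size)

    stat, p_combined = fisher_combination([0.04, 0.20, 0.008, 0.15])
    print(f"X = {stat:.2f}, combined p = {p_combined:.4f}")
    ```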

  14. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    PubMed

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.

  15. Can the vector space model be used to identify biological entity activities?

    PubMed Central

    2011-01-01

    Background Biological systems are commonly described as networks of entity interactions. Some interactions are already known and integrate the current knowledge in life sciences. Others remain unknown for long periods of time and are frequently discovered by chance. In this work we present a model to predict these unknown interactions from a textual collection using the vector space model (VSM), a well-known and established information retrieval model. We have extended the VSM ability to retrieve information using a transitive closure approach. Our objective is to use the VSM to identify the known interactions from the literature and construct a network. Based on interactions established in the network, our model applies the transitive closure in order to predict and rank new interactions. Results We have tested and validated our model using a collection of patent claims issued from 1976 to 2005. From 266,528 possible interactions in our network, the model identified 1,027 known interactions and predicted 3,195 new interactions. Iterating the model according to patent issue dates, interactions found in a given past year were often confirmed by patent claims not in the collection and issued in more recent years. Most confirmation patent claims were found among the top 100 new interactions obtained from each subnetwork. We have also found papers on the Web which confirm newly inferred interactions. For instance, the best new interaction inferred by our model relates the adrenaline neurotransmitter to the androgen receptor gene. We have found a paper that reports the partial dependence of the antiapoptotic effect of adrenaline on androgen receptor. Conclusions The VSM extended with a transitive closure approach provides a good way to identify biological interactions from textual collections. Specifically for the context of literature-based discovery, the extended VSM contributes to identifying and ranking relevant new interactions even if these

  16. Identifying western yellow-billed cuckoo breeding habitat with a dual modelling approach

    USGS Publications Warehouse

    Johnson, Matthew J.; Hatten, James R.; Holmes, Jennifer A.; Shafroth, Patrick B.

    2017-01-01

    The western population of the yellow-billed cuckoo (Coccyzus americanus) was recently listed as threatened under the federal Endangered Species Act. Yellow-billed cuckoo conservation efforts require the identification of features and area requirements associated with high-quality riparian forest habitat at spatial scales that range from nest microhabitat to landscape, as well as lower-suitability areas that can be enhanced or restored. Spatially explicit models inform conservation efforts by increasing ecological understanding of a target species, especially at landscape scales. Previous yellow-billed cuckoo modelling efforts derived plant-community maps from aerial photography, an expensive and oftentimes inconsistent approach. Satellite models can remotely map vegetation features (e.g., vegetation density, heterogeneity in vegetation density or structure) across large areas with near perfect repeatability, but they usually cannot identify plant communities. We used aerial photos and satellite imagery, and a hierarchical spatial scale approach, to identify yellow-billed cuckoo breeding habitat along the Lower Colorado River and its tributaries. Aerial-photo and satellite models identified several key features associated with yellow-billed cuckoo breeding locations: (1) a 4.5 ha core area of dense cottonwood-willow vegetation, (2) a large native, heterogeneously dense forest (72 ha) around the core area, and (3) moderately rough topography. The odds of yellow-billed cuckoo occurrence decreased rapidly as the amount of tamarisk cover increased or when cottonwood-willow vegetation was limited. After updating the imagery and location data the following year, we achieved model accuracies of 75–80% in the project area. The two model types had very similar probability maps, largely predicting the same areas as high quality habitat. While each model provided unique information, a dual-modelling approach provided a more complete picture of yellow-billed cuckoo habitat

  17. Tomography of atomic number and density of materials using dual-energy imaging and the Alvarez and Macovski attenuation model

    NASA Astrophysics Data System (ADS)

    Paziresh, M.; Kingston, A. M.; Latham, S. J.; Fullagar, W. K.; Myers, G. M.

    2016-06-01

    Dual-energy computed tomography and the Alvarez and Macovski [Phys. Med. Biol. 21, 733 (1976)] transmitted intensity (AMTI) model were used in this study to estimate the maps of density (ρ) and atomic number (Z) of mineralogical samples. In this method, the attenuation coefficients are represented [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)] in the form of the two most important interactions of X-rays with atoms that is, photoelectric absorption (PE) and Compton scattering (CS). This enables material discrimination as PE and CS are, respectively, dependent on the atomic number (Z) and density (ρ) of materials [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)]. Dual-energy imaging is able to identify sample materials even if the materials have similar attenuation coefficients at single-energy spectrum. We use the full model rather than applying one of several applied simplified forms [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976); Siddiqui et al., SPE Annual Technical Conference and Exhibition (Society of Petroleum Engineers, 2004); Derzhi, U.S. patent application 13/527,660 (2012); Heismann et al., J. Appl. Phys. 94, 2073-2079 (2003); Park and Kim, J. Korean Phys. Soc. 59, 2709 (2011); Abudurexiti et al., Radiol. Phys. Technol. 3, 127-135 (2010); and Kaewkhao et al., J. Quant. Spectrosc. Radiat. Transfer 109, 1260-1265 (2008)]. This paper describes the tomographic reconstruction of ρ and Z maps of mineralogical samples using the AMTI model. The full model requires precise knowledge of the X-ray energy spectra and calibration of PE and CS constants and exponents of atomic number and energy that were estimated based on fits to simulations and calibration measurements. The estimated ρ and Z images of the samples used in this paper yield average relative errors of 2.62% and 1.19% and maximum relative errors of 2.64% and 7.85%, respectively. Furthermore, we demonstrate that the method accounts for the beam hardening effect in density (ρ) and
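
    A heavily simplified sketch of the underlying idea, solving per voxel for photoelectric and Compton components from two energy measurements and converting them to an effective atomic number and a relative density, is shown below. The constants, the 1/E^3 photoelectric dependence, and the crude Klein-Nishina placeholder follow the commonly used simplified form, not the paper's calibrated full AMTI model with measured spectra.

    ```python
    import numpy as np

    # Simplified Alvarez-Macovski-style decomposition (not the paper's calibrated full model):
    #   mu(E) ~ rho * (a * Z**n / E**3 + b * f_KN(E))
    a, b, n = 1.0, 1.0, 3.0   # illustrative calibration constants
    E1, E2 = 40.0, 80.0       # keV; two effective energies standing in for full spectra

    def f_kn(E):
        # Crude placeholder for the Klein-Nishina energy dependence of Compton scattering.
        return 1.0 / (1.0 + E / 511.0)

    def decompose(mu1, mu2):
        # Per-voxel 2x2 linear solve: mu_i = pe / E_i**3 + cs * f_kn(E_i)
        M = np.array([[1.0 / E1**3, f_kn(E1)],
                      [1.0 / E2**3, f_kn(E2)]])
        pe, cs = np.linalg.solve(M, np.array([mu1, mu2]))
        z_eff = (pe * b / (cs * a)) ** (1.0 / n)  # from pe/cs = (a/b) * Z**n
        rho_rel = cs / b                          # the Compton term tracks (electron) density
        return z_eff, rho_rel

    print(decompose(mu1=0.45, mu2=0.22))  # illustrative attenuation values only
    ```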

  18. Modeling users' activity on Twitter networks: validation of Dunbar's number

    NASA Astrophysics Data System (ADS)

    Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2012-02-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the ``economy of attention'' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.

  19. Identifying musical pieces from fMRI data using encoding and decoding models.

    PubMed

    Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge

    2018-02-02

    Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.

  20. On the drag of model dendrite fragments at low Reynolds number

    NASA Technical Reports Server (NTRS)

    Zakhem, R.; Weidman, P. D.; Degroh, H. C., III

    1993-01-01

    An experimental study of low Reynolds number drag on laboratory models of dendrite fragments has been conducted. The terminal velocities of the dendrites undergoing free fall along their axis of symmetry were measured in a large Stokes flow facility. Corrections for wall interference give nearly linear drag vs Reynolds number curves. Corrections for both wall interference and inertia effects show that the dendrite Stokes settling velocities are always less than that of a sphere of equal mass and volume. In the Stokes limit, the settling speed ratio is found to correlate well with primary dendrite arm aspect ratio and a second dimensionless shape parameter, which serves as a measure of the fractal-like nature of the dendrite models. These results can be used to estimate equiaxed grain velocities and distance of travel in metal castings. The drag measurements may be used in numerical codes to calculate the movement of grains in a convecting melt in an effort to determine macrosegregation patterns caused by the sink/float mechanism.
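
    The reference quantity in such comparisons is the Stokes terminal velocity of a sphere of equal mass and volume, sketched below; the material properties are illustrative, and the wall-interference and inertia corrections applied in the study are not reproduced.

    ```python
    def stokes_settling_velocity(radius_m, rho_particle, rho_fluid, mu_fluid, g=9.81):
        """Terminal velocity in creeping (Stokes) flow: v = 2 (rho_p - rho_f) g a^2 / (9 mu)."""
        return 2.0 * (rho_particle - rho_fluid) * g * radius_m ** 2 / (9.0 * mu_fluid)

    # Illustrative values only (a small dense sphere in a viscous oil, not the study's models).
    v = stokes_settling_velocity(radius_m=1e-3, rho_particle=7800.0, rho_fluid=960.0, mu_fluid=1.0)
    print(f"{v * 1000:.2f} mm/s")  # dendrite models settle more slowly than this equivalent sphere
    ```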

  1. A statistical approach to identify, monitor, and manage incomplete curated data sets.

    PubMed

    Howe, Douglas G

    2018-04-02

    Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete and how incomplete they are remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation, to avoid flawed hypothesis generation, and can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed. One such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incomplete curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model. The remaining 7387 genes were used to test the model. Of the 7387 tested genes, 122 and 165 were identified as missing expression annotations based on their residuals falling outside the model's lower or upper 95% confidence interval, respectively. The model had precision of 0.97 and recall of 0.71 at the negative 95% confidence interval and precision of 0.76 and recall of 0.73 at the positive 95% confidence interval. This method can be used to
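
    The flagging step can be sketched as fitting a regression on publication-derived features and marking genes whose residuals fall outside a prediction band, as below; the synthetic features, the band, and the under-curation offset are placeholders for the ZFIN features and 95% confidence intervals described above.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_genes = 500

    # Placeholder features: total publications and publications already curated for expression.
    pubs = rng.poisson(20, n_genes)
    curated_pubs = rng.binomial(pubs, 0.6)
    X = np.column_stack([pubs, curated_pubs])

    # Placeholder target: expression experiments per gene, with some genes under-curated.
    y = 0.8 * curated_pubs + rng.normal(0, 2, n_genes)
    y[:25] -= 15  # simulate genes missing annotations

    model = LinearRegression().fit(X, y)
    residuals = y - model.predict(X)
    half_width = 1.96 * residuals.std()  # crude 95% band on residuals

    flagged = np.where(residuals < -half_width)[0]
    print(f"{flagged.size} genes flagged as likely missing expression annotations")
    ```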

  2. Kidney disease models: tools to identify mechanisms and potential therapeutic targets

    PubMed Central

    Bao, Yin-Wu; Yuan, Yuan; Chen, Jiang-Hua; Lin, Wei-Qiang

    2018-01-01

    Acute kidney injury (AKI) and chronic kidney disease (CKD) are worldwide public health problems affecting millions of people and have rapidly increased in prevalence in recent years. Due to the multiple causes of renal failure, many animal models have been developed to advance our understanding of human nephropathy. Among these experimental models, rodents have been extensively used to enable mechanistic understanding of kidney disease induction and progression, as well as to identify potential targets for therapy. In this review, we discuss AKI models induced by surgical operation and drugs or toxins, as well as a variety of CKD models (mainly genetically modified mouse models). Results from recent and ongoing clinical trials and conceptual advances derived from animal models are also explored. PMID:29515089

  3. Identifying Multiple Levels of Discussion-Based Teaching Strategies for Constructing Scientific Models

    ERIC Educational Resources Information Center

    Williams, Grant; Clement, John

    2015-01-01

    This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…

  4. Beyond the Mental Number Line: A Neural Network Model of Number-Space Interactions

    ERIC Educational Resources Information Center

    Chen, Qi; Verguts, Tom

    2010-01-01

    It is commonly assumed that there is an interaction between the representations of number and space (e.g., [Dehaene et al., 1993] and [Walsh, 2003]), typically ascribed to a mental number line. The exact nature of this interaction has remained elusive, however. Here we propose that spatial aspects are not inherent to number representations, but…

  5. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that, by the Rayleigh criterion, they appear as a single, unresolved object in images. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.

  6. Modeling Users' Activity on Twitter Networks: Validation of Dunbar's Number

    PubMed Central

    Gonçalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2011-01-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior. PMID:21826200

  7. Next-Generation Sequencing-based genomic profiling of brain metastases of primary ovarian cancer identifies high number of BRCA-mutations.

    PubMed

    Balendran, S; Liebmann-Reindl, S; Berghoff, A S; Reischer, T; Popitsch, N; Geier, C B; Kenner, L; Birner, P; Streubel, B; Preusser, M

    2017-07-01

    Ovarian cancer represents the most common gynaecological malignancy and has the highest mortality of all female reproductive cancers. It has a rare predilection to develop brain metastases (BM). In this study, we evaluated the mutational profile of ovarian cancer metastases through Next-Generation Sequencing (NGS) with the aim of identifying potential clinically actionable genetic alterations with options for small molecule targeted therapy. Library preparation was conducted using the Illumina TruSight Rapid Capture Kit in combination with a cancer-specific enrichment kit covering 94 genes. BRCA mutations were confirmed using the TruSeq Custom Amplicon Low Input Kit in combination with a custom-designed BRCA gene panel. In our cohort, all eight sequenced BM samples exhibited a multitude of variant alterations, each with a unique molecular profile. The 37 identified variants were distributed over 22 cancer-related genes (23.4%). The number of mutated genes per sample ranged from 3 to 7 with a median of 4.5. The most commonly altered genes were BRCA1/2, TP53, and ATM. In total, 7 out of 8 samples revealed either a BRCA1 or a BRCA2 pathogenic mutation. Furthermore, all eight BM samples showed mutations in at least one DNA repair gene. Our NGS study of BM of ovarian carcinoma revealed a significant number of BRCA mutations besides TP53, ATM, and CHEK2 mutations. These findings strongly suggest the implication of BRCA and DNA repair malfunction in ovarian cancer metastasizing to the brain. Based on these findings, pharmacological PARP inhibition could be one potential targeted therapeutic for brain metastatic ovarian cancer patients.

  8. Report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.N.

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  9. Identifying optimum performance trade-offs using a cognitively bounded rational analysis model of discretionary task interleaving.

    PubMed

    Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew

    2011-01-01

    We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.

  10. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.

  11. Rapid, non-invasive imaging of alphaviral brain infection: reducing animal numbers and morbidity to identify efficacy of potential vaccines and antivirals.

    PubMed

    Patterson, Michael; Poussard, Allison; Taylor, Katherine; Seregin, Alexey; Smith, Jeanon; Peng, Bi-Hung; Walker, Aida; Linde, Jenna; Smith, Jennifer; Salazar, Milagros; Paessler, Slobodan

    2011-11-21

    Rapid and accurate identification of disease progression is a key factor in testing novel vaccines and antivirals against encephalitic alphaviruses. Typical efficacy studies utilize a large number of animals and severe morbidity or mortality as an endpoint. New technologies provide a means to reduce and refine animal use as proposed in Hume's 3Rs (replacement, reduction, refinement) described by Russell and Burch. In vivo imaging systems (IVIS) and bioluminescent enzyme technologies accomplish the reduction of animal requirements while shortening the experimental time and improving the accuracy in localizing active virus replication. In the case of murine models of viral encephalitis, in which central nervous system (CNS) viral invasion occurs rapidly but disease development is relatively slow, we visualized the initial brain infection and enhanced the data collection process required for efficacy studies on antivirals or vaccines that are aimed at preventing brain infection. Accordingly, we infected mice through intranasal inoculation with a genetically modified pathogen, Venezuelan equine encephalitis virus, which expresses a luciferase gene. In this study, we were able to identify invasion of the CNS at least 3 days before any clinical signs of disease, allowing for a reduction in animal morbidity and providing a humane means of disease and vaccine research while obtaining scientific data accurately and more rapidly. Based on our data from the imaging model, we confirmed the usefulness of this technology in preclinical research by demonstrating the efficacy of Ampligen, a TLR-3 agonist, in preventing CNS invasion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Modelling the regional application of stakeholder identified land management strategies.

    NASA Astrophysics Data System (ADS)

    Irvine, B. J.; Fleskens, L.; Kirkby, M. J.

    2012-04-01

    The DESIRE project has trialled a series of sustainable land management (SLM) technologies. These technologies have been identified as beneficial in mitigating land degradation by local stakeholders from a range of semi-arid study sites. The field results and the qualitative WOCAT technology assessments from across the study sites have been used to develop the adapted PESERA SLM model. This paper considers the development of the adapted PESERA SLM model and the potential for applying locally successful SLM technologies across a wider range of climatic and environmental conditions with respect to degradation risk, biomass production, and the investment cost interface (PESERA/DESMICE). The integrated PESERA/DESMICE model contributes to the policy debate by providing a biophysical and socio-economic assessment of technology and policy scenarios.

  13. Hamiltonian identifiability assisted by a single-probe measurement

    NASA Astrophysics Data System (ADS)

    Sone, Akira; Cappellaro, Paola

    2017-02-01

    We study the Hamiltonian identifiability of a many-body spin-1/2 system assisted by the measurement on a single quantum probe based on the eigensystem realization algorithm approach employed in Zhang and Sarovar, Phys. Rev. Lett. 113, 080401 (2014), 10.1103/PhysRevLett.113.080401. We demonstrate a potential application of the Gröbner basis to the identifiability test of the Hamiltonian, and provide the necessary experimental resources, such as the lower bound on the number of required sampling points and the upper bound on the total required evolution time, and thus the total measurement time. Focusing on examples of identifiability in the spin-chain model with nearest-neighbor interaction, we classify the spin-chain Hamiltonian based on its identifiability, and provide control protocols to engineer a nonidentifiable Hamiltonian into an identifiable one.

  14. High Reynolds number analysis of flat plate and separated afterbody flow using non-linear turbulence models

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1996-01-01

    The ability of the three-dimensional Navier-Stokes method, PAB3D, to simulate the effect of Reynolds number variation using non-linear explicit algebraic Reynolds stress turbulence modeling was assessed. Subsonic flat plate boundary-layer flow parameters such as normalized velocity distributions, local and average skin friction, and shape factor were compared with DNS calculations and classical theory at various local Reynolds numbers up to 180 million. Additionally, surface pressure coefficient distributions and integrated drag predictions on an axisymmetric nozzle afterbody were compared with experimental data at Reynolds numbers from 10 to 130 million. The high Reynolds number data were obtained from the NASA Langley 0.3m Transonic Cryogenic Tunnel. There was generally good agreement of surface static pressure coefficients between the CFD results and the measurements. The change in pressure coefficient distributions with varying Reynolds number was similar to the experimental data trends, though slightly over-predicting the effect. The computational sensitivities of viscous modeling and turbulence modeling are shown. Integrated afterbody pressure drag was typically slightly lower than the experimental data. The change in afterbody pressure drag with Reynolds number was small both experimentally and computationally, even though the shape of the distribution was somewhat modified with Reynolds number.

  15. The relationship between trading volumes, number of transactions, and stock volatility in GARCH models

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya; Chen, Ting Ting

    2016-08-01

    We examine the relationship between trading volumes, the number of transactions, and volatility using daily stock data from the Tokyo Stock Exchange. Following the mixture of distributions hypothesis, we use trading volumes and the number of transactions as proxies for the rate of information arrivals affecting stock volatility. The impact of trading volumes or the number of transactions on volatility is measured using the generalized autoregressive conditional heteroscedasticity (GARCH) model. We find that the GARCH effects, that is, the persistence of volatility, are not always removed by adding trading volumes or the number of transactions, indicating that trading volumes and the number of transactions do not adequately represent the rate of information arrivals.
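
    One common way to implement this kind of test is to add the exogenous series (volume or transaction counts) to the GARCH(1,1) variance equation and check whether the estimated persistence alpha + beta falls. The snippet below is an illustrative quasi-maximum-likelihood sketch assuming normal innovations and demeaned returns; it is not the authors' exact specification, and the starting values and bounds are arbitrary choices.

```python
# GARCH(1,1) with an exogenous term in the variance equation, fitted by QMLE.
import numpy as np
from scipy.optimize import minimize

def garch_x_negloglik(params, returns, exog):
    """Gaussian quasi-likelihood for sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1} + gamma*x_t."""
    omega, alpha, beta, gamma = params
    n = returns.size
    sigma2 = np.empty(n)
    sigma2[0] = returns.var()
    for t in range(1, n):
        sigma2[t] = max(omega + alpha * returns[t - 1] ** 2
                        + beta * sigma2[t - 1] + gamma * exog[t], 1e-12)
    return 0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + returns ** 2 / sigma2)

def fit_garch_x(returns, exog):
    """returns: demeaned daily returns; exog: e.g. standardized log volume or log transaction count."""
    x0 = np.array([0.05 * returns.var(), 0.05, 0.90, 0.0])
    bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0), (None, None)]
    res = minimize(garch_x_negloglik, x0, args=(returns, exog),
                   bounds=bounds, method="L-BFGS-B")
    omega, alpha, beta, gamma = res.x
    persistence = alpha + beta   # if this stays near 1, the GARCH effect is not absorbed by exog
    return res.x, persistence
```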

  16. Examples of testing global identifiability of biological and biomedical models with the DAISY software.

    PubMed

    Saccomani, Maria Pia; Audoly, Stefania; Bellu, Giuseppina; D'Angiò, Leontina

    2010-04-01

    DAISY (Differential Algebra for Identifiability of SYstems) is a recently developed computer algebra software tool which can be used to automatically check global identifiability of (linear and) nonlinear dynamic models described by differential equations involving polynomial or rational functions. Global identifiability is a fundamental prerequisite for model identification which is important not only for biological or medical systems but also for many physical and engineering systems derived from first principles. Lack of identifiability implies that, even if the parameter estimation techniques do not fail, any numerical estimates obtained will be meaningless. The software does not require understanding of the underlying mathematical principles and can be used by researchers in applied fields with a minimum of mathematical background. We illustrate the DAISY software by checking the a priori global identifiability of two benchmark nonlinear models taken from the literature. The analysis of these two examples includes comparison with other methods and demonstrates how identifiability analysis is simplified by this tool. We then illustrate the identifiability analysis of two other examples, including discussion of some specific aspects related to the role of observability and knowledge of initial conditions in testing identifiability and to the computational complexity of the software. The main focus of this paper is not on the description of the mathematical background of the algorithm, which has been presented elsewhere, but on illustrating its use and on some of its more interesting features. DAISY is available on the web site http://www.dei.unipd.it/~pia/. 2010 Elsevier Ltd. All rights reserved.

  17. Subarachnoid hemorrhage admissions retrospectively identified using a prediction model

    PubMed Central

    McIntyre, Lauralyn; Fergusson, Dean; Turgeon, Alexis; dos Santos, Marlise P.; Lum, Cheemun; Chassé, Michaël; Sinclair, John; Forster, Alan; van Walraven, Carl

    2016-01-01

    Objective: To create an accurate prediction model using variables collected in widely available health administrative data records to identify hospitalizations for primary subarachnoid hemorrhage (SAH). Methods: A previously established complete cohort of consecutive primary SAH patients was combined with a random sample of control hospitalizations. Chi-square recursive partitioning was used to derive and internally validate a model to predict the probability that a patient had primary SAH (due to aneurysm or arteriovenous malformation) using health administrative data. Results: A total of 10,322 hospitalizations with 631 having primary SAH (6.1%) were included in the study (5,122 derivation, 5,200 validation). In the validation patients, our recursive partitioning algorithm had a sensitivity of 96.5% (95% confidence interval [CI] 93.9–98.0), a specificity of 99.8% (95% CI 99.6–99.9), and a positive likelihood ratio of 483 (95% CI 254–879). In this population, patients meeting criteria for the algorithm had a probability of 45% of truly having primary SAH. Conclusions: Routinely collected health administrative data can be used to accurately identify hospitalized patients with a high probability of having a primary SAH. Upon further validation, this algorithm may provide an easy and accurate method for creating cohorts of primary SAH from either ruptured aneurysm or arteriovenous malformation. PMID:27629096
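
    As an illustration of the derive-and-validate workflow, the sketch below fits an off-the-shelf decision tree to hypothetical administrative-data features and computes the validation statistics quoted above (sensitivity, specificity, positive likelihood ratio). Note the substitution: scikit-learn's CART trees split on Gini or entropy rather than the chi-square criterion used in the study, so this is only a stand-in.

```python
# Illustrative stand-in for the chi-square recursive partitioning described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def validate_sah_algorithm(X_train, y_train, X_valid, y_valid):
    """y arrays are 1 for primary SAH hospitalizations, 0 for controls."""
    tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50).fit(X_train, y_train)
    pred = tree.predict(X_valid)

    tp = np.sum((pred == 1) & (y_valid == 1))
    fn = np.sum((pred == 0) & (y_valid == 1))
    tn = np.sum((pred == 0) & (y_valid == 0))
    fp = np.sum((pred == 1) & (y_valid == 0))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)   # e.g. 0.965 / 0.002 is roughly 483
    return sensitivity, specificity, lr_positive
```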

  18. Bayesian inference to identify parameters in viscoelasticity

    NASA Astrophysics Data System (ADS)

    Rappel, Hussein; Beex, Lars A. A.; Bordas, Stéphane P. A.

    2017-08-01

    This contribution discusses Bayesian inference (BI) as an approach to identify parameters in viscoelasticity. The aims are: (i) to show that the prior has a substantial influence for viscoelasticity, (ii) to show that this influence decreases for an increasing number of measurements and (iii) to show how different types of experiments influence the identified parameters and their uncertainties. The standard linear solid model is the material description of interest and a relaxation test, a constant strain-rate test and a creep test are the tensile experiments focused on. The experimental data are artificially created, allowing us to make a one-to-one comparison between the input parameters and the identified parameter values. Besides dealing with the aforementioned issues, we believe that this contribution forms a comprehensible start for those interested in applying BI in viscoelasticity.
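
    For readers who want to see the shape of such an analysis, the snippet below is a minimal random-walk Metropolis sketch for a standard-linear-solid relaxation test on synthetic data. The model form, priors, noise level, proposal width, and parameter values are illustrative assumptions and do not reproduce the paper's setup.

```python
# Random-walk Metropolis sketch for a standard linear solid relaxation test.
import numpy as np

rng = np.random.default_rng(0)

def relaxation_modulus(t, E_inf, E_1, tau):
    """Stress response of a standard linear solid to a unit step strain."""
    return E_inf + E_1 * np.exp(-t / tau)

# Synthetic "experimental" data with known ground truth, as in the paper.
t = np.linspace(0.0, 10.0, 50)
true_theta = np.array([1.0, 2.0, 1.5])            # E_inf, E_1, tau (illustrative)
sigma_noise = 0.05
data = relaxation_modulus(t, *true_theta) + rng.normal(0.0, sigma_noise, t.size)

def log_posterior(theta):
    if np.any(theta <= 0):
        return -np.inf                             # positivity constraint
    resid = data - relaxation_modulus(t, *theta)
    log_like = -0.5 * np.sum(resid ** 2) / sigma_noise ** 2
    log_prior = -0.5 * np.sum((theta - 1.0) ** 2 / 10.0 ** 2)   # broad Gaussian prior
    return log_like + log_prior

theta = np.array([0.5, 1.0, 1.0])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.05, 3)
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = proposal, logp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                 # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))   # posterior means and spreads
```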

  19. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate poisson regression

    NASA Astrophysics Data System (ADS)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with 18,994 people (8.7% of the world total). This places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of both multibacillary and pausibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The observation units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.

  20. Number of Siblings and Intellectual Development: The Resource Dilution Explanation.

    ERIC Educational Resources Information Center

    Downey, Douglas B.

    2001-01-01

    Resource dilution model suggests that as the number of children increases, parental resources for each child decline. Assesses whether resource dilution could explain the effect of siblings on intellectual development tests. Identifies flaws in recent critiques of this position, discussing it as an explanation for why children with few siblings…

  1. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    NASA Astrophysics Data System (ADS)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the prediction performance of numerical simulations of heat transfer for supercritical pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for heat transfer deterioration conditions. Based on the analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number. The model was evaluated using experimental heat transfer data for CO2, water, and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model are quite different from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.

  2. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
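
    The GLS framing can be stated compactly: for an overdetermined linear system A v = b with measurement covariance Sigma, the flux estimate and its covariance have closed forms, and each flux can then be screened with a t-test. The sketch below is a generic illustration of that framing only; the matrix construction, degrees of freedom, and the simulation-based comparison used in the paper are not reproduced.

```python
# Generic generalized-least-squares flux estimate with per-flux t-tests.
import numpy as np
from scipy import stats

def gls_flux_estimate(A, b, Sigma):
    """A: (m x n) overdetermined system matrix (m > n), b: (m,) measurements,
    Sigma: (m x m) measurement covariance. Returns fluxes, standard errors, p-values."""
    Sigma_inv = np.linalg.inv(Sigma)
    cov_v = np.linalg.inv(A.T @ Sigma_inv @ A)          # covariance of the flux estimate
    v_hat = cov_v @ A.T @ Sigma_inv @ b
    se = np.sqrt(np.diag(cov_v))
    dof = A.shape[0] - A.shape[1]
    t_stat = v_hat / se
    p_values = 2.0 * stats.t.sf(np.abs(t_stat), dof)    # two-sided t-test per flux
    return v_hat, se, p_values
```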

  3. Interpretation of clinical relevance of X-chromosome copy number variations identified in a large cohort of individuals with cognitive disorders and/or congenital anomalies.

    PubMed

    Willemsen, Marjolein H; de Leeuw, Nicole; de Brouwer, Arjan P M; Pfundt, Rolph; Hehir-Kwa, Jayne Y; Yntema, Helger G; Nillesen, Willy M; de Vries, Bert B A; van Bokhoven, Hans; Kleefstra, Tjitske

    2012-11-01

    Genome-wide array studies are now routinely being used in the evaluation of patients with cognitive disorders (CD) and/or congenital anomalies (CA). Therefore, inevitably each clinician is confronted with the challenging task of the interpretation of copy number variations detected by genome-wide array platforms in a diagnostic setting. Clinical interpretation of autosomal copy number variations is already challenging, but assessment of the clinical relevance of copy number variations of the X-chromosome is even more complex. This study provides an overview of the X-Chromosome copy number variations that we have identified by genome-wide array analysis in a large cohort of 4407 male and female patients. We have made an interpretation of the clinical relevance of each of these copy number variations based on well-defined criteria and previous reports in literature and databases. The prevalence of X-chromosome copy number variations in this cohort was 57/4407 (∼1.3%), of which 15 (0.3%) were interpreted as (likely) pathogenic. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  4. Identifying Breast Cancer Oncogenes

    DTIC Science & Technology

    2010-10-01

    Award Number: W81XWH-08-1-0767. Title: Identifying Breast Cancer Oncogenes. Principal Investigator: Yashaswi Shrestha. Abstract: Breast cancer is attributed to genetic alterations, the majority of which are yet to be characterized. Oncogenic

  5. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    PubMed

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
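
    The overestimation effect can be reproduced with a small Monte Carlo experiment in which a contaminated unit undergoes a large inactivation followed by large growth. All parameter values below (initial concentration, log reductions, dose-response slope) are hypothetical and chosen only to contrast the concentration-based and number-based calculations.

```python
# Monte Carlo contrast of "concentration" vs "integer number" modelling through
# a drastic inactivation-then-growth step; parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_units, mass_g = 100_000, 25.0
c0 = 0.2                                 # initial concentration, CFU/g
log_inact, log_growth = 6.0, 5.0         # 6-log inactivation followed by 5-log growth
r = 1e-3                                 # exponential dose-response parameter

# Concentration-based: arithmetic on the mean concentration of the unit.
c_final = c0 * 10 ** (-log_inact + log_growth)
risk_concentration = 1.0 - np.exp(-r * c_final * mass_g)

# Number-based: discrete cells; inactivation is binomial thinning of the counts.
n0 = rng.poisson(c0 * mass_g, n_units)
survivors = rng.binomial(n0, 10.0 ** (-log_inact))
dose = survivors * 10.0 ** log_growth
risk_number = np.mean(1.0 - np.exp(-r * dose))

# The concentration route typically yields a much larger mean risk here, because
# it lets a fraction of a cell "survive" and regrow in every unit.
print(risk_concentration, risk_number)
```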

  6. Genetic analysis of multi-environmental spring wheat trials identifies genomic regions for locus-specific trade-offs for grain weight and grain number.

    PubMed

    Sukumaran, Sivakumar; Lopes, Marta; Dreisigacker, Susanne; Reynolds, Matthew

    2018-04-01

    GWAS on multi-environment data identified genomic regions associated with trade-offs for grain weight and grain number. Grain yield (GY) can be dissected into its components thousand grain weight (TGW) and grain number (GN), but little has been achieved in assessing the trade-off between them in spring wheat. In the present study, the Wheat Association Mapping Initiative (WAMI) panel of 287 elite spring bread wheat lines was phenotyped for GY, GN, and TGW in ten environments across different wheat growing regions in Mexico, South Asia, and North Africa. The panel was genotyped with the 90K Illumina Infinium SNP array, yielding 26,814 SNPs for the genome-wide association study (GWAS). Statistical analysis of the multi-environment data for GY, GN, and TGW gave repeatability estimates of 0.76, 0.62, and 0.95, respectively. GWAS on BLUPs from the combined environment analysis identified 38 loci associated with the traits. Among them, four loci (6A at 85 cM, 5A at 98 cM, 3B at 99 cM, and 2B at 96 cM) were associated with multiple traits. The study identified two loci that showed a positive association between GY and TGW, with allelic substitution effects of 4% (GY) and 1.7% (TGW) for the 6A locus and 0.2% (GY) and 7.2% (TGW) for the 2B locus. The locus on chromosome 6A (79-85 cM) harbors the gene TaGW2-6A. We also found that a combination of markers associated with GY, TGW, and GN together explained more of the variation in GY (32%) than the markers associated with GY alone (27%). The marker-trait associations from the present study can be used for marker-assisted selection (MAS) and to discover the underlying genes for these traits in spring wheat.

  7. Challenges in identifying sites climatically matched to the native ranges of animal invaders.

    PubMed

    Rodda, Gordon H; Jarnevich, Catherine S; Reed, Robert N

    2011-02-09

    Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  8. Challenges in Identifying Sites Climatically Matched to the Native Ranges of Animal Invaders

    PubMed Central

    Rodda, Gordon H.; Jarnevich, Catherine S.; Reed, Robert N.

    2011-01-01

    Background Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  9. Challenges in identifying sites climatically matched to the native ranges of animal invaders

    USGS Publications Warehouse

    Rodda, G.H.; Jarnevich, C.S.; Reed, R.N.

    2011-01-01

    Background: Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings: Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance: When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  10. Using Genetic Buffering Relationships Identified in Fission Yeast to Elucidate the Molecular Pathology of Tuberous Sclerosis

    DTIC Science & Technology

    2015-07-01

    Award Number: W81XWH-14-1-0169. Title: Using Genetic Buffering Relationships Identified in Fission Yeast to Elucidate the Molecular Pathology of Tuberous Sclerosis. Dates covered: 1 July 2014 - 30 June 2015. Abstract: Using the genetically tractable fission yeast as a model, we sought to exploit recent advances in gene interaction

  11. Developing a Learning Progression for Number Sense Based on the Rule Space Model in China

    ERIC Educational Resources Information Center

    Chen, Fu; Yan, Yue; Xin, Tao

    2017-01-01

    The current study focuses on developing the learning progression of number sense for primary school students, and it applies a cognitive diagnostic model, the rule space model, to data analysis. The rule space model analysis firstly extracted nine cognitive attributes and their hierarchy model from the analysis of previous research and the…

  12. Determination of critical nucleation number for a single nucleation amyloid-β aggregation model.

    PubMed

    Ghosh, Preetam; Vaidya, Ashwin; Kumar, Amit; Rangachari, Vijayaraghavan

    2016-03-01

    Aggregates of amyloid-β (Aβ) peptide are known to be the key pathological agents in Alzheimer disease (AD). Aβ aggregates to form large, insoluble fibrils that deposit as senile plaques in AD brains. The process of aggregation is nucleation-dependent, in which the formation of a nucleus is the rate-limiting step that controls the physiochemical fate of the aggregates formed. Therefore, understanding the properties of the nucleus and of pre-nucleation events will be significant in reducing the existing knowledge gap in AD pathogenesis. In this report, we have determined the plausible range of the critical nucleation number (n*), the number of monomers associated within the nucleus, for a homogeneous aggregation model with a single, unique nucleation event by two independent methods: a reduced-order stability analysis and an ordinary differential equation based numerical analysis, supported by experimental biophysics. The results establish that the most likely range of n* is between 7 and 14 and that, within this range, n* = 12 most closely supports the experimental data. These numbers are in agreement with those previously reported, and, importantly, the report establishes a new modeling framework using two independent approaches towards a convergent solution in modeling complex aggregation reactions. Our model also suggests that the formation of large protofibrils is dependent on the nature of n*, further supporting the idea that pre-nucleation events are significant in controlling the fate of the larger aggregates formed. This report has re-opened an old problem with a new perspective and holds promise towards revealing the molecular events in amyloid pathologies in the future. Copyright © 2015 Elsevier Inc. All rights reserved.
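
    A generic single-nucleation (Oosawa-type) moment model gives a feel for how n* shapes the kinetics: nuclei of size n* form at a rate proportional to the free-monomer concentration raised to the power n*, and elongation then converts monomer into fibril mass. The rate constants and dimensionless scales below are purely illustrative; this is not the authors' exact formulation or parameterization.

```python
# Oosawa-type nucleation-elongation moment equations, illustrating the role of n*.
import numpy as np
from scipy.integrate import solve_ivp

def oosawa(t, y, k_n, k_e, m_tot, n_star):
    P, M = y                            # fibril number and fibril mass (dimensionless)
    m = max(m_tot - M, 0.0)             # free monomer remaining
    dP = k_n * m ** n_star              # formation of an n*-sized nucleus
    dM = 2.0 * k_e * m * P              # elongation at both fibril ends
    return [dP, dM]

m_tot, k_n, k_e = 0.5, 1e-2, 50.0       # illustrative dimensionless values
t_eval = np.linspace(0.0, 500.0, 2001)
for n_star in (7, 12, 14):
    sol = solve_ivp(oosawa, (0.0, 500.0), [0.0, 0.0], t_eval=t_eval,
                    args=(k_n, k_e, m_tot, n_star))
    t_half = t_eval[np.argmin(np.abs(sol.y[1] - 0.5 * m_tot))]
    print(n_star, round(float(t_half), 1))   # larger n* delays half-completion
```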

  13. Elliptic flow computation by low Reynolds number two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Shih, T.-H.

    1991-01-01

    A detailed comparison of ten low-Reynolds-number k-epsilon models is carried out. The flow solver, based on an implicit approximate factorization method, is designed for incompressible, steady two-dimensional flows. The conservation of mass is enforced by the artificial compressibility approach and the computational domain is discretized using centered finite differences. The turbulence model predictions of the flow past a hill are compared with experiments at Re = 10^6. The effects of the grid spacing together with the numerical efficiency of the various formulations are investigated. The results show that the models provide a satisfactory prediction of the flow field in the presence of a favorable pressure gradient, while the accuracy rapidly deteriorates when a strong adverse pressure gradient is encountered. A newly proposed model form that does not explicitly depend on the wall distance seems promising for application to complex geometries.

  14. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    ERIC Educational Resources Information Center

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  15. Models for Rational Number Bases

    ERIC Educational Resources Information Center

    Pedersen, Jean J.; Armbruster, Frank O.

    1975-01-01

    This article extends number bases to negative integers, then to positive rationals and finally to negative rationals. Methods and rules for operations in positive and negative rational bases greater than one or less than negative one are summarized in tables. Sample problems are explained and illustrated. (KM)

  16. Communicating about quantity without a language model: number devices in homesign grammar.

    PubMed

    Coppola, Marie; Spaepen, Elizabet; Goldin-Meadow, Susan

    2013-01-01

    All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Communicating about quantity without a language model: Number devices in homesign grammar

    PubMed Central

    Coppola, Marie; Spaepen, Elizabet; Goldin-Meadow, Susan

    2013-01-01

    All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners’ hearing communication partners displayed some, but not all, of the homesigners’ linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners’ gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners’ linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input. PMID:23872365

  18. Comparison of GWAS models to identify non-additive genetic control of flowering time in sunflower hybrids.

    PubMed

    Bonnafous, Fanny; Fievet, Ghislain; Blanchet, Nicolas; Boniface, Marie-Claude; Carrère, Sébastien; Gouzy, Jérôme; Legrand, Ludovic; Marage, Gwenola; Bret-Mestries, Emmanuelle; Munos, Stéphane; Pouilly, Nicolas; Vincourt, Patrick; Langlade, Nicolas; Mangin, Brigitte

    2018-02-01

    This study compares five GWAS models to show the added value of non-additive modeling of allelic effects for identifying genomic regions controlling flowering time in sunflower hybrids. Genome-wide association studies are a powerful and widely used tool to decipher the genetic control of complex traits. One of the main challenges for hybrid crops, such as maize or sunflower, is to model the hybrid vigor in the linear mixed models, considering the relatedness between individuals. Here, we compared two additive and three non-additive association models for their ability to identify genomic regions associated with flowering time in sunflower hybrids. A panel of 452 sunflower hybrids, corresponding to incomplete crossing between 36 male lines and 36 female lines, was phenotyped in five environments and genotyped for 2,204,423 SNPs. Intra-locus effects were estimated in multi-locus models to detect genomic regions associated with flowering time using the different models. Thirteen quantitative trait loci were identified in total, two with both model categories and one with only non-additive models. A quantitative trait locus on LG09, detected by both the additive and non-additive models, is located near a GAI homolog and is presented in detail. Overall, this study shows the added value of non-additive modeling of allelic effects for identifying genomic regions that control traits of interest and that could participate in the heterosis observed in hybrids.

  19. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    NASA Astrophysics Data System (ADS)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue haemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods for predicting the number of DHF cases that can be used as a reference for prevention. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The results show that the MPM is the better model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
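
    For context, the sketch below shows one common way to produce one-step INAR(1)-Poisson forecasts (moment-based estimates of the thinning parameter and innovation mean) and to score them with MAE and MAPE. The estimation and scoring choices here are illustrative and are not necessarily the ones used in the paper.

```python
# One-step INAR(1)-Poisson forecasting with MAE/MAPE scoring (moment estimators).
import numpy as np

def inar1_forecast(counts):
    """counts: 1-D array of historical case counts; returns forecasts for counts[1:]."""
    x = np.asarray(counts, dtype=float)
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]   # thinning parameter from lag-1 autocorrelation
    lam = (1.0 - alpha) * x.mean()             # innovation mean from the marginal mean
    # Conditional mean of an INAR(1)-Poisson process: E[X_t | X_{t-1}] = alpha*X_{t-1} + lambda
    return alpha * x[:-1] + lam

def mae_mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mae = np.mean(np.abs(actual - forecast))
    mape = 100.0 * np.mean(np.abs(actual - forecast) / np.clip(actual, 1.0, None))
    return mae, mape

# Usage: score against the observed series shifted by one step, e.g.
# mae, mape = mae_mape(cases[1:], inar1_forecast(cases))
```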

  20. Force test of a 0.88 percent scale 142-inch diameter solid rocket booster (MSFC model number 461) in the NASA/MSFC high Reynolds number wind tunnel (SA13F)

    NASA Technical Reports Server (NTRS)

    Johnson, J. D.; Winkler, G. W.

    1976-01-01

    Results are presented from a force test of a 0.88 percent scale model of the 142 inch solid rocket booster without protuberances, conducted in the MSFC high Reynolds number wind tunnel. The objective of this test was to obtain aerodynamic force data over a large range of Reynolds numbers. The test was conducted over a Mach number range from 0.4 to 3.5. Reynolds numbers based on model diameter (1.25 inches) ranged from 0.75 million to 13.5 million. The angle of attack range was from 35 to 145 degrees.

  1. Development of computational fluid dynamics--habitat suitability (CFD-HSI) models to identify potential passage--Challenge zones for migratory fishes in the Penobscot River

    USGS Publications Warehouse

    Haro, Alexander J.; Dudley, Robert W.; Chelminski, Michael

    2012-01-01

    A two-dimensional computational fluid dynamics-habitat suitability (CFD–HSI) model was developed to identify potential zones of shallow depth and high water velocity that may present passage challenges for five anadromous fish species in the Penobscot River, Maine, upstream from two existing dams and as a result of the proposed future removal of the dams. Potential depth-challenge zones were predicted for larger species at the lowest flow modeled in the dam-removal scenario. Increasing flows under both scenarios increased the number and size of potential velocity-challenge zones, especially for smaller species. This application of the two-dimensional CFD–HSI model demonstrated its capabilities to estimate the potential effects of flow and hydraulic alteration on the passage of migratory fish.

  2. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Numbers

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.; Gaffney, R. L., Jr.

    2007-01-01

    A new turbulence model suited for calculating the turbulent Prandtl number as part of the solution is presented. The model is based on a set of two equations: one governing the variance of the enthalpy and the other governing its dissipation rate. These equations were derived from the exact energy equation and thus take into consideration compressibility and dissipation terms. The model is used to study two cases involving shock wave/boundary layer interaction at Mach 9.22 and Mach 5.0. In general, heat transfer prediction showed great improvement over traditional turbulence models where the turbulent Prandtl number is assumed constant. It is concluded that using a model that calculates the turbulent Prandtl number as part of the solution is the key to bridging the gap between theory and experiment for flows dominated by shock wave/boundary layer interactions.

  3. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Numbers

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L., Jr.; Xiao, X.; Edwards, J. R.; Hassan, H. A.

    2005-01-01

    A new turbulence model suited for calculating the turbulent Prandtl number as part of the solution is presented. The model is based on a set of two equations: one governing the variance of the enthalpy and the other governing its dissipation rate. These equations were derived from the exact energy equation and thus take into consideration compressibility and dissipation terms. The model is used to study two cases involving shock wave/boundary layer interaction at Mach 9.22 and Mach 5.0. In general, heat transfer prediction showed great improvement over traditional turbulence models where the turbulent Prandtl number is assumed constant. It is concluded that using a model that calculates the turbulent Prandtl number as part of the solution is the key to bridging the gap between theory and experiment for flows dominated by shock wave/boundary layer interactions.

  4. On Finding and Using Identifiable Parameter Combinations in Nonlinear Dynamic Systems Biology Models and COMBOS: A Novel Web Implementation

    PubMed Central

    DiStefano, Joseph

    2014-01-01

    Parameter identifiability problems can plague biomodelers when they reach the quantification stage of development, even for relatively simple models. Structural identifiability (SI) is the primary question, usually understood as knowing which of P unknown biomodel parameters p1, ..., pi, ..., pP are quantifiable in principle from particular input-output (I-O) biodata, and which are not. It is not widely appreciated that the same database also can provide quantitative information about the structurally unidentifiable (not quantifiable) subset, in the form of explicit algebraic relationships among the unidentifiable pi. Importantly, this is a first step toward finding what else is needed to quantify particular unidentifiable parameters of interest from new I-O experiments. We further develop, implement and exemplify novel algorithms that address and solve the SI problem for a practical class of ordinary differential equation (ODE) systems biology models, as a user-friendly and universally accessible web application (app), COMBOS. Users provide the structural ODE and output measurement models in one of two standard forms to a remote server via their web browser. COMBOS provides a list of uniquely and non-uniquely SI model parameters and, importantly, the combinations of parameters that are not individually SI. If non-uniquely SI, it also provides the maximum number of different solutions, with important practical implications. The behind-the-scenes symbolic differential algebra algorithms are based on computing Gröbner bases of model attributes established after some algebraic transformations, using the computer-algebra system Maxima. COMBOS was developed for facile instructional and research use as well as modeling. We use it in the classroom to illustrate SI analysis, and we have simplified complex models of the tumor suppressor p53 and of hormone regulation based on explicit computation of parameter combinations. It is illustrated and validated here for models of moderate complexity

  5. Using neutral models to identify constraints on low-severity fire regimes.

    Treesearch

    Donald McKenzie; Amy E. Hessl; Lara-Karena B. Kellogg

    2006-01-01

    Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire is modeled as a stochastic process, for which each fire history is only one realization, a simulation approach is necessary to understand baseline variability, thereby identifying constraints, or forcing functions, that affect fire regimes...

  6. Identifying Ghanaian Pre-Service Teachers' Readiness for Computer Use: A Technology Acceptance Model Approach

    ERIC Educational Resources Information Center

    Gyamfi, Stephen Adu

    2016-01-01

    This study extends the technology acceptance model to identify factors that influence technology acceptance among pre-service teachers in Ghana. Data from 380 usable questionnaires were tested against the research model. Utilising the extended technology acceptance model (TAM) as a research framework, the study found that: pre-service teachers'…

  7. Regression Models for Identifying Noise Sources in Magnetic Resonance Images

    PubMed Central

    Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.

    2009-01-01

    Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
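
    To make the Rician regression idea concrete, the sketch below fits a linear signal model with Rice-distributed magnitude observations by maximum likelihood. The design matrix, optimizer choice, and starting values are illustrative assumptions; the estimation algorithms and diagnostics developed in the paper are considerably more elaborate.

```python
# Maximum-likelihood fit of a simple Rician regression: the noise-free signal is
# X @ beta and the observed MR magnitude follows a Rice distribution around it.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rice

def rician_negloglik(params, X, y):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keep the noise scale positive
    nu = np.clip(X @ beta, 1e-8, None)         # noise-free signal intensity
    return -np.sum(rice.logpdf(y, nu / sigma, scale=sigma))

def fit_rician_regression(X, y):
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS starting values
    x0 = np.append(beta0, np.log(y.std()))
    res = minimize(rician_negloglik, x0, args=(X, y), method="Nelder-Mead")
    return res.x[:-1], np.exp(res.x[-1])             # beta estimates, sigma estimate
```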

  8. A Novel Wake Oscillator Model for Vortex-Induced Vibrations Prediction of A Cylinder Considering the Influence of Reynolds Number

    NASA Astrophysics Data System (ADS)

    Gao, Xi-feng; Xie, Wu-de; Xu, Wan-hai; Bai, Yu-chuan; Zhu, Hai-tao

    2018-04-01

    It is well known that the Reynolds number has a significant effect on the vortex-induced vibrations (VIV) of cylinders. In this paper, a novel in-line (IL) and cross-flow (CF) coupled VIV prediction model for circular cylinders has been proposed, in which the influence of the Reynolds number was comprehensively considered. The Strouhal number linked with the vortex shedding frequency was calculated as a function of the Reynolds number. The coefficient of the mean drag force was fitted as a new piecewise function of the Reynolds number, and its amplification resulting from the CF VIV was also taken into account. The oscillating drag and lift forces were modelled with classical van der Pol wake oscillators, and their empirical parameters were determined based on the lock-in boundaries and the peak-amplitude formulas. A new peak-amplitude formula for the IL VIV was developed under the resonance condition with respect to the mass-damping ratio and the Reynolds number. When compared with results from experiments and other prediction models, the present model gave good estimates of the vibration amplitudes and frequencies of the VIV for both elastically mounted rigid cylinders and long flexible cylinders. The present model, which considers the influence of the Reynolds number, generally provides better results than one that neglects it.
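
    A minimal cross-flow-only wake oscillator sketch in the spirit of the classical van der Pol models the abstract builds on; it is not the authors' coupled IL/CF, Reynolds-number-dependent formulation, and all parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    zeta, omega_s = 0.01, 1.0     # structural damping ratio and natural frequency
    omega_f = 1.0                 # wake (vortex-shedding) frequency, near lock-in
    eps, A, M = 0.3, 12.0, 0.02   # van der Pol parameter and coupling constants (assumed)

    def rhs(t, state):
        y, ydot, q, qdot = state
        yddot = -2*zeta*omega_s*ydot - omega_s**2*y + M*q              # cylinder motion
        qddot = -eps*omega_f*(q**2 - 1)*qdot - omega_f**2*q + A*yddot  # wake oscillator
        return [ydot, yddot, qdot, qddot]

    sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
    print("approximate steady-state CF amplitude:", np.abs(sol.y[0][-2000:]).max())
    ```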

  9. Consequential Validity Impact of Choosing Different Aptitude-Achievement Discrepancy Models in Identifying Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Glasnapp, Douglas R.; Poggio, John P.

    This study used computer simulation to provide information on the percentage of students with learning disabilities expected to be identified under different aptitude-achievement discrepancy eligibility models and criteria and to demonstrate the consequential effects in terms of the extent to which the different models identify students of…

  10. Computational modeling identifies key gene regulatory interactions underlying phenobarbital-mediated tumor promotion

    PubMed Central

    Luisier, Raphaëlle; Unterberger, Elif B.; Goodman, Jay I.; Schwarz, Michael; Moggs, Jonathan; Terranova, Rémi; van Nimwegen, Erik

    2014-01-01

    Gene regulatory interactions underlying the early stages of non-genotoxic carcinogenesis are poorly understood. Here, we have identified key candidate regulators of phenobarbital (PB)-mediated mouse liver tumorigenesis, a well-characterized model of non-genotoxic carcinogenesis, by applying a new computational modeling approach to a comprehensive collection of in vivo gene expression studies. We have combined our previously developed motif activity response analysis (MARA), which models gene expression patterns in terms of computationally predicted transcription factor binding sites, with singular value decomposition (SVD) of the inferred motif activities, to disentangle the roles that different transcriptional regulators play in specific biological pathways of tumor promotion. Furthermore, transgenic mouse models enabled us to identify which of these regulatory activities was downstream of constitutive androstane receptor and β-catenin signaling, both crucial components of PB-mediated liver tumorigenesis. We propose novel roles for E2F and ZFP161 in PB-mediated hepatocyte proliferation and suggest that PB-mediated suppression of ESR1 activity contributes to the development of a tumor-prone environment. Our study shows that combining MARA with SVD allows for automated identification of independent transcription regulatory programs within a complex in vivo tissue environment and provides novel mechanistic insights into PB-mediated hepatocarcinogenesis. PMID:24464994

  11. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. The high Weissenberg number problem (HWNP) usually produces a lack of convergence of the numerical algorithms. Although whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model remains somewhat of a mystery, it has been recognized that selecting an appropriate constitutive equation is a crucial step, while implementing a suitable numerical technique remains important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.

  12. Major revision of sunspot number: implication for the ionosphere models

    NASA Astrophysics Data System (ADS)

    Gulyaeva, Tamara

    2016-07-01

    Recently, on 1 July 2015, a major revision of the historical sunspot number series was carried out, as discussed in [Clette et al., Revisiting the Sunspot Number: A 400-Year Perspective on the Solar Cycle, Space Science Reviews, 186, Issue 1-4, pp. 35-103, 2014]. The revised SSN2.0 dataset is provided along with the former SSN1.0 data at http://sidc.oma.be/silso/. The SSN2.0 values exceed the former conventional SSN1.0 data, so that the new SSNs are in many cases greater than the solar radio flux F10.7 values, which poses a problem for SSN2.0 implementation as a driver of the International Reference Ionosphere (IRI), its extension to the plasmasphere (IRI-Plas), the NeQuick model, and the Russian Standard Ionosphere (SMI). In particular, the monthly predictions of the F2 layer peak are based on input of the ITU-R (former CCIR) and URSI maps. The CCIR and URSI map coefficients are available for each month of the year, and for two levels of solar activity: low (SSN = 0) and high (SSN = 100). SSN is the monthly smoothed sunspot number from the SSN1.0 data set used as an index of the level of solar activity. For every SSN different from 0 or 100, the critical frequency foF2 and the M3000F2 radio propagation factor used to produce the peak height hmF2 may be evaluated by interpolation. The ionospheric proxies of solar activity, the IG12 index or the Global Electron Content GEC12 index, which drive the ionospheric models, are also calibrated with the former SSN1.0 data. The paper presents a solar proxy intended to calibrate the SSN2.0 data set to fit the F10.7 solar radio flux and/or the SSN1.0 data series. This study is partly supported by TUBITAK EEEAG 115E915.

  13. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
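
    A small sketch of the screening rule this result suggests: flag days whose photosynthetically active radiation falls below 30% of clear-sky PAR (the threshold reported above) as days on which GPP parameter estimates are likely to be highly uncertain. Column names and values are hypothetical.

    ```python
    import pandas as pd

    days = pd.DataFrame({
        "date": pd.date_range("2014-06-01", periods=5, freq="D"),
        "mean_par": [620.0, 180.0, 540.0, 95.0, 700.0],            # observed PAR (synthetic)
        "clear_sky_par": [1500.0, 1400.0, 1450.0, 1380.0, 1500.0], # clear-sky PAR (synthetic)
    })
    days["par_fraction"] = days["mean_par"] / days["clear_sky_par"]
    days["high_gpp_uncertainty"] = days["par_fraction"] < 0.30     # threshold from the study
    print(days[["date", "par_fraction", "high_gpp_uncertainty"]])
    ```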

  14. 78 FR 26244 - Updating of Employer Identification Numbers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-06

    ... (including updated application information regarding the name and taxpayer identifying number of the... require these persons to update application information regarding the name and taxpayer identifying number..., Application for Employer Identification Number, requires entities to disclose the name of the EIN applicant's...

  15. Postural transitions during activities of daily living could identify frailty status – Application of wearable technology to identify frailty during unsupervised condition

    PubMed Central

    Parvaneh, Saman; Mohler, Jane; Toosizadeh, Nima; Grewal, Gurtej Singh; Najafi, Bijan

    2017-01-01

    Background Impairment of physical function is a major indicator of frailty. Functional performance tests have been shown to be useful for identification of frailty in older adults. However, these tests are often not translatable into unsupervised and remote monitoring of frailty status in home and/or community settings. Objective In this study, we explored daily postural transitions quantified using a chest-worn wearable technology to identify frailty in community-dwelling older adults. Methods Spontaneous daily physical activity was monitored over 24 hours in 120 community-dwelling older adults (age: 78±8 years) using an unobtrusive wearable sensor (PAMSys™, Biosensics LLC). Participants were classified as non-frail and pre-frail/frail using Fried’s criteria. Validated software was used to identify body postures and postural transitions between independent postural activities such as sit-to-stand, stand-to-sit, stand-to-walk, and walk-to-stand. Transitions from walking to sitting were further classified as quick-sitting and cautious-sitting based on the presence/absence of a standing-posture pause between sitting and walking. A general linear model univariate test was used for between-group comparisons. Pearson’s correlation was used to determine the association between sensor-derived parameters and age. A logistic regression model was used to identify independent predictors of frailty. Results According to Fried’s criteria, 63% of participants were pre-frail/frail. The total number of postural transitions, stand-to-walk, and walk-to-stand were, respectively, 25.2%, 30.2%, and 30.6% lower in the pre-frail/frail group when compared to non-frails (p<0.05, Cohen’s d=0.73–0.79). Furthermore, the ratio of cautious-sitting was significantly higher by 6.2% in pre-frail/frail compared to non-frail (p=0.025, Cohen’s d=0.22). Total number of postural transitions and ratio of cautious-sitting also showed significant negative and positive correlations with age, respectively (r=-0
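
    A hedged illustration of the final modeling step described above: a logistic regression predicting pre-frail/frail status from sensor-derived postural-transition features. The data here are synthetic and the feature names are hypothetical; this is not the study's dataset or fitted model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 120
    transitions = rng.normal(60, 15, n)        # daily postural transitions (synthetic)
    cautious_ratio = rng.uniform(0, 1, n)      # ratio of cautious sitting (synthetic)

    # Synthetic labels: fewer transitions and more cautious sitting raise frailty odds
    logit = -0.06*(transitions - 60) + 2.0*(cautious_ratio - 0.5)
    y = rng.binomial(1, 1.0/(1.0 + np.exp(-logit)))

    X = np.column_stack([transitions, cautious_ratio])
    model = LogisticRegression().fit(X, y)
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    ```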

  16. Universal characteristics of fractal fluctuations in prime number distribution

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2014-11-01

    The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations concomitant with inverse power law form for power spectrum generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals to result from the growth of large eddies as the integrated mean of enclosed small scale eddies, thereby generating a hierarchy of eddy circulations or an inter-connected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law which is a function of the golden mean. The predicted inverse power law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos since variance spectrum represents probability density distribution, a characteristic of quantum systems such as electron or photon. (3) Fractal fluctuations of frequency distribution of prime numbers signify spontaneous organization of underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling pattern. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequency of occurrence of prime numbers at unit number interval for successive 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.
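
    A short sketch of the statistic analysed above: the frequency of primes in successive blocks of 1000 integers and the power spectrum of its fluctuations. The sieve limit is kept small for speed (the study used the first 10 million integers); this is an illustration, not the paper's analysis code.

    ```python
    import numpy as np

    def primes_up_to(n):
        sieve = np.ones(n + 1, dtype=bool)
        sieve[:2] = False
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                sieve[p*p::p] = False
        return np.flatnonzero(sieve)

    limit = 1_000_000
    counts = np.bincount(primes_up_to(limit) // 1000, minlength=limit // 1000)
    fluct = counts - counts.mean()
    power = np.abs(np.fft.rfft(fluct))**2      # power spectrum of the fluctuations
    print("mean primes per 1000-number block:", counts.mean())
    print("first few power-spectrum values:", power[1:6])
    ```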

  17. A data-driven model for estimating industry average numbers of hospital security staff.

    PubMed

    Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M

    2015-01-01

    In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.

  18. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  19. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-29

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either “heavy” or “light” mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  20. Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

    PubMed Central

    Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.

    2010-01-01

    We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer’s disease (AD) classification to make accurate predictions and, at the same time, identify critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models that are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with support vector machines (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451
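
    A minimal ARD sketch (synthetic data, not the imaging features used in the study): automatic relevance determination shrinks the weights of irrelevant candidate markers toward zero, leaving a small set of relevant ones, which is the behaviour exploited above.

    ```python
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 20))                    # 20 candidate "imaging markers"
    w_true = np.zeros(20)
    w_true[[2, 7, 11]] = [1.5, -2.0, 1.0]             # only three are truly relevant
    y = X @ w_true + 0.3*rng.normal(size=150)

    ard = ARDRegression().fit(X, y)
    kept = np.flatnonzero(np.abs(ard.coef_) > 0.1)
    print("markers kept by ARD:", kept)               # expected: [2 7 11]
    ```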

  1. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    PubMed

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.

  2. Identifying At-Risk Employees: Modeling Psychosocial Precursors of Potential Insider Threats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.

    In many insider crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they can be assessed. A psychosocial model was developed to assess an employee's behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. To test the model's agreement with human resources and management professionals, we conducted an experiment with positive results. If implemented in an operational setting, the model would be part of a set of management tools for employee assessment to identify employees who pose a greater insider threat.

  3. Indistinguishability and identifiability of kinetic models for the MurC reaction in peptidoglycan biosynthesis.

    PubMed

    Hattersley, J G; Pérez-Velázquez, J; Chappell, M J; Bearup, D; Roper, D; Dowson, C; Bugg, T; Evans, N D

    2011-11-01

    An important question in Systems Biology is the design of experiments that enable discrimination between two (or more) competing chemical pathway models or biological mechanisms. In this paper analysis is performed between two different models describing the kinetic mechanism of a three-substrate three-product reaction, namely the MurC reaction in the cytoplasmic phase of peptidoglycan biosynthesis. One model involves ordered substrate binding and ordered release of the three products; the competing model also assumes ordered substrate binding, but with fast release of the three products. The two versions are shown to be distinguishable; however, if standard quasi-steady-state assumptions are made distinguishability cannot be determined. Once model structure uniqueness is ensured the experimenter must determine if it is possible to successfully recover rate constant values given the experiment observations, a process known as structural identifiability. Structural identifiability analysis is carried out for both models to determine which of the unknown reaction parameters can be determined uniquely, or otherwise, from the ideal system outputs. This structural analysis forms an integrated step towards the modelling of the full pathway of the cytoplasmic phase of peptidoglycan biosynthesis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Using Hierarchical Cluster Models to Systematically Identify Groups of Jobs With Similar Occupational Questionnaire Response Patterns to Assist Rule-Based Expert Exposure Assessment in Population-Based Studies

    PubMed Central

    Friesen, Melissa C.; Shortreed, Susan M.; Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Silverman, Debra T.; Yu, Kai

    2015-01-01

    Objectives: Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Methods: Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m−3 respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters’ homogeneity (defined as >75% with the same estimate
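
    A hedged sketch of the data-reduction step (synthetic yes/no responses; not the study's questionnaire variables): hierarchically cluster jobs by their diesel-related response patterns and cut the tree into a chosen number of groups, which experts could then assess instead of every unique pattern.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(500, 12))   # 500 jobs x 12 yes/no diesel questions

    dist = pdist(responses, metric="jaccard")        # dissimilarity between response patterns
    tree = linkage(dist, method="average")
    clusters = fcluster(tree, t=100, criterion="maxclust")   # cut into 100 job groups
    print("number of clusters:", np.unique(clusters).size)
    ```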

  5. Identifying models of delivery, care domains and quality indicators relevant to palliative day services: a scoping review protocol.

    PubMed

    O'Connor, Seán R; Dempster, Martin; McCorry, Noleen K

    2017-05-16

    With an ageing population and increasing numbers of people with life-limiting illness, there is a growing demand for palliative day services. There is a need to measure and demonstrate the quality of these services, but there is currently little agreement on which aspects of care should be used to do this. The aim of the scoping review will be to map the extent, range and nature of the evidence around models of delivery, care domains and existing quality indicators used to evaluate palliative day services. Electronic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, Cochrane Central Register of Controlled Trials) will be searched for evidence using consensus development methods; randomised or quasi-randomised controlled trials; mixed methods; and prospective, longitudinal or retrospective case-control studies to develop or test quality indicators for evaluating palliative care within non-residential settings, including day hospices and community or primary care settings. At least two researchers will independently conduct all searches, study selection and data abstraction procedures. Meta-analyses and statistical methods of synthesis are not planned as part of the review. Results will be reported using numerical counts, including the number of indicators in each care domain, and by using a qualitative approach to describe important indicator characteristics. A conceptual model will also be developed to summarise the impact of different aspects of quality in a palliative day service context. Methodological quality relating to indicator development will be assessed using the Appraisal of Indicators through Research and Evaluation (AIRE) tool. Overall strength of evidence will be assessed using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system. Final decisions on quality assessment will be made via consensus between review authors. Identifying, developing and implementing evidence-based quality indicators is critical to the evaluation and

  6. GIS model for identifying urban areas vulnerable to noise pollution: case study

    NASA Astrophysics Data System (ADS)

    Bilaşco, Ştefan; Govor, Corina; Roşca, Sanda; Vescan, Iuliu; Filip, Sorin; Fodorean, Ioan

    2017-04-01

    The unprecedented expansion of national car ownership over the last few years has been driven by economic growth and the need for the population and economic agents to reduce travel time in progressively expanding large urban centres. This has led to an increase in the level of road noise and a stronger impact on the quality of the environment. Noise pollution generated by means of transport is one of the most important types of pollution, with negative effects on the population's health in large urban areas. As a consequence, tolerable limits of sound intensity for the comfort of inhabitants have been set worldwide, and the generation of sound maps has been made compulsory in order to identify vulnerable zones and to make recommendations on how to decrease the negative impact on humans. In this context, the present study aims to present a GIS spatial analysis model-based methodology for identifying and mapping zones vulnerable to noise pollution. The developed GIS model is based on the analysis of all the components influencing sound propagation, represented as vector databases (points of sound intensity measurements, buildings, land use, transport infrastructure), raster databases (DEM), and numerical databases (wind direction and speed, sound intensity). In addition, the hourly changes (for representative hours) were analysed to identify the hotspots characterised by major traffic flows specific to rush hours. The validated results of the model are GIS databases and maps that the local public administration can use as a source of information and in decision making.

  7. Identifying arbitrary parameter zonation using multiple level set functions

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan

    2018-07-01

    In this paper, we extended the analytical level set method [1,2] for identifying a piecewise heterogeneous (zonation) binary system to the case of an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from fully synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  8. Model-Observation Comparisons of Electron Number Densities in the Coma of 67P/Churyumov-Gerasimenko during January 2015

    NASA Astrophysics Data System (ADS)

    Vigren, E.; Altwegg, K.; Edberg, N. J. T.; Eriksson, A. I.; Galand, M.; Henri, P.; Johansson, F.; Odelstad, E.; Tzou, C.-Y.; Valliéres, X.

    2016-09-01

    During 2015 January 9-11, at a heliocentric distance of ˜2.58-2.57 au, the ESA Rosetta spacecraft resided at a cometocentric distance of ˜28 km from the nucleus of comet 67P/Churyumov-Gerasimenko, sweeping the terminator at northern latitudes of 43°N-58°N. Measurements by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/Comet Pressure Sensor (ROSINA/COPS) provided neutral number densities. We have computed modeled electron number densities using the neutral number densities as input into a Field Free Chemistry Free model, assuming H2O dominance and ion-electron pair formation by photoionization only. A good agreement (typically within 25%) is found between the modeled electron number densities and those observed from measurements by the Mutual Impedance Probe (RPC/MIP) and the Langmuir Probe (RPC/LAP), both being subsystems of the Rosetta Plasma Consortium. This indicates that ions along the nucleus-spacecraft line were strongly coupled to the neutrals, moving radially outward with about the same speed. Such a statement, we propose, can be further tested by observations of H3O+/H2O+ number density ratios and associated comparisons with model results.
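
    A heavily hedged sketch of the simple balance implied by the description above: with ion-electron pairs produced by photoionization only and ions moving radially outward with the neutrals at a common speed u, flux continuity gives n_e(r) ≈ ν_ion · n_n(r) · r / u. All numerical values below are stand-in assumptions, not the ROSINA/COPS or RPC measurements.

    ```python
    nu_ion = 1.0e-6 / 2.58**2   # H2O photoionization rate (s^-1), ~1e-6 at 1 au scaled to 2.58 au (assumed)
    u = 700.0                   # common radial outflow speed of neutrals and ions (m s^-1, assumed)
    r = 28.0e3                  # cometocentric distance (m)
    n_n = 1.0e13                # neutral number density at r (m^-3, assumed)

    n_e = nu_ion * n_n * r / u  # modeled electron number density (m^-3)
    print(f"modeled electron density: {n_e:.2e} m^-3")
    ```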

  9. Sequence-based predictive modeling to identify cancerlectins

    PubMed Central

    Lai, Hong-Yan; Chen, Xin-Xin; Chen, Wei; Tang, Hua; Lin, Hao

    2017-01-01

    Lectins are a diverse class of glycoproteins or carbohydrate-binding proteins that are widely distributed across species. They can specifically identify and exclusively bind to certain kinds of saccharide groups. Cancerlectins are a group of lectins that are closely related to cancer and play a major role in the initiation, survival, growth, metastasis and spread of tumors. Several computational methods have emerged to discriminate cancerlectins from non-cancerlectins, which promotes the study of pathogenic mechanisms and the clinical treatment of cancer. However, the predictive accuracies of most of these techniques are very limited. In this work, by constructing a benchmark dataset based on the CancerLectinDB database, a new amino acid sequence-based strategy for feature description was developed, and the binomial distribution was then applied to screen the optimal feature set. Ultimately, an SVM-based predictor was built to distinguish cancerlectins from non-cancerlectins, achieving an accuracy of 77.48% with an AUC of 85.52% in jackknife cross-validation. The results revealed that our prediction model performs better than published predictive tools. PMID:28423655
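
    A toy sketch of the pipeline's core idea, under the assumption that amino acid composition (AAC) features feed a standard SVM; the sequences and labels are made up, and this is not the CancerLectinDB benchmark or the paper's optimized feature set.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def aac(seq):
        seq = seq.upper()
        return np.array([seq.count(a) / len(seq) for a in AA])   # 20-dim composition vector

    seqs = ["MKTLLVAAAVLLGHH", "GGSSGGTTPPLLKKR", "MMKHHCCAAVVLLGG", "PPQQRRSSTTGGNNA"]
    labels = [1, 0, 1, 0]   # 1 = cancerlectin, 0 = non-cancerlectin (toy labels)

    X = np.vstack([aac(s) for s in seqs])
    clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
    print("training predictions:", clf.predict(X))
    ```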

  10. Pharmacophore modeling and virtual screening to identify potential RET kinase inhibitors.

    PubMed

    Shih, Kuei-Chung; Shiau, Chung-Wai; Chen, Ting-Shou; Ko, Ching-Huai; Lin, Chih-Lung; Lin, Chun-Yuan; Hwang, Chrong-Shiong; Tang, Chuan-Yi; Chen, Wan-Ru; Huang, Jui-Wen

    2011-08-01

    A chemical-feature-based 3D pharmacophore model for the REarranged during Transfection (RET) tyrosine kinase was developed using a training set of 26 structurally diverse known RET inhibitors. The best pharmacophore hypothesis, which identified inhibitors with an associated correlation coefficient of 0.90 between their experimental and estimated anti-RET values, contained one hydrogen-bond acceptor, one hydrogen-bond donor, one hydrophobic feature, and one ring-aromatic feature. The model was further validated by a testing set, Fischer's randomization test, and the goodness of hit (GH) test. We applied this pharmacophore model to screen the NCI database for potential RET inhibitors. The hits were docked to RET with GOLD and CDOCKER after filtering by Lipinski's rules. Ultimately, 24 molecules were selected as potential RET inhibitors for further investigation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. An application of change-point recursive models to the relationship between litter size and number of stillborns in pigs.

    PubMed

    Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L

    2010-11-01

    We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not present differences between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.

  12. Non-parametric wall model and methods of identifying boundary conditions for moments in gas flow equations

    NASA Astrophysics Data System (ADS)

    Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent

    2018-03-01

    In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information for gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients of parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Different from parametric kernels, the non-parametric kernels require no parameters (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
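
    A hedged sketch of what a non-parametric scattering kernel can look like: estimate the joint density of incident and reflected tangential velocities with a Gaussian kernel density estimate, instead of assuming a Maxwell or Cercignani-Lampis form with accommodation coefficients. The "collision data" below are synthetic stand-ins for MD output.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    n = 5000
    v_in = rng.normal(0.0, 300.0, n)               # incident tangential velocity (m/s)
    v_out = 0.4*v_in + rng.normal(0.0, 250.0, n)   # synthetic wall response (stand-in for MD data)

    kde = gaussian_kde(np.vstack([v_in, v_out]))   # joint density p(v_in, v_out)

    # Conditional kernel R(v_out | v_in = 100 m/s), normalized over a velocity grid
    grid = np.linspace(-1000.0, 1000.0, 201)
    cond = kde(np.vstack([np.full_like(grid, 100.0), grid]))
    cond /= trapezoid(cond, grid)
    print("mean reflected velocity given v_in = 100 m/s:", trapezoid(grid*cond, grid))
    ```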

  13. Comprehensive Analyses of Ventricular Myocyte Models Identify Targets Exhibiting Favorable Rate Dependence

    PubMed Central

    Bugana, Marco; Severi, Stefano; Sobie, Eric A.

    2014-01-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration

  14. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    PubMed

    Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A

    2014-03-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration

  15. Using cloud models of heartbeats as the entity identifier to secure mobile devices.

    PubMed

    Fu, Donglai; Liu, Yanhua

    2017-01-01

    Mobile devices are extensively used to store private and often sensitive information. Therefore, it is important to protect them against unauthorised access. Authentication ensures that only authorised users can use mobile devices. However, traditional authentication methods, such as numerical or graphic passwords, are vulnerable to passive attacks; for example, an adversary can steal the password by snooping from a short distance. To avoid these problems, this study presents a biometric approach that uses cloud models of heartbeats as the entity identifier to secure mobile devices. Note that the cloud model used here is a cognitive model and has nothing to do with cloud computing. In the proposed method, heartbeats are collected by two ECG electrodes connected to a mobile device. The backward normal cloud generator is used to generate ECG standard cloud models characterising the heartbeat template. When a user tries to access their mobile device, cloud models regenerated from fresh heartbeats are compared with the ECG standard cloud models to determine whether the current user can use the device. The authentication method was evaluated in terms of accuracy, authentication time and energy consumption: it gives a true acceptance rate of 86.04% with a false acceptance rate of 2.73%, one authentication takes about 6 s, and the processing consumes about 2000 mW of power.
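
    A sketch of a backward normal cloud generator in its standard moment-based form (whether this exact variant matches the paper's is an assumption): from feature samples it estimates the cloud model's three digital characteristics, expectation Ex, entropy En and hyper-entropy He. The heartbeat features below are synthetic.

    ```python
    import numpy as np

    def backward_normal_cloud(samples):
        x = np.asarray(samples, dtype=float)
        ex = x.mean()                                        # expectation Ex
        en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))  # entropy En
        he = np.sqrt(max(x.var(ddof=1) - en**2, 0.0))        # hyper-entropy He
        return ex, en, he

    rng = np.random.default_rng(0)
    rr_intervals = rng.normal(0.85, 0.05, 200)   # synthetic heartbeat (RR-interval) features, in seconds
    print("Ex, En, He:", backward_normal_cloud(rr_intervals))
    ```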

  16. Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students

    ERIC Educational Resources Information Center

    Nugent, Rebecca; Ayers, Elizabeth; Dean, Nema

    2009-01-01

    In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach:…

  17. Low Mach number fluctuating hydrodynamics for electrolytes

    DOE PAGES

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; ...

    2016-11-18

    Here, we formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are also interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids (A. Donev, et al., Physics of Fluids, 27, 3, 2015), we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. Furthermore, we demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second-order in the deterministic setting, and for length scales much greater than the Debye length gives results consistent with an electroneutral/ambipolar approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  18. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  19. Using SMAP to identify structural errors in hydrologic models

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Reichle, R. H.; Chen, F.; Xia, Y.; Liu, Q.

    2017-12-01

    Despite decades of effort, and the development of progressively more complex models, there continues to be underlying uncertainty regarding the representation of basic water and energy balance processes in land surface models. Soil moisture occupies a central conceptual position between atmospheric forcing of the land surface and the resulting surface water fluxes. As such, direct observations of soil moisture are potentially of great value for identifying and correcting fundamental structural problems affecting these models. However, to date, this potential has not yet been realized using satellite-based retrieval products. Using soil moisture data sets produced by the NASA Soil Moisture Active/Passive mission, this presentation will explore the use of the remotely-sensed soil moisture data products as a constraint to reject certain types of surface runoff parameterizations within a land surface model. Results will demonstrate that the precision of the SMAP Level 4 Surface and Root-Zone soil moisture product allows for the robust sampling of correlation statistics describing the true strength of the relationship between pre-storm soil moisture and subsequent storm-scale runoff efficiency (i.e., total storm flow divided by total rainfall, both in units of depth). For a set of 16 basins located in the South-Central United States, we will use these sampled correlations to demonstrate that so-called "infiltration-excess" runoff parameterizations underpredict the importance of pre-storm soil moisture for determining storm-scale runoff efficiency. To conclude, we will discuss prospects for leveraging this insight to improve short-term hydrologic forecasting and additional avenues for SMAP soil moisture products to provide process-level insight for hydrologic modelers.
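
    A small sketch of the correlation statistic described above (synthetic numbers, hypothetical arrays): the rank correlation between pre-storm soil moisture and storm-scale runoff efficiency, defined as total storm flow divided by total rainfall, both as depths.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    pre_storm_sm = rng.uniform(0.10, 0.40, 60)     # pre-storm soil moisture (m3/m3), synthetic
    runoff_eff = np.clip(0.8*pre_storm_sm + rng.normal(0.0, 0.05, 60), 0.0, 1.0)

    rho, p = spearmanr(pre_storm_sm, runoff_eff)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
    ```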

  20. A Fuzzy Computing Model for Identifying Polarity of Chinese Sentiment Words

    PubMed Central

    Huang, Yongfeng; Wu, Xian; Li, Xing

    2015-01-01

    With the spurt of online user-generated content on the web, sentiment analysis has become a very active research issue in data mining and natural language processing. As the most important indicator of sentiment, sentiment words, which convey positive and negative polarity, are quite instrumental for sentiment analysis. However, most of the existing methods for identifying the polarity of sentiment words only consider positive and negative polarity as crisp (Cantor) sets, and pay no attention to the fuzziness of the polarity intensity of sentiment words. To improve the performance, in this paper we propose a fuzzy computing model to identify the polarity of Chinese sentiment words. There are three major contributions in this paper. Firstly, we propose a method to compute the polarity intensity of sentiment morphemes and sentiment words. Secondly, we construct a fuzzy sentiment classifier and propose two different methods to compute the parameter of the fuzzy classifier. Thirdly, we conduct extensive experiments on four sentiment-word datasets and three review datasets, and the experimental results indicate that our model performs better than state-of-the-art methods. PMID:26106409

  1. Development of Novel Antibiotic Lysocin E Identified by Silkworm Infection Model.

    PubMed

    Hamamoto, Hiroshi; Sekimizu, Kazuhisa

    2017-01-01

    In this symposium, we reported the identification and mechanistic analysis of a novel antibiotic named lysocin E. Lysocin E was identified by screening for therapeutic effectiveness in a silkworm Staphylococcus aureus infection model. The advantages of the silkworm infection model for screening and purifying antibiotics from the culture supernatant of soil bacteria are: 1) low cost; 2) no ethical issues; 3) convenience for evaluating the therapeutic effectiveness of antibiotics; and 4) pharmacokinetics similar to those of mammals. Lysocin E has remarkable features compared with known antibiotics, such as a novel mechanism of action and a novel target. Here, we summarize our reports presented in this symposium.

  2. Global Sensitivity Analysis of OnGuard Models Identifies Key Hubs for Transport Interaction in Stomatal Dynamics

    PubMed Central

    Vialet-Chabrand, Silvere; Griffiths, Howard

    2017-01-01

    The physical requirement for charge to balance across biological membranes means that the transmembrane transport of each ionic species is interrelated, and manipulating solute flux through any one transporter will affect other transporters at the same membrane, often with unforeseen consequences. The OnGuard systems modeling platform has helped to resolve the mechanics of stomatal movements, uncovering previously unexpected behaviors of stomata. To date, however, the manual approach to exploring model parameter space has captured little formal information about the emergent connections between parameters that define the most interesting properties of the system as a whole. Here, we introduce global sensitivity analysis to identify interacting parameters affecting a number of outputs commonly accessed in experiments in Arabidopsis (Arabidopsis thaliana). The analysis highlights synergies between transporters affecting the balance between Ca2+ sequestration and Ca2+ release pathways, notably those associated with internal Ca2+ stores and their turnover. Other, unexpected synergies appear, including with the plasma membrane anion channels and H+-ATPase and with the tonoplast TPK K+ channel. These emergent synergies, and the core hubs of interaction that they define, identify subsets of transporters associated with free cytosolic Ca2+ concentration that represent key targets to enhance plant performance in the future. They also highlight the importance of interactions between the voltage regulation of the plasma membrane and tonoplast in coordinating transport between the different cellular compartments. PMID:28432256

  3. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  4. A New Scheme to Characterize and Identify Protein Ubiquitination Sites.

    PubMed

    Nguyen, Van-Nui; Huang, Kai-Yao; Huang, Chien-Hsun; Lai, K Robert; Lee, Tzong-Yi

    2017-01-01

    Protein ubiquitination, involving the conjugation of ubiquitin on lysine residue, serves as an important modulator of many cellular functions in eukaryotes. Recent advancements in proteomic technology have stimulated increasing interest in identifying ubiquitination sites. However, most computational tools for predicting ubiquitination sites are focused on small-scale data. With an increasing number of experimentally verified ubiquitination sites, we were motivated to design a predictive model for identifying lysine ubiquitination sites for large-scale proteome dataset. This work assessed not only single features, such as amino acid composition (AAC), amino acid pair composition (AAPC) and evolutionary information, but also the effectiveness of incorporating two or more features into a hybrid approach to model construction. The support vector machine (SVM) was applied to generate the prediction models for ubiquitination site identification. Evaluation by five-fold cross-validation showed that the SVM models learned from the combination of hybrid features delivered a better prediction performance. Additionally, a motif discovery tool, MDDLogo, was adopted to characterize the potential substrate motifs of ubiquitination sites. The SVM models integrating the MDDLogo-identified substrate motifs could yield an average accuracy of 68.70 percent. Furthermore, the independent testing result showed that the MDDLogo-clustered SVM models could provide a promising accuracy (78.50 percent) and perform better than other prediction tools. Two cases have demonstrated the effective prediction of ubiquitination sites with corresponding substrate motifs.

  5. Finding needles in a haystack: a methodology for identifying and sampling community-based youth smoking cessation programs.

    PubMed

    Emery, Sherry; Lee, Jungwha; Curry, Susan J; Johnson, Tim; Sporer, Amy K; Mermelstein, Robin; Flay, Brian; Warnecke, Richard

    2010-02-01

    Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the absence of a defined sample frame. We developed a two-stage sampling design, with counties as the first-stage probability sampling units. The second stage used snowball sampling to saturation, to identify individuals who administered youth smoking cessation programs across three economic sectors in each county. Multivariate analyses modeled the relationship between program screening, eligibility, and response rates and economic sector and stratification criteria. Cumulative logit models analyzed the relationship between the number of contacts in a county and the number of programs screened, eligible, or profiled in a county. The snowball process yielded 9,983 unique and traceable contacts. Urban and high-income counties yielded significantly more screened program administrators; urban counties produced significantly more eligible programs, but there was no significant association between the county characteristics and program response rate. There is a positive relationship between the number of informants initially located and the number of programs screened, eligible, and profiled in a county. Our strategy to identify youth tobacco cessation programs could be used to create a sample frame for other nonprofit organizations that are difficult to identify due to a lack of existing directories, lists, or other traditional sample frames.

  6. Random-effects meta-analysis: the number of studies matters.

    PubMed

    Guolo, Annamaria; Varin, Cristiano

    2017-06-01

    This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that inference in random-effects models requires a substantial number of studies included in meta-analysis to guarantee reliable conclusions. Several authors warn about the risk of inaccurate results of the traditional DerSimonian and Laird approach especially in the common case of meta-analysis involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and the performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, according to scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that it is not possible to identify an approach uniformly superior to alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of meta-analysis studies is modest and prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
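
    For reference, the DerSimonian and Laird estimator that the paper warns about is simple enough to sketch directly; the effect sizes and within-study variances below are invented for illustration.

        import numpy as np

        # Hypothetical study-level effect estimates and their within-study variances
        y = np.array([0.30, 0.12, 0.45, -0.05, 0.20])
        v = np.array([0.04, 0.09, 0.05, 0.08, 0.06])
        k = len(y)

        # Fixed-effect weights and Cochran's Q statistic
        w = 1.0 / v
        mu_fe = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - mu_fe) ** 2)

        # DerSimonian-Laird moment estimate of the between-study variance tau^2
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        # Random-effects pooled estimate and its standard error
        w_re = 1.0 / (v + tau2)
        mu_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.3f} (SE {se_re:.3f})")

    With only five studies, as here, the paper's point is precisely that this moment estimator and its standard error can be unreliable, so likelihood-based alternatives should be compared.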

  7. Transcriptomic Analysis in a Drosophila Model Identifies Previously Implicated and Novel Pathways in the Therapeutic Mechanism in Neuropsychiatric Disorders

    PubMed Central

    Singh, Priyanka; Mohammad, Farhan; Sharma, Abhay

    2011-01-01

    We have taken advantage of a newly described Drosophila model to gain insights into the potential mechanism of antiepileptic drugs (AEDs), a group of drugs that are widely used in the treatment of several neurological and psychiatric conditions besides epilepsy. In the recently described Drosophila model that is inspired by pentylenetetrazole (PTZ) induced kindling epileptogenesis in rodents, chronic PTZ treatment for 7 days causes a decreased climbing speed and an altered CNS transcriptome, with the latter mimicking gene expression alterations reported in epileptogenesis. In the model, an increased climbing speed is further observed 7 days after withdrawal from chronic PTZ. We used this post-PTZ withdrawal regime to identify potential AED mechanism. In this regime, treatment with each of the five AEDs tested, namely, ethosuximide, gabapentin, vigabatrin, sodium valproate, and levetiracetam, resulted in rescuing of the altered climbing behavior. The AEDs also normalized PTZ withdrawal induced transcriptomic perturbation in fly heads; whereas AED untreated flies showed a large number of up- and down-regulated genes which were enriched in several processes including gene expression and cell communication, the AED treated flies showed differential expression of only a small number of genes that did not enrich gene expression and cell communication processes. Gene expression and cell communication related upregulated genes in AED untreated flies overrepresented several pathways – spliceosome, RNA degradation, and ribosome in the former category, and inositol phosphate metabolism, phosphatidylinositol signaling, endocytosis, and hedgehog signaling in the latter. Transcriptome remodeling effect of AEDs was overall confirmed by microarray clustering that clearly separated the profiles of AED treated and untreated flies. Besides being consistent with previously implicated pathways, our results provide evidence for a role of other pathways in psychiatric drug

  8. A Division-Dependent Compartmental Model for Computing Cell Numbers in CFSE-based Lymphocyte Proliferation Assays

    DTIC Science & Technology

    2012-02-12

    is the total number of data points, is an approximately unbiased estimate of the “expected relative Kullback-Leibler distance” (information loss ... possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use ... computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data

  9. Sieve analysis using the number of infecting pathogens.

    PubMed

    Follmann, Dean; Huang, Chiung-Yu

    2017-12-14

    Assessment of vaccine efficacy as a function of the similarity of the infecting pathogen to the vaccine is an important scientific goal. Characterization of pathogen strains for which vaccine efficacy is low can increase understanding of the vaccine's mechanism of action and offer targets for vaccine improvement. Traditional sieve analysis estimates differential vaccine efficacy using a single identifiable pathogen for each subject. The similarity between this single entity and the vaccine immunogen is quantified, for example, by exact match or number of mismatched amino acids. With new technology, we can now obtain the actual count of genetically distinct pathogens that infect an individual. Let F be the number of distinct features of a species of pathogen. We assume a log-linear model for the expected number of infecting pathogens with feature "f," f=1,…,F. The model can be used directly in studies with passive surveillance of infections where the count of each type of pathogen is recorded at the end of some interval, or active surveillance where the time of infection is known. For active surveillance, we additionally assume that a proportional intensity model applies to the time of potentially infectious exposures and derive product and weighted estimating equation (WEE) estimators for the regression parameters in the log-linear model. The WEE estimator explicitly allows for waning vaccine efficacy and time-varying distributions of pathogens. We give conditions where sieve parameters have a per-exposure interpretation under passive surveillance. We evaluate the methods by simulation and analyze a phase III trial of a malaria vaccine. © 2017, The International Biometric Society.
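
    The log-linear count model at the core of the approach can be illustrated with an ordinary Poisson regression; the sketch below simulates per-subject pathogen counts with a vaccine indicator, a hypothetical feature indicator, and their interaction, and fits the model with statsmodels. It is not the paper's product or weighted estimating-equation estimator.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        vaccine = rng.integers(0, 2, size=n)             # 1 = vaccinated subject
        feature = rng.integers(0, 2, size=n)             # 1 = pathogen feature f present (hypothetical)
        # Simulate from a log-linear model in which the vaccine lowers the expected count,
        # more so for feature-matched pathogens (the interaction is the sieve effect).
        log_mu = 0.5 - 0.6 * vaccine - 0.2 * feature - 0.5 * vaccine * feature
        counts = rng.poisson(np.exp(log_mu))

        X = sm.add_constant(np.column_stack([vaccine, feature, vaccine * feature]))
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        print(fit.summary())  # the interaction coefficient estimates differential vaccine efficacy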

  10. 7 CFR 46.20 - Lot numbers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...

  11. 7 CFR 46.20 - Lot numbers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...

  12. 7 CFR 46.20 - Lot numbers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...

  13. 7 CFR 46.20 - Lot numbers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...

  14. 7 CFR 46.20 - Lot numbers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...

  15. Probability modeling of the number of positive cores in a prostate cancer biopsy session, with applications.

    PubMed

    Serfling, Robert; Ogola, Gerald

    2016-02-10

    Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
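
    The flavor of such a model can be conveyed with a deliberately crude sketch: if a single tumor nodule of volume v sits in a prostate of volume V and each of n cores independently hits the nodule with probability roughly v/V, the number of positive cores is approximately binomial. This is a toy assumption for illustration only, not the model developed in the paper.

        from scipy.stats import binom

        def positive_core_pmf(n_cores, tumor_volume, prostate_volume, k):
            """Toy model: P(k positive cores) with per-core hit probability ~ tumor/prostate volume."""
            p_hit = min(1.0, tumor_volume / prostate_volume)
            return binom.pmf(k, n_cores, p_hit)

        # Example: 12 cores sampling a 2 cc nodule in a 40 cc prostate
        for k in range(4):
            print(k, round(positive_core_pmf(12, 2.0, 40.0, k), 3))

    Even this toy version makes the paper's central point visible: for a fixed tumor volume, a larger prostate drives down the expected number of positive cores, so both the number of cores and the interpretation of results should be conditioned on prostate volume.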

  16. Identifying the metabolic differences of a fast-growth phenotype in Synechococcus UTEX 2973

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Thomas J.; Ungerer, Justin L.; Pakrasi, Himadri B.

    The photosynthetic capabilities of cyanobacteria make them interesting candidates for industrial bioproduction. One obstacle to large-scale implementation of cyanobacteria is their limited growth rates as compared to industrial mainstays. Synechococcus UTEX 2973, a strain closely related to Synechococcus PCC 7942, was recently identified as having the fastest measured growth rate among cyanobacteria. To facilitate the development of 2973 as a model organism we developed in this study the genome-scale metabolic model iSyu683. Experimental data were used to define CO2 uptake rates as well as the biomass compositions for each strain. The inclusion of constraints based on experimental measurements of CO2 uptake resulted in a ratio of the growth rates of Synechococcus 2973 to Synechococcus 7942 of 2.03, which nearly recapitulates the in vivo growth rate ratio of 2.13. This identified the difference in carbon uptake rate as the main factor contributing to the divergent growth rates. Additionally four SNPs were identified as possible contributors to modified kinetic parameters of metabolic enzymes and candidates for further study. As a result, comparisons against more established cyanobacterial strains identified a number of differences between the strains along with a correlation between the number of cytochrome c oxidase operons and heterotrophic or diazotrophic capabilities.

  17. Identifying the metabolic differences of a fast-growth phenotype in Synechococcus UTEX 2973

    DOE PAGES

    Mueller, Thomas J.; Ungerer, Justin L.; Pakrasi, Himadri B.; ...

    2017-01-31

    The photosynthetic capabilities of cyanobacteria make them interesting candidates for industrial bioproduction. One obstacle to large-scale implementation of cyanobacteria is their limited growth rates as compared to industrial mainstays. Synechococcus UTEX 2973, a strain closely related to Synechococcus PCC 7942, was recently identified as having the fastest measured growth rate among cyanobacteria. To facilitate the development of 2973 as a model organism we developed in this study the genome-scale metabolic model iSyu683. Experimental data were used to define CO2 uptake rates as well as the biomass compositions for each strain. The inclusion of constraints based on experimental measurements of CO2 uptake resulted in a ratio of the growth rates of Synechococcus 2973 to Synechococcus 7942 of 2.03, which nearly recapitulates the in vivo growth rate ratio of 2.13. This identified the difference in carbon uptake rate as the main factor contributing to the divergent growth rates. Additionally four SNPs were identified as possible contributors to modified kinetic parameters of metabolic enzymes and candidates for further study. As a result, comparisons against more established cyanobacterial strains identified a number of differences between the strains along with a correlation between the number of cytochrome c oxidase operons and heterotrophic or diazotrophic capabilities.

  18. The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model

    NASA Astrophysics Data System (ADS)

    Kohl, R.

    The Levy-Levy-Solomon stock market model is simulated with more than one stock to analyze its behavior for a large number of investors. Small markets can produce realistic-looking prices for one or more stocks. With a large number of investors, the simulation of a single stock settles into a semi-regular pattern; with many stocks, three of the stocks are semi-regular and dominant while the rest behave chaotically. We also varied the utility function and examined its effect on the results.

  19. Energy information data base: report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)

  20. Investigation of Transonic Reynolds Number Scaling on a Twin-Engine Transport

    NASA Technical Reports Server (NTRS)

    Curtin, M. M.; Bogue, D. R.; Om, D.; Rivers, S. M. B.; Pendergraft, O. C., Jr.; Wahls, R. A.

    2002-01-01

    This paper discusses Reynolds number scaling for aerodynamic parameters including force and wing pressure measurements. A full-span model of the Boeing 777 configuration was tested at transonic conditions in the National Transonic Facility (NTF) at Reynolds numbers (based on mean aerodynamic chord) from 3.0 to 40.0 million. Data were obtained for a tail-off configuration both with and without wing vortex generators and flap support fairings. The effects of aeroelastics were separated from Reynolds number effects by varying total pressure and temperature independently. Data from the NTF at flight Reynolds number are compared with flight data to establish the wind tunnel/flight correlation. The importance of high Reynolds number testing and the need for developing a process for transonic Reynolds number scaling is discussed. This paper also identifies issues that need to be addressed for Boeing Commercial to continue to conduct future high Reynolds number testing in the NTF.

  1. Electron heating in a Monte Carlo model of a high Mach number, supercritical, collisionless shock

    NASA Technical Reports Server (NTRS)

    Ellison, Donald C.; Jones, Frank C.

    1987-01-01

    Preliminary work in the investigation of electron injection and acceleration at parallel shocks is presented. A simple model of electron heating that is derived from a unified shock model which includes the effects of an electrostatic potential jump is described. The unified shock model provides a kinetic description of the injection and acceleration of ions and a fluid description of electron heating at high Mach number, supercritical, and parallel shocks.

  2. Application of a number-conserving boson expansion theory to Ginocchio's SO(8) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.h.; Pedrocchi, V.G.; Tamura, T.

    1986-05-01

    A boson expansion theory based on a number-conserving quasiparticle approach is applied to Ginocchio's SO(8) fermion model. Energy spectra and E2 transition rates calculated by using this new boson mapping are presented and compared against the exact fermion values. A comparison with other boson approaches is also given.

  3. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
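
    The projection step for a one-dimensional polynomial chaos expansion with a standard-normal input can be sketched with probabilists' Hermite polynomials and Gauss-Hermite quadrature from NumPy. A smooth toy function stands in for the climate model, and there is no discontinuity detection or Rosenblatt transformation here.

        import numpy as np
        from numpy.polynomial import hermite_e as He
        from math import factorial, sqrt, pi

        def model(x):
            # Hypothetical smooth forward model of one standard-normal parameter x
            return np.exp(0.3 * x) + 0.5 * x

        order = 6
        nodes, weights = He.hermegauss(30)      # quadrature nodes/weights for weight exp(-x^2/2)
        norm = sqrt(2.0 * pi)                   # normalizing constant of the standard normal

        # PC coefficients c_n = E[model(X) He_n(X)] / n!, since E[He_n(X)^2] = n! for X ~ N(0,1)
        coeffs = []
        for deg in range(order + 1):
            basis = He.hermeval(nodes, [0.0] * deg + [1.0])
            coeffs.append(np.sum(weights * model(nodes) * basis) / norm / factorial(deg))

        # Evaluate the PC surrogate against the model at a test point
        x_test = 0.7
        print(f"model: {model(x_test):.5f}  surrogate: {He.hermeval(x_test, coeffs):.5f}")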

  4. Lepton-number-charged scalars and neutrino beamstrahlung

    NASA Astrophysics Data System (ADS)

    Berryman, Jeffrey M.; de Gouvêa, André; Kelly, Kevin J.; Zhang, Yue

    2018-04-01

    Experimentally, baryon number minus lepton number, B-L, appears to be a good global symmetry of nature. We explore the consequences of the existence of gauge-singlet scalar fields charged under B-L, dubbed lepton-number-charged scalars (LeNCSs), and postulate that these couple to the standard model degrees of freedom in such a way that B-L is conserved even at the nonrenormalizable level. In this framework, neutrinos are Dirac fermions. Including only the lowest mass-dimension effective operators, some of the LeNCSs couple predominantly to neutrinos and may be produced in terrestrial neutrino experiments. We examine several existing constraints from particle physics, astrophysics, and cosmology on the existence of a LeNCS carrying B-L charge equal to two, and discuss the emission of LeNCSs via "neutrino beamstrahlung," which occurs every once in a while when neutrinos scatter off ordinary matter. We identify regions of the parameter space where existing and future neutrino experiments, including the Deep Underground Neutrino Experiment, are at the frontier of searches for such new phenomena.

  5. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic

  6. Using hierarchical cluster models to systematically identify groups of jobs with similar occupational questionnaire response patterns to assist rule-based expert exposure assessment in population-based studies.

    PubMed

    Friesen, Melissa C; Shortreed, Susan M; Wheeler, David C; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S; Baris, Dalsu; Karagas, Margaret R; Schwenn, Molly; Johnson, Alison; Armenti, Karla R; Silverman, Debra T; Yu, Kai

    2015-05-01

    Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m(-3) respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters' homogeneity (defined as >75% with the same estimate) was examined compared
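
    A minimal sketch of the data-reduction step, assuming the diesel-related questionnaire responses are coded as a 0/1 matrix; SciPy's hierarchical clustering is used purely to illustrate cutting a cluster tree into a chosen number of groups, with made-up data in place of the study's jobs.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(7)
        # Hypothetical matrix: 2000 jobs x 20 binary diesel-related questionnaire responses
        responses = rng.integers(0, 2, size=(2000, 20))

        # Jaccard distance between response patterns, average-linkage cluster tree
        dist = pdist(responses, metric="jaccard")
        tree = linkage(dist, method="average")

        # Cut the tree into a fixed number of clusters (the study varied this from 100 to 1000)
        for n_clusters in (100, 500, 1000):
            labels = fcluster(tree, t=n_clusters, criterion="maxclust")
            print(n_clusters, "requested ->", len(np.unique(labels)), "groups of jobs")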

  7. Promoter-enhancer interactions identified from Hi-C data using probabilistic models and hierarchical topological domains.

    PubMed

    Ron, Gil; Globerson, Yuval; Moran, Dror; Kaplan, Tommy

    2017-12-21

    Proximity-ligation methods such as Hi-C allow us to map physical DNA-DNA interactions along the genome, and reveal its organization into topologically associating domains (TADs). As the Hi-C data accumulate, computational methods were developed for identifying domain borders in multiple cell types and organisms. Here, we present PSYCHIC, a computational approach for analyzing Hi-C data and identifying promoter-enhancer interactions. We use a unified probabilistic model to segment the genome into domains, which we then merge hierarchically and fit using a local background model, allowing us to identify over-represented DNA-DNA interactions across the genome. By analyzing the published Hi-C data sets in human and mouse, we identify hundreds of thousands of putative enhancers and their target genes, and compile an extensive genome-wide catalog of gene regulation in human and mouse. As we show, our predictions are highly enriched for ChIP-seq and DNA accessibility data, evolutionary conservation, eQTLs and other DNA-DNA interaction data.

  8. Identifying developmental vascular disruptor compounds using a predictive signature and alternative toxicity models

    EPA Science Inventory

    Identifying Developmental Vascular Disruptor Compounds Using a Predictive Signature and Alternative Toxicity Models Presenting Author: Tamara Tal Affiliation: U.S. EPA/ORD/ISTD, RTP, NC, USA Chemically induced vascular toxicity during embryonic development can result in a wide...

  9. A Statistical Test for Identifying the Number of Creep Regimes When Using the Wilshire Equations for Creep Property Predictions

    NASA Astrophysics Data System (ADS)

    Evans, Mark

    2016-12-01

    A new parametric approach, termed the Wilshire equations, offers the realistic potential of accurately predicting the life of materials operating at in-service conditions from accelerated test results lasting no more than 5000 hours. The success of this approach can be attributed to a well-defined linear relationship that appears to exist between various creep properties and a log transformation of the normalized stress. However, these linear trends are subject to discontinuities, the number of which appears to differ from material to material. These discontinuities have until now been (1) treated as abrupt in nature and (2) identified by eye from an inspection of simple graphical plots of the data. This article puts forward a statistical test for determining the correct number of discontinuities present within a creep data set and a method for allowing these discontinuities to occur more gradually, so that the methodology is more in line with the accepted view as to how creep mechanisms evolve with changing test conditions. These two developments are fully illustrated using creep data sets on two steel alloys. When these new procedures are applied to these steel alloys, not only do they produce more accurate and realistic-looking long-term predictions of the minimum creep rate, but they also lead to different conclusions about the mechanisms determining the rates of creep from those originally put forward by Wilshire.
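
    The spirit of testing for the number of regimes can be sketched with an F-test that compares a single straight line against a continuous two-segment fit, scanning a grid of candidate break positions. The data are synthetic and this is a generic changepoint illustration, not the test derived in the article.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 60)            # e.g. a log transformation of normalized stress
        y = np.where(x < 0.5, 1.0 - 2.0 * x, -1.5 * x + 0.75) + rng.normal(0.0, 0.05, x.size)

        def rss(design, y):
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            return np.sum((y - design @ beta) ** 2)

        # One regime: straight line (2 parameters)
        rss1 = rss(np.column_stack([np.ones_like(x), x]), y)

        # Two regimes: continuous piecewise line with a hinge term at the best candidate break (3 parameters)
        rss2 = min(rss(np.column_stack([np.ones_like(x), x, np.maximum(x - b, 0.0)]), y)
                   for b in x[10:-10])

        # F-test for the extra hinge term; note the p-value is optimistic because the
        # break location was chosen by search rather than fixed in advance.
        df1, df2 = 1, x.size - 3
        F = (rss1 - rss2) / df1 / (rss2 / df2)
        print(f"F = {F:.1f}, p = {stats.f.sf(F, df1, df2):.2e}")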

  10. Old and New Magic Numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talmi, Igal

    2008-11-11

    The discovery of magic numbers led to the shell model. They indicated closure of major shells and are robust: proton magic numbers are rather independent of the occupation of neutron orbits and vice versa. Recently the magic property became less stringent and we hear a lot about the discovery of new magic numbers. These, however, indicate sub-shell closures and strongly depend on occupation numbers and hence may be called quasi-magic numbers. Some of these have been known for many years and the mechanism for their appearance as well as disappearance was well understood within the simple shell model. The situation will be illustrated by a few examples which demonstrate the simple features of the shell model. Will this simplicity emerge from the complex computations of nuclear many-body theory?

  11. The Baby TALK Model: An Innovative Approach to Identifying High-Risk Children and Families

    ERIC Educational Resources Information Center

    Villalpando, Aimee Hilado; Leow, Christine; Hornstein, John

    2012-01-01

    This research report examines the Baby TALK model, an innovative early childhood intervention approach used to identify, recruit, and serve young children who are at-risk for developmental delays, mental health needs, and/or school failure, and their families. The report begins with a description of the model. This description is followed by an…

  12. Punctuated Copy Number Evolution and Clonal Stasis in Triple-Negative Breast Cancer

    PubMed Central

    Gao, Ruli; Davis, Alexander; McDonald, Thomas O.; Sei, Emi; Shi, Xiuqing; Wang, Yong; Tsai, Pei-Ching; Casasent, Anna; Waters, Jill; Zhang, Hong; Meric-Bernstam, Funda; Michor, Franziska; Navin, Nicholas E.

    2016-01-01

    Aneuploidy is a hallmark of breast cancer; however, our knowledge of how these complex genomic rearrangements evolve during tumorigenesis is limited. In this study we developed a highly multiplexed single-nucleus-sequencing method to investigate copy number evolution in triple-negative breast cancer patients. We sequenced 1000 single cells from 12 patients and identified 1–3 major clonal subpopulations in each tumor that shared a common evolutionary lineage. We also identified a minor subpopulation of non-clonal cells that were classified as: 1) metastable, 2) pseudo-diploid, or 3) chromazemic. Phylogenetic analysis and mathematical modeling suggest that these data are unlikely to be explained by the gradual accumulation of copy number events over time. In contrast, our data challenge the paradigm of gradual evolution, showing that the majority of copy number aberrations are acquired at the earliest stages of tumor evolution, in short punctuated bursts, followed by stable clonal expansions that form the tumor mass. PMID:27526321

  13. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
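
    A generic sketch of a local, sensitivity-based identifiability check (not the orthogonal method the authors integrate into the UKF): build the output-parameter sensitivity matrix by finite differences and count the singular values above a tolerance. The toy model and its parameters are hypothetical.

        import numpy as np

        def simulate(params, t=np.linspace(0.0, 10.0, 50)):
            # Hypothetical kinetic-style model: y(t) = a * exp(-b t) + c
            a, b, c = params
            return a * np.exp(-b * t) + c

        p0 = np.array([2.0, 0.5, 0.1])
        y0 = simulate(p0)
        eps = 1e-6

        # Scaled sensitivity matrix: S[i, j] = (d y_i / d p_j) * p_j
        S = np.empty((y0.size, p0.size))
        for j in range(p0.size):
            p = p0.copy()
            p[j] += eps
            S[:, j] = (simulate(p) - y0) / eps * p0[j]

        sv = np.linalg.svd(S, compute_uv=False)
        n_identifiable = int(np.sum(sv > 1e-6 * sv[0]))
        print("singular values:", np.round(sv, 4))
        print("locally identifiable parameter combinations:", n_identifiable)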

  14. Identifying arbitrary parameter zonation using multiple level set functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise-heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  15. Identifying arbitrary parameter zonation using multiple level set functions

    DOE PAGES

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    2018-03-14

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise-heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  16. Replicates, read numbers, and other important experimental design considerations for microbial RNA-seq identified using Bacillus thuringiensis datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Tse -Yuan; Mehlhorn, Tonia L; Pelletier, Dale A.

    RNA-seq is being used increasingly for gene expression studies and it is revolutionizing the fields of genomics and transcriptomics. However, the field of RNA-seq analysis is still evolving. Therefore, we specifically designed this study to contain large numbers of reads and four biological replicates per condition so we could alter these parameters and assess their impact on differential expression results. Bacillus thuringiensis strains ATCC10792 and CT43 were grown in two Luria broth medium lots on four dates and transcriptomics data were generated using one lane of sequence output from an Illumina HiSeq2000 instrument for each of the 32 samples, which were then analyzed using DESeq2. Genome coverages across samples ranged from 87 to 465X with medium lots and culture dates identified as major variation sources. Significantly differentially expressed genes (5% FDR, two-fold change) were detected for cultures grown using different medium lots and between different dates. The highly differentially expressed iron acquisition and metabolism genes were a likely consequence of differing amounts of iron in the two media lots. Indeed, in this study RNA-seq was a tool for predictive biology since we hypothesized and confirmed the two LB medium lots had different iron contents (~two-fold difference). Furthermore, this study shows that the noise in data can be controlled and minimized with appropriate experimental design and by having the appropriate number of replicates and reads for the system being studied. We outline parameters for an efficient and cost-effective microbial transcriptomics study.

  17. Replicates, read numbers, and other important experimental design considerations for microbial RNA-seq identified using Bacillus thuringiensis datasets

    DOE PAGES

    Lu, Tse -Yuan; Mehlhorn, Tonia L; Pelletier, Dale A.; ...

    2016-05-31

    RNA-seq is being used increasingly for gene expression studies and it is revolutionizing the fields of genomics and transcriptomics. However, the field of RNA-seq analysis is still evolving. Therefore, we specifically designed this study to contain large numbers of reads and four biological replicates per condition so we could alter these parameters and assess their impact on differential expression results. Bacillus thuringiensis strains ATCC10792 and CT43 were grown in two Luria broth medium lots on four dates and transcriptomics data were generated using one lane of sequence output from an Illumina HiSeq2000 instrument for each of the 32 samples, which were then analyzed using DESeq2. Genome coverages across samples ranged from 87 to 465X with medium lots and culture dates identified as major variation sources. Significantly differentially expressed genes (5% FDR, two-fold change) were detected for cultures grown using different medium lots and between different dates. The highly differentially expressed iron acquisition and metabolism genes were a likely consequence of differing amounts of iron in the two media lots. Indeed, in this study RNA-seq was a tool for predictive biology since we hypothesized and confirmed the two LB medium lots had different iron contents (~two-fold difference). Furthermore, this study shows that the noise in data can be controlled and minimized with appropriate experimental design and by having the appropriate number of replicates and reads for the system being studied. We outline parameters for an efficient and cost-effective microbial transcriptomics study.

  18. Replicates, Read Numbers, and Other Important Experimental Design Considerations for Microbial RNA-seq Identified Using Bacillus thuringiensis Datasets.

    PubMed

    Manga, Punita; Klingeman, Dawn M; Lu, Tse-Yuan S; Mehlhorn, Tonia L; Pelletier, Dale A; Hauser, Loren J; Wilson, Charlotte M; Brown, Steven D

    2016-01-01

    RNA-seq is being used increasingly for gene expression studies and it is revolutionizing the fields of genomics and transcriptomics. However, the field of RNA-seq analysis is still evolving. Therefore, we specifically designed this study to contain large numbers of reads and four biological replicates per condition so we could alter these parameters and assess their impact on differential expression results. Bacillus thuringiensis strains ATCC10792 and CT43 were grown in two Luria broth medium lots on four dates and transcriptomics data were generated using one lane of sequence output from an Illumina HiSeq2000 instrument for each of the 32 samples, which were then analyzed using DESeq2. Genome coverages across samples ranged from 87 to 465X with medium lots and culture dates identified as major variation sources. Significantly differentially expressed genes (5% FDR, two-fold change) were detected for cultures grown using different medium lots and between different dates. The highly differentially expressed iron acquisition and metabolism genes were a likely consequence of differing amounts of iron in the two media lots. Indeed, in this study RNA-seq was a tool for predictive biology since we hypothesized and confirmed the two LB medium lots had different iron contents (~two-fold difference). This study shows that the noise in data can be controlled and minimized with appropriate experimental design and by having the appropriate number of replicates and reads for the system being studied. We outline parameters for an efficient and cost-effective microbial transcriptomics study.

  19. Longitudinal Study of Two Irish Dairy Herds: Low Numbers of Shiga Toxin-Producing Escherichia coli O157 and O26 Super-Shedders Identified.

    PubMed

    Murphy, Brenda P; McCabe, Evonne; Murphy, Mary; Buckley, James F; Crowley, Dan; Fanning, Séamus; Duffy, Geraldine

    2016-01-01

    A 12-month longitudinal study was undertaken on two dairy herds to ascertain the Shiga-toxin producing Escherichia coli (STEC) O157 and O26 shedding status of the animals and its impact (if any) on raw milk. Cattle are a recognized reservoir for these organisms with associated public health and environmental implications. Animals shedding E. coli O157 at >10,000 CFU/g of feces have been deemed super-shedders. There is a gap in the knowledge regarding super-shedding of other STEC serogroups. A cohort of 40 lactating cows from herds previously identified as positive for STEC in a national surveillance project were sampled every second month between August, 2013 and July, 2014. Metadata on any potential super-shedders was documented including, e.g., age of the animal, number of lactations and days in lactation, nutritional condition, somatic cell count and content of protein in milk to assess if any were associated with risk factors for super-shedding. Recto-anal mucosal swabs (RAMS), raw milk, milk filters, and water samples were procured for each herd. The swabs were examined for E. coli O157 and O26 using a quantitative real time PCR method. Counts (CFU swab -1 ) were obtained from a standard calibration curve that related real-time PCR cycle threshold ( C t ) values against the initial concentration of O157 or O26 in the samples. Results from Farm A: 305 animals were analyzed; 15 E. coli O157 (5%) were recovered, 13 were denoted STEC encoding either stx1 and/or stx2 virulence genes and 5 (2%) STEC O26 were recovered. One super-shedder was identified shedding STEC O26 ( stx1 &2). Farm B: 224 animals were analyzed; eight E. coli O157 (3.5%) were recovered (seven were STEC) and 9 (4%) STEC O26 were recovered. Three super-shedders were identified, one was shedding STEC O157 ( stx2 ) and two STEC O26 ( stx2 ). Three encoded the adhering and effacement gene ( eae) and one isolate additionally encoded the haemolysin gene ( hlyA ). All four super-shedders were only super

  20. Longitudinal Study of Two Irish Dairy Herds: Low Numbers of Shiga Toxin-Producing Escherichia coli O157 and O26 Super-Shedders Identified

    PubMed Central

    Murphy, Brenda P.; McCabe, Evonne; Murphy, Mary; Buckley, James F.; Crowley, Dan; Fanning, Séamus; Duffy, Geraldine

    2016-01-01

    A 12-month longitudinal study was undertaken on two dairy herds to ascertain the Shiga-toxin producing Escherichia coli (STEC) O157 and O26 shedding status of the animals and its impact (if any) on raw milk. Cattle are a recognized reservoir for these organisms with associated public health and environmental implications. Animals shedding E. coli O157 at >10,000 CFU/g of feces have been deemed super-shedders. There is a gap in the knowledge regarding super-shedding of other STEC serogroups. A cohort of 40 lactating cows from herds previously identified as positive for STEC in a national surveillance project were sampled every second month between August, 2013 and July, 2014. Metadata on any potential super-shedders was documented including, e.g., age of the animal, number of lactations and days in lactation, nutritional condition, somatic cell count and content of protein in milk to assess if any were associated with risk factors for super-shedding. Recto-anal mucosal swabs (RAMS), raw milk, milk filters, and water samples were procured for each herd. The swabs were examined for E. coli O157 and O26 using a quantitative real time PCR method. Counts (CFU swab-1) were obtained from a standard calibration curve that related real-time PCR cycle threshold (Ct) values against the initial concentration of O157 or O26 in the samples. Results from Farm A: 305 animals were analyzed; 15 E. coli O157 (5%) were recovered, 13 were denoted STEC encoding either stx1 and/or stx2 virulence genes and 5 (2%) STEC O26 were recovered. One super-shedder was identified shedding STEC O26 (stx1&2). Farm B: 224 animals were analyzed; eight E. coli O157 (3.5%) were recovered (seven were STEC) and 9 (4%) STEC O26 were recovered. Three super-shedders were identified, one was shedding STEC O157 (stx2) and two STEC O26 (stx2). Three encoded the adhering and effacement gene (eae) and one isolate additionally encoded the haemolysin gene (hlyA). All four super-shedders were only super-shedding once

  1. 24 CFR 3280.6 - Serial number.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 5 2013-04-01 2013-04-01 false Serial number. 3280.6 Section 3280... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS General § 3280.6 Serial number. (a) A manufactured home serial number which will identify the manufacturer and the state in which the manufactured...

  2. 24 CFR 3280.6 - Serial number.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Serial number. 3280.6 Section 3280... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS General § 3280.6 Serial number. (a) A manufactured home serial number which will identify the manufacturer and the state in which the manufactured...

  3. 24 CFR 3280.6 - Serial number.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Serial number. 3280.6 Section 3280... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS General § 3280.6 Serial number. (a) A manufactured home serial number which will identify the manufacturer and the state in which the manufactured...

  4. 24 CFR 3280.6 - Serial number.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Serial number. 3280.6 Section 3280... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS General § 3280.6 Serial number. (a) A manufactured home serial number which will identify the manufacturer and the state in which the manufactured...

  5. 24 CFR 3280.6 - Serial number.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 5 2014-04-01 2014-04-01 false Serial number. 3280.6 Section 3280... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS General § 3280.6 Serial number. (a) A manufactured home serial number which will identify the manufacturer and the state in which the manufactured...

  6. Novel Modeling of Combinatorial miRNA Targeting Identifies SNP with Potential Role in Bone Density

    PubMed Central

    Coronnello, Claudia; Hartmaier, Ryan; Arora, Arshi; Huleihel, Luai; Pandit, Kusum V.; Bais, Abha S.; Butterworth, Michael; Kaminski, Naftali; Stormo, Gary D.; Oesterreich, Steffi; Benos, Panayiotis V.

    2012-01-01

    MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task and various studies showed that existing algorithms suffer from high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in a more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy and we show that it improves the performance of two popular single miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models were considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients ComiR is expected to become an essential tool for mi
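
    The Fermi-Dirac form of such an occupancy model can be conveyed abstractly: the probability that a site is bound is a logistic function of its binding free energy relative to a chemical-potential-like term that grows with miRNA abundance. The constants and the way expression enters below are illustrative guesses, not ComiR's fitted parameterization.

        import numpy as np

        RT = 0.593  # kcal/mol at roughly 25 C

        def occupancy(delta_g, mirna_expression, mu0=-8.0, scale=1.0):
            """Fermi-Dirac-like occupancy: P(site bound) for binding energy delta_g (kcal/mol,
            more negative = stronger) at a given miRNA expression level (assumed functional form)."""
            mu = mu0 + scale * np.log(mirna_expression)  # chemical potential rises with abundance
            return 1.0 / (1.0 + np.exp((delta_g - mu) / RT))

        for expr in (1.0, 10.0, 100.0):
            print(expr, round(occupancy(-10.0, expr), 3))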

  7. Identifying missing dictionary entries with frequency-conserving context models.

    PubMed

    Williams, Jake Ryland; Clark, Eric M; Bagrow, James P; Danforth, Christopher M; Dodds, Peter Sheridan

    2015-10-01

    In an effort to better understand meaning from natural language texts, we explore methods aimed at organizing lexical objects into contexts. A number of these methods for organization fall into a family defined by word ordering. Unlike demographic or spatial partitions of data, these collocation models are of special importance for their universal applicability. While we are interested here in text and have framed our treatment appropriately, our work is potentially applicable to other areas of research (e.g., speech, genomics, and mobility patterns) where one has ordered categorical data (e.g., sounds, genes, and locations). Our approach focuses on the phrase (whether word or larger) as the primary meaning-bearing lexical unit and object of study. To do so, we employ our previously developed framework for generating word-conserving phrase-frequency data. Upon training our model with the Wiktionary, an extensive, online, collaborative, and open-source dictionary that contains over 100000 phrasal definitions, we develop highly effective filters for the identification of meaningful, missing phrase entries. With our predictions we then engage the editorial community of the Wiktionary and propose short lists of potential missing entries for definition, developing a breakthrough, lexical extraction technique and expanding our knowledge of the defined English lexicon of phrases.
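
    A toy sketch of the underlying idea, flagging frequent multi-word phrases that are absent from a dictionary; the corpus, dictionary set, and frequency threshold are placeholders, and this is not the paper's frequency-conserving framework.

        import re
        from collections import Counter

        text = ("the cat sat on the mat . the cat sat on the mat again . "
                "kick the bucket , he said , kick the bucket")          # placeholder corpus
        dictionary = {"the cat", "on the mat"}                          # placeholder known phrases

        tokens = re.findall(r"[a-z]+", text.lower())
        bigrams = Counter(zip(tokens, tokens[1:]))

        # Candidate missing entries: frequent bigrams not already defined
        for (w1, w2), count in bigrams.most_common():
            phrase = f"{w1} {w2}"
            if count >= 2 and phrase not in dictionary:
                print(phrase, count)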

  8. Identifying missing dictionary entries with frequency-conserving context models

    NASA Astrophysics Data System (ADS)

    Williams, Jake Ryland; Clark, Eric M.; Bagrow, James P.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2015-10-01

    In an effort to better understand meaning from natural language texts, we explore methods aimed at organizing lexical objects into contexts. A number of these methods for organization fall into a family defined by word ordering. Unlike demographic or spatial partitions of data, these collocation models are of special importance for their universal applicability. While we are interested here in text and have framed our treatment appropriately, our work is potentially applicable to other areas of research (e.g., speech, genomics, and mobility patterns) where one has ordered categorical data (e.g., sounds, genes, and locations). Our approach focuses on the phrase (whether word or larger) as the primary meaning-bearing lexical unit and object of study. To do so, we employ our previously developed framework for generating word-conserving phrase-frequency data. Upon training our model with the Wiktionary, an extensive, online, collaborative, and open-source dictionary that contains over 100 000 phrasal definitions, we develop highly effective filters for the identification of meaningful, missing phrase entries. With our predictions we then engage the editorial community of the Wiktionary and propose short lists of potential missing entries for definition, developing a breakthrough, lexical extraction technique and expanding our knowledge of the defined English lexicon of phrases.

  9. The MV model of the color glass condensate for a finite number of sources including Coulomb interactions

    DOE PAGES

    McLerran, Larry; Skokov, Vladimir V.

    2016-09-19

    We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from well known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.

  10. Mixing in a T-shaped micromixer at moderate Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Minakov, A. V.; Rudyak, V. Ya.; Gavrilov, A. A.; Dekterev, A. A.

    2012-09-01

    In the present work, the regimes of flow and mixing of fluids in a T-shaped micromixer are investigated systematically with the aid of numerical modeling over the range of Reynolds numbers from 1 to 1000. The flow and mixing regimes are shown to alter substantially with increasing Reynolds numbers. Five different flow regimes have been identified in total. The dependencies of the friction coefficient and mixing efficiency on the Reynolds number are obtained. A sharp increase in the mixing efficiency at the flow transition from the symmetric to the asymmetric steady regime is shown. On the other hand, the mixing efficiency drops slightly in the laminar-turbulent transition region. A substantial influence of wall slip on the flow structure in the channel and on the mixing efficiency has also been revealed.

  11. Progress in Flaps Down Flight Reynolds Number Testing Techniques at the NTF

    NASA Technical Reports Server (NTRS)

    Payne, Frank; Bosetti, Cris; Gatlin, Greg; Tuttle, Dave; Griffiths, Bob

    2007-01-01

    A series of NASA/Boeing cooperative low speed wind tunnel tests was conducted in the National Transonic Facility (NTF) between 2003 and 2004 using a semi-span high lift model representative of the 777-200 aircraft. The objective of this work was to develop the capability to acquire high quality, low speed (flaps down) wind tunnel data at up to flight Reynolds numbers in a facility originally optimized for high speed full span models. In the course of testing, a number of facility and procedural improvements were identified and implemented. The impact of these improvements on key testing metrics (data quality, productivity, and so forth) was significant, and is discussed here, together with the relevance of these metrics as applied to cryogenic wind tunnel testing in general. Details of the improvements at the NTF are discussed in AIAA-2006-0508 (Recent Improvements in Semi-span Testing at the National Transonic Facility). The development work at the NTF culminated with validation testing of a 787-8 semi-span model at full flight Reynolds number in the first quarter of 2006.

  12. A Coarse-Grained Biophysical Model of E. coli and Its Application to Perturbation of the rRNA Operon Copy Number

    PubMed Central

    Tadmor, Arbel D.; Tlusty, Tsvi

    2008-01-01

    We propose a biophysical model of Escherichia coli that predicts growth rate and an effective cellular composition from an effective, coarse-grained representation of its genome. We assume that E. coli is in a state of balanced exponential steady-state growth, growing in a temporally and spatially constant environment, rich in resources. We apply this model to a series of past measurements, where the growth rate and rRNA-to-protein ratio have been measured for seven E. coli strains with an rRNA operon copy number ranging from one to seven (the wild-type copy number). These experiments show that growth rate markedly decreases for strains with fewer than six copies. Using the model, we were able to reproduce these measurements. We show that the model that best fits these data suggests that the volume fraction of macromolecules inside E. coli is not fixed when the rRNA operon copy number is varied. Moreover, the model predicts that increasing the copy number beyond seven results in a cytoplasm densely packed with ribosomes and proteins. Assuming that under such overcrowded conditions prolonged diffusion times tend to weaken binding affinities, the model predicts that growth rate will not increase substantially beyond the wild-type growth rate, as indicated by other experiments. Our model therefore suggests that changing the rRNA operon copy number of wild-type E. coli cells growing in a constant rich environment does not substantially increase their growth rate. Other observations regarding strains with an altered rRNA operon copy number, such as nucleoid compaction and the rRNA operon feedback response, appear to be qualitatively consistent with this model. In addition, we discuss possible design principles suggested by the model and propose further experiments to test its validity. PMID:18437222

  13. Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Larsson, Johan

    2013-01-01

    A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.

  14. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  15. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
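
    As a rough illustration of the setting studied here, a random-intercept logistic model fitted to data with only a handful of clusters, the sketch below simulates clustered binary outcomes and fits them with statsmodels' Bayesian mixed GLM. The variational fit is a stand-in for the adaptive-quadrature, penalized quasi-likelihood, and MCMC procedures compared in the study, and the simulation parameters are arbitrary.

        import numpy as np
        import pandas as pd
        from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

        rng = np.random.default_rng(1)
        n_clusters, n_per = 5, 50                          # deliberately few clusters
        cluster = np.repeat(np.arange(n_clusters), n_per)
        u = rng.normal(0.0, 1.0, n_clusters)               # true random intercepts
        x = rng.normal(size=n_clusters * n_per)
        p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + u[cluster])))
        df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "cluster": cluster})

        # Random-intercept logistic regression; "0 + C(cluster)" defines the variance component.
        model = BinomialBayesMixedGLM.from_formula("y ~ x", {"cluster": "0 + C(cluster)"}, df)
        result = model.fit_vb()
        print(result.summary())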

  16. Analysis of copy number variations in Holstein cows identify potential mechanisms contributing to differences in residual feed intake.

    PubMed

    Hou, Yali; Bickhart, Derek M; Chung, Hoyoung; Hutchison, Jana L; Norman, H Duane; Connor, Erin E; Liu, George E

    2012-11-01

    Genomic structural variation is an important and abundant source of genetic and phenotypic variation. In this study, we performed an initial analysis of copy number variations (CNVs) using BovineHD SNP genotyping data from 147 Holstein cows identified as having high or low feed efficiency as estimated by residual feed intake (RFI). We detected 443 candidate CNV regions (CNVRs) that represent 18.4 Mb (0.6 %) of the genome. To investigate the functional impacts of CNVs, we created two groups of 30 individual animals with extremely low or high estimated breeding values (EBVs) for RFI, and referred to these groups as low intake (LI; more efficient) or high intake (HI; less efficient), respectively. We identified 240 (~9.0 Mb) and 274 (~10.2 Mb) CNVRs from LI and HI groups, respectively. Approximately 30-40 % of the CNVRs were specific to the LI group or HI group of animals. The 240 LI CNVRs overlapped with 137 Ensembl genes. Network analyses indicated that the LI-specific genes were predominantly enriched for those functioning in the inflammatory response and immunity. By contrast, the 274 HI CNVRs contained 177 Ensembl genes. Network analyses indicated that the HI-specific genes were particularly involved in the cell cycle, and organ and bone development. These results relate CNVs to two key variables, namely immune response and organ and bone development. The data indicate that greater feed efficiency relates more closely to immune response, whereas cattle with reduced feed efficiency may have a greater capacity for organ and bone development.

  17. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  18. Zero Prandtl-number rotating magnetoconvection

    NASA Astrophysics Data System (ADS)

    Ghosh, Manojit; Pal, Pinaki

    2017-12-01

    We investigate instabilities and chaos near the onset of Rayleigh-Bénard convection of electrically conducting fluids with free-slip, perfectly electrically and thermally conducting boundary conditions in the presence of uniform rotation about the vertical axis and a horizontal external magnetic field by considering the zero Prandtl-number limit (Pr → 0). Direct numerical simulations (DNSs) and low-dimensional modeling of the system are done for the investigation. Values of the Chandrasekhar number (Q) and the Taylor number (Ta) are varied in the range 0 < Q, Ta ≤ 50. Depending on the values of the parameters in the chosen range and the choice of initial conditions, the onset of convection is found to be either periodic or chaotic. Interestingly, it is found that chaos at the onset can occur through four different routes, namely, homoclinic, intermittent, period doubling, and quasiperiodic routes. Homoclinic and intermittent routes to chaos at the onset occur in the presence of a weak magnetic field (Q < 2), while the period doubling route is observed for a relatively stronger magnetic field (Q ≥ 2) for one set of initial conditions. On the other hand, the quasiperiodic route to chaos at the onset is observed for another set of initial conditions. However, the rotation rate (value of Ta) also plays an important role in determining the nature of convection at the onset. Analysis of the system simultaneously with DNSs and low-dimensional modeling helps us to clearly identify different flow regimes concentrated near the onset of convection and understand their origins. The periodic or chaotic convection at the onset is found to be connected with rich bifurcation structures involving subcritical pitchfork, imperfect pitchfork, supercritical Hopf, imperfect homoclinic gluing, and Neimark-Sacker bifurcations.

  19. Text categorization models for identifying unproven cancer treatments on the web.

    PubMed

    Aphinyanaphongs, Yin; Aliferis, Constantin

    2007-01-01

    The nature of the internet as a non-peer-reviewed (and largely unregulated) publication medium has allowed widespread promotion of inaccurate and unproven medical claims on an unprecedented scale. Patients with conditions that are not currently fully treatable are particularly susceptible to unproven and dangerous promises about miracle treatments. In extreme cases, fatal adverse outcomes have been documented. Most commonly, the costs are financial and psychological, along with delayed application of imperfect but proven scientific modalities. To help protect patients, who may be desperately ill and thus prone to exploitation, we explored the use of machine learning techniques to identify web pages that make unproven claims. This feasibility study shows that the resulting models can identify web pages that make unproven claims in a fully automatic manner, and substantially better than previous web tools and state-of-the-art search engine technology.
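
    The underlying pipeline, turning page text into features and training a classifier to separate pages that make unproven claims from those that do not, can be sketched generically; the snippet below uses TF-IDF features and a linear model with invented example pages, and is not the authors' exact feature set or learner.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical labeled pages: 1 = makes unproven treatment claims, 0 = does not.
        pages = ["miracle cure guaranteed to eliminate tumors in days",
                 "randomized controlled trial of adjuvant chemotherapy outcomes",
                 "secret natural remedy doctors do not want you to know",
                 "systematic review of survival after surgical resection"]
        labels = [1, 0, 1, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(pages, labels)
        print(clf.predict(["this herbal protocol cures cancer without side effects"]))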

  20. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Number

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.

    2004-01-01

    Present simulation of turbulent flows involving shock wave/boundary layer interaction invariably overestimates heat flux by almost a factor of two. One possible reason for such a performance is a result of the fact that the turbulence models employed make use of Morkovin's hypothesis. This hypothesis is valid for non-hypersonic Mach numbers and moderate rates of heat transfer. At hypersonic Mach numbers, high rates of heat transfer exist in regions where shock wave/boundary layer interactions are important. As a result, one should not expect traditional turbulence models to yield accurate results. The goal of this investigation is to explore the role of a variable Prandtl number formulation in predicting heat flux in flows dominated by strong shock wave/boundary layer interactions. The intended applications involve external flows in the absence of combustion such as those encountered in supersonic inlets. This can be achieved by adding equations for the temperature variance and its dissipation rate. Such equations can be derived from the exact Navier-Stokes equations. Traditionally, modeled equations are based on the low speed energy equation where the pressure gradient term and the term responsible for energy dissipation are ignored. It is clear that such assumptions are not valid for hypersonic flows. The approach used here is based on the procedure used in deriving the k-zeta model, in which the exact equations that governed k, the variance of velocity, and zeta, the variance of vorticity, were derived and modeled. For the variable turbulent Prandtl number, the exact equations that govern the temperature variance and its dissipation rate are derived and modeled term by term. The resulting set of equations are free of damping and wall functions and are coordinate-system independent. Moreover, modeled correlations are tensorially consistent and invariant under Galilean transformation. The final set of equations will be given in the paper.

  1. Aerodynamic characteristics of three helicopter rotor airfoil sections at Reynolds number from model scale to full scale at Mach numbers from 0.35 to 0.90. [conducted in Langley 6 by 28 inch transonic tunnel

    NASA Technical Reports Server (NTRS)

    Noonan, K. W.; Bingham, G. J.

    1980-01-01

    An investigation was conducted in the Langley 6 by 28 inch transonic tunnel to determine the two dimensional aerodynamic characteristics of three helicopter rotor airfoils at Reynolds numbers from typical model scale to full scale at Mach numbers from about 0.35 to 0.90. The model scale Reynolds numbers ranged from about 700,000 to 1,500,000 and the full scale Reynolds numbers ranged from about 3,000,000 to 6,600,000. The airfoils tested were the NACA 0012 (0 deg Tab), the SC 1095 R8, and the SC 1095. Both the SC 1095 and the SC 1095 R8 airfoils had trailing edge tabs. The results of this investigation indicate that Reynolds number effects can be significant on the maximum normal force coefficient and all drag related parameters; namely, drag at zero normal force, maximum normal force drag ratio, and drag divergence Mach number. The increments in these parameters at a given Mach number owing to the model scale to full scale Reynolds number change are different for each of the airfoils.

  2. Identifying 'unhealthy' food advertising on television: a case study applying the UK Nutrient Profile model.

    PubMed

    Jenkin, Gabrielle; Wilson, Nick; Hermanson, Nicole

    2009-05-01

    To evaluate the feasibility of the UK Nutrient Profile (NP) model for identifying 'unhealthy' food advertisements using a case study of New Zealand television advertisements. Four weeks of weekday television from 15.30 hours to 18.30 hours was videotaped from a state-owned (free-to-air) television channel popular with children. Food advertisements were identified and their nutritional information collected in accordance with the requirements of the NP model. Nutrient information was obtained from a variety of sources including food labels, company websites and a national nutritional database. From the 60 h sample of weekday afternoon television, there were 1893 advertisements, of which 483 were for food products or retailers. After applying the NP model, 66 % of these were classified as advertising high-fat, high-salt and high-sugar (HFSS) foods; 28 % were classified as advertising non-HFSS foods; and the remaining 2 % were unclassifiable. More than half (53 %) of the HFSS food advertisements were for 'mixed meal' items promoted by major fast-food franchises. The advertising of non-HFSS food was sparse, covering a narrow range of food groups, with no advertisements for fresh fruit or vegetables. Despite the NP model having some design limitations in classifying real-world televised food advertisements, it was easily applied to this sample and could clearly identify HFSS products. Policy makers who do not wish to completely restrict food advertising to children outright should consider using this NP model for regulating food advertising.
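
    For orientation, the NP model scores a product per 100 g as total 'A' points (energy, saturated fat, total sugars, sodium) minus total 'C' points (fruit/vegetable/nut content, fibre, protein), and the commonly cited cut-offs treat foods scoring 4 or more (drinks 1 or more) as HFSS. The sketch below takes the component points as given and applies those cut-offs; the official per-nutrient point tables and the protein-capping rule are omitted, and the example values are illustrative only.

        def np_score(a_points, c_points):
            """Overall nutrient-profile score: total 'A' points minus total 'C' points.
            a_points: points for energy, saturated fat, sugars, sodium (0-10 each).
            c_points: points for fruit/veg/nuts, fibre, protein (0-5 each)."""
            return sum(a_points.values()) - sum(c_points.values())

        def is_hfss(score, is_drink=False):
            # Commonly cited cut-offs: foods scoring >= 4 and drinks scoring >= 1
            # are treated as 'less healthy' (HFSS).
            return score >= (1 if is_drink else 4)

        # Illustrative only: a sugary breakfast cereal with modest fibre.
        score = np_score({"energy": 4, "sat_fat": 1, "sugars": 7, "sodium": 2},
                         {"fvn": 0, "fibre": 2, "protein": 1})
        print(score, is_hfss(score))   # 11 True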

  3. 9 CFR 590.150 - Official plant numbers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Official plant numbers. 590.150... of Service § 590.150 Official plant numbers. An official plant number shall be assigned to each plant granted inspection service. Such plant number shall be used to identify all containers of inspected...

  4. 9 CFR 590.150 - Official plant numbers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Official plant numbers. 590.150... of Service § 590.150 Official plant numbers. An official plant number shall be assigned to each plant granted inspection service. Such plant number shall be used to identify all containers of inspected...

  5. 9 CFR 590.150 - Official plant numbers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Official plant numbers. 590.150... of Service § 590.150 Official plant numbers. An official plant number shall be assigned to each plant granted inspection service. Such plant number shall be used to identify all containers of inspected...

  6. 9 CFR 590.150 - Official plant numbers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Official plant numbers. 590.150... of Service § 590.150 Official plant numbers. An official plant number shall be assigned to each plant granted inspection service. Such plant number shall be used to identify all containers of inspected...

  7. Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia

    Treesearch

    Nathan R. Beane; James S. Rentch; Thomas M. Schuler

    2013-01-01

    Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...

  8. Comparison of formula and number-right scoring in undergraduate medical training: a Rasch model analysis.

    PubMed

    Cecilio-Fernandes, Dario; Medema, Harro; Collares, Carlos Fernando; Schuwirth, Lambert; Cohen-Schotanus, Janke; Tio, René A

    2017-11-09

    Progress testing is an assessment tool used to periodically assess all students at the end-of-curriculum level. Because students cannot know everything, it is important that they recognize their lack of knowledge. For that reason, the formula-scoring method has usually been used. However, where partial knowledge needs to be taken into account, the number-right scoring method is used. Research comparing both methods has yielded conflicting results. As far as we know, in all these studies, Classical Test Theory or Generalizability Theory was used to analyze the data. In contrast to these studies, we will explore the use of the Rasch model to compare both methods. A 2 × 2 crossover design was used in a study where 298 students from four medical schools participated. A sample of 200 previously used questions from the progress tests was selected. The data were analyzed using the Rasch model, which provides fit parameters, reliability coefficients, and response option analysis. The fit parameters were in the optimal interval ranging from 0.50 to 1.50, and the means were around 1.00. The person and item reliability coefficients were higher in the number-right condition than in the formula-scoring condition. The response option analysis showed that the majority of dysfunctional items emerged in the formula-scoring condition. The findings of this study support the use of number-right scoring over formula scoring. Rasch model analyses showed that tests with number-right scoring have better psychometric properties than formula scoring. However, choosing the appropriate scoring method should depend not only on psychometric properties but also on self-directed test-taking strategies and metacognitive skills.
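
    For context on the two scoring rules and the measurement model: number-right scoring simply counts correct answers, the usual formula score subtracts a guessing correction of wrong/(k - 1) for k-option items, and the Rasch model gives the probability of a correct response from the difference between person ability and item difficulty. The sketch below encodes these standard definitions; it is not the analysis pipeline used in the study.

        import numpy as np

        def number_right(responses):
            """responses: 1 = correct, 0 = wrong, None = omitted."""
            return sum(r == 1 for r in responses)

        def formula_score(responses, n_options=4):
            """Classical correction for guessing: right - wrong / (k - 1); omits are ignored."""
            right = sum(r == 1 for r in responses)
            wrong = sum(r == 0 for r in responses)
            return right - wrong / (n_options - 1)

        def rasch_p(theta, b):
            """Rasch model: P(correct) given person ability theta and item difficulty b."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        resp = [1, 1, 0, None, 1, 0]   # three right, two wrong, one omitted
        print(number_right(resp), formula_score(resp))
        print(rasch_p(theta=0.5, b=-0.2))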

  9. Representational change and strategy use in children's number line estimation during the first years of primary school.

    PubMed

    White, Sonia L J; Szűcs, Dénes

    2012-01-04

    The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
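
    The logarithmic-versus-linear comparison in such studies amounts to regressing children's estimates on the target number and on its logarithm and comparing fit. A minimal sketch with made-up estimates over the same 0-20 range is shown below; the data do not come from the study.

        import numpy as np

        targets = np.array([2, 3, 5, 7, 9, 11, 13, 15, 17, 19], dtype=float)
        estimates = np.array([4, 5, 8, 9, 10, 12, 13, 15, 16, 18], dtype=float)  # hypothetical

        def r_squared(x, y):
            """R^2 of a simple least-squares fit of y on x."""
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            return 1 - resid.var() / y.var()

        print("linear model R^2:", r_squared(targets, estimates))
        print("log model    R^2:", r_squared(np.log(targets), estimates))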

  10. Representational change and strategy use in children's number line estimation during the first years of primary school

    PubMed Central

    2012-01-01

    Background The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Results Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice. PMID:22217191

  11. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

    This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary ones. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to those of the reference one, rather than on the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
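
    The forecasting step can be illustrated with statsmodels' ARIMA: the integrated (differencing) part renders a drifting plant variable stationary and the fitted AR/MA terms extrapolate it a few steps ahead. In the full method these forecasts feed the neural identifier, which is not reproduced here; the series and model order below are synthetic and illustrative.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # Synthetic non-stationary "plant variable": a random walk with drift.
        series = np.cumsum(0.05 + rng.normal(0.0, 0.2, 200))

        # ARIMA(1, 1, 1): first differencing makes the series stationary before AR/MA fitting.
        fit = ARIMA(series, order=(1, 1, 1)).fit()
        forecast = fit.forecast(steps=10)   # these values would be fed to the transient classifier
        print(forecast)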

  12. Low Reynolds number kappa-epsilon and empirical transition models for oscillatory pipe flow and heat transfer. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bauer, Christopher

    1993-01-01

    Stirling engine heat exchangers are shell-and-tube type with oscillatory flow (zero-mean velocity) for the inner fluid. This heat transfer process involves laminar-transition-turbulent flow motions under oscillatory flow conditions. A low Reynolds number kappa-epsilon model (Lam-Bremhorst form) was utilized in the present study to simulate fluid flow and heat transfer in a circular tube. An empirical transition model was used to activate the low Reynolds number kappa-epsilon model at the appropriate time within the cycle for a given axial location within the tube. The computational results were compared with experimental flow and heat transfer data for: (1) velocity profiles, (2) kinetic energy of turbulence, (3) skin friction factor, (4) temperature profiles, and (5) wall heat flux. The experimental data were obtained for flow in a tube (38 mm diameter and 60 diameters long), with the maximum Reynolds number based on velocity being Re_max = 11840, and a dimensionless frequency (Valensi number) of Va = 80.2, at three axial locations X/D = 16, 30 and 44. The agreement between the computations and the experiment is excellent in the laminar portion of the cycle and good in the turbulent portion. Moreover, the location of transition was predicted accurately. The low Reynolds number kappa-epsilon model, together with the empirical transition model, is proposed herein to generate wall heat flux values at operating parameters different from the experimental conditions. Those computational data can be used for testing the much simpler and less accurate one-dimensional models utilized in 1-D Stirling engine design codes.

  13. Novel target for high-risk neuroblastoma identified in pre-clinical research | Center for Cancer Research

    Cancer.gov

    Pre-clinical research by investigators at the Center for Cancer Research and their colleagues has identified a number of novel epigenetic targets for high-risk neuroblastoma and validated a promising new targeted inhibitor in pre-clinical models.

  14. Studies of the Effects of Perfluorocarbon Emulsions on Platelet Number and Function in Models of Critical Battlefield Injury

    DTIC Science & Technology

    2014-01-01

    [Extraction residue from the report cover page and a figure; recoverable details: Award Number W81XWH-13-1-0017; title: Studies of the Effects of Perfluorocarbon Emulsions on Platelet Number and Function in Models of Critical Battlefield Injury; figure: fibrinogen assay after intravenous perfluorocarbon (Oxygent) infusion compared with Hespan and control.]

  15. MODEL-OBSERVATION COMPARISONS OF ELECTRON NUMBER DENSITIES IN THE COMA OF 67P/CHURYUMOV–GERASIMENKO DURING 2015 JANUARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigren, E.; Edberg, N. J. T.; Eriksson, A. I.

    2016-09-01

    During 2015 January 9–11, at a heliocentric distance of ∼2.58–2.57 au, the ESA Rosetta spacecraft resided at a cometocentric distance of ∼28 km from the nucleus of comet 67P/Churyumov–Gerasimenko, sweeping the terminator at northern latitudes of 43°N–58°N. Measurements by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/Comet Pressure Sensor (ROSINA/COPS) provided neutral number densities. We have computed modeled electron number densities using the neutral number densities as input into a Field Free Chemistry Free model, assuming H2O dominance and ion-electron pair formation by photoionization only. A good agreement (typically within 25%) is found between the modeled electron number densities and those observed from measurements by the Mutual Impedance Probe (RPC/MIP) and the Langmuir Probe (RPC/LAP), both being subsystems of the Rosetta Plasma Consortium. This indicates that ions along the nucleus-spacecraft line were strongly coupled to the neutrals, moving radially outward with about the same speed. Such a statement, we propose, can be further tested by observations of H3O+/H2O+ number density ratios and associated comparisons with model results.
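
    A common simple estimate consistent with such field-free, chemistry-free models takes ions and electrons to be produced by photoionization along the radial outflow and carried outward at the neutral speed, giving n_e(r) ≈ (ν/u)(r - r_N) n_n(r). The sketch below evaluates that relation; the ionization frequency, outflow speed, and neutral density are illustrative stand-ins, not the mission values.

        def electron_density(r, n_neutral, nu_ion, u, r_nucleus=2.0e3):
            """Field-free, chemistry-free estimate: electrons produced by photoionization
            along the outflow and moving radially with the neutrals,
            n_e(r) ~ (nu/u) * (r - r_N) * n_n(r). SI units throughout."""
            return (nu_ion / u) * (r - r_nucleus) * n_neutral

        # Illustrative inputs only: neutral density at the spacecraft, a water
        # photoionization frequency scaled from 1 au to ~2.6 au, and a 600 m/s outflow.
        r_sc = 28e3                 # cometocentric distance [m]
        n_n = 3e13                  # neutral number density at r_sc [m^-3]
        nu = 5e-7 / 2.6 ** 2        # photoionization frequency [s^-1]
        print(electron_density(r_sc, n_n, nu, u=600.0), "electrons per m^3")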

  16. Rheumatoid arthritis: identifying and characterising polymorphisms using rat models

    PubMed Central

    2016-01-01

    ABSTRACT Rheumatoid arthritis is a chronic inflammatory joint disorder characterised by erosive inflammation of the articular cartilage and by destruction of the synovial joints. It is regulated by both genetic and environmental factors, and, currently, there is no preventative treatment or cure for this disease. Genome-wide association studies have identified ∼100 new loci associated with rheumatoid arthritis, in addition to the already known locus within the major histocompatibility complex II region. However, together, these loci account for only a modest fraction of the genetic variance associated with this disease and very little is known about the pathogenic roles of most of the risk loci identified. Here, we discuss how rat models of rheumatoid arthritis are being used to detect quantitative trait loci that regulate different arthritic traits by genetic linkage analysis and to positionally clone the underlying causative genes using congenic strains. By isolating specific loci on a fixed genetic background, congenic strains overcome the challenges of genetic heterogeneity and environmental interactions associated with human studies. Most importantly, congenic strains allow functional experimental studies be performed to investigate the pathological consequences of natural genetic polymorphisms, as illustrated by the discovery of several major disease genes that contribute to arthritis in rats. We discuss how these advances have provided new biological insights into arthritis in humans. PMID:27736747

  17. An Australasian model license reassessment procedure for identifying potentially unsafe drivers.

    PubMed

    Fildes, Brian N; Charlton, Judith; Pronk, Nicola; Langford, Jim; Oxley, Jennie; Koppel, Sjaanie

    2008-08-01

    Most licensing jurisdictions in Australia currently employ age-based assessment programs as a means to manage older driver safety, yet available evidence suggests that these programs have no safety benefits. This paper describes a community referral-based model license reassessment procedure for identifying and assessing potentially unsafe drivers. While the model was primarily developed for assessing older driver fitness to drive, it could be applicable to other forms of driver impairment associated with increased crash risk. It includes a three-tier process of assessment, involving the use of validated and relevant assessment instruments. A case is argued that this process is a more systematic, transparent and effective process for managing older driver safety and thus more likely to be widely acceptable to the target community and licensing authorities than age-based practices.

  18. Periodic matrix population models: growth rate, basic reproduction number, and entropy.

    PubMed

    Bacaër, Nicolas

    2009-10-01

    This article considers three different aspects of periodic matrix population models. First, a formula for the sensitivity analysis of the growth rate lambda is obtained that is simpler than the one obtained by Caswell and Trevisan. Secondly, the formula for the basic reproduction number R0 in a constant environment is generalized to the case of a periodic environment. Some inequalities between lambda and R0 proved by Cushing and Zhou are also generalized to the periodic case. Finally, we add some remarks on Demetrius' notion of evolutionary entropy H and its relationship to the growth rate lambda in the periodic case.
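
    In a periodic matrix model, the per-cycle growth rate lambda is the dominant eigenvalue of the product of the seasonal projection matrices taken around the cycle. The sketch below computes it for a hypothetical two-stage, two-season model; the matrices are arbitrary, and the sensitivity and R0 formulas derived in the paper are not reproduced.

        import numpy as np

        # Hypothetical two-stage projection matrices for the two seasons of a cycle.
        A_spring = np.array([[0.2, 1.5],
                             [0.4, 0.7]])
        A_autumn = np.array([[0.3, 0.8],
                             [0.5, 0.6]])

        B = A_autumn @ A_spring                  # per-cycle projection: spring first, then autumn
        lam = max(abs(np.linalg.eigvals(B)))     # dominant eigenvalue = per-cycle growth rate
        print("per-cycle growth rate:", lam)
        print("average per-season rate:", lam ** 0.5)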

  19. Biodiversity and Climate Modeling Workshop Series: Identifying gaps and needs for improving large-scale biodiversity models

    NASA Astrophysics Data System (ADS)

    Weiskopf, S. R.; Myers, B.; Beard, T. D.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.

    2017-12-01

    At the global scale, well-accepted global circulation models and agreed-upon scenarios for future climate from the Intergovernmental Panel on Climate Change (IPCC) are available. In contrast, biodiversity modeling at the global scale lacks analogous tools. While there is great interest in development of similar bodies and efforts for international monitoring and modelling of biodiversity at the global scale, equivalent modelling tools are in their infancy. This lack of global biodiversity models compared to the extensive array of general circulation models provides a unique opportunity to bring together climate, ecosystem, and biodiversity modeling experts to promote development of integrated approaches in modeling global biodiversity. Improved models are needed to understand how we are progressing towards the Aichi Biodiversity Targets, many of which are not on track to meet the 2020 goal, threatening global biodiversity conservation, monitoring, and sustainable use. We brought together biodiversity, climate, and remote sensing experts to try to 1) identify lessons learned from the climate community that can be used to improve global biodiversity models; 2) explore how NASA and other remote sensing products could be better integrated into global biodiversity models and 3) advance global biodiversity modeling, prediction, and forecasting to inform the Aichi Biodiversity Targets, the 2030 Sustainable Development Goals, and the Intergovernmental Platform on Biodiversity and Ecosystem Services Global Assessment of Biodiversity and Ecosystem Services. The 1st In-Person meeting focused on determining a roadmap for effective assessment of biodiversity model projections and forecasts by 2030 while integrating and assimilating remote sensing data and applying lessons learned, when appropriate, from climate modeling. Here, we present the outcomes and lessons learned from our first E-discussion and in-person meeting and discuss the next steps for future meetings.

  20. Modeling the magnetic properties of lanthanide complexes: relationship of the REC parameters with Pauling electronegativity and coordination number.

    PubMed

    Baldoví, José J; Gaita-Ariño, Alejandro; Coronado, Eugenio

    2015-07-28

    In a previous study, we introduced the Radial Effective Charge (REC) model to study the magnetic properties of lanthanide single ion magnets. Now, we perform an empirical determination of the effective charges (Zi) and radial displacements (Dr) of this model using spectroscopic data. This systematic study allows us to relate Dr and Zi with chemical factors such as the coordination number and the electronegativities of the metal and the donor atoms. This strategy is being used to drastically reduce the number of free parameters in the modeling of the magnetic and spectroscopic properties of f-element complexes.

  1. Model of areas for identifying risks influencing the compliance of technological processes and products

    NASA Astrophysics Data System (ADS)

    Misztal, A.; Belu, N.

    2016-08-01

    Operation of every company is associated with the risk of interference with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as with the environment in which it operates. From the point of view of ensuring the compliance of specific technological processes and, consequently, the conformity of products with requirements, it is important to identify these threats and eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of the areas for identifying risks affecting the compliance of processes and products, which is based on multiregional targeted monitoring of typical places of interference and on risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.

  2. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

    Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor injury and property damage only (PDO) crashes, challenges of accounting for crash severity in the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to the society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of traditional NB model in dealing
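
    The central modeling move, fitting an upper quantile of equivalent-PDO crash frequency rather than the mean, can be sketched with statsmodels' quantile regression. Segments whose observed EPDO count lies above the fitted upper-quantile curve become hot-spot candidates; the covariates, simulated data, and the 0.9 quantile below are hypothetical choices, and the EB/negative-binomial comparison from the paper is not shown.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 300
        df = pd.DataFrame({"aadt": rng.uniform(2, 40, n),        # traffic volume (thousands of veh/day)
                           "length": rng.uniform(0.2, 3.0, n)})  # segment length (km)
        # Hypothetical equivalent-PDO crash frequencies with a heavy right skew.
        df["epdo"] = rng.gamma(shape=1.2, scale=1.5 * df["aadt"] * df["length"] / 10)

        fit = smf.quantreg("epdo ~ aadt + length", df).fit(q=0.9)   # model the 90th percentile
        df["q90"] = fit.predict(df)
        print(df[df["epdo"] > df["q90"]].head())                    # candidate black spots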

  3. MULTIVARIATE RECEPTOR MODELS AND MODEL UNCERTAINTY. (R825173)

    EPA Science Inventory

    Abstract

    Estimation of the number of major pollution sources, the source composition profiles, and the source contributions are the main interests in multivariate receptor modeling. Due to lack of identifiability of the receptor model, however, the estimation cannot be...

  4. Modeling secondary accidents identified by traffic shock waves.

    PubMed

    Junhua, Wang; Boya, Liu; Lanfang, Zhang; Ragland, David R

    2016-02-01

    The high potential for occurrence and the negative consequences of secondary accidents make them an issue of great concern affecting freeway safety. Using accident records from a three-year period together with California interstate freeway loop data, a dynamic method for more accurate classification based on the traffic shock wave detecting method was used to identify secondary accidents. Spatio-temporal gaps between the primary and secondary accidents were shown to fit a mixture of Weibull and normal distributions. A logistic regression model was developed to investigate major factors contributing to secondary accident occurrence. Traffic shock wave speed and volume at the occurrence of a primary accident were explicitly considered in the model, as a secondary accident is defined as an accident that occurs within the spatio-temporal impact scope of the primary accident. Results show that the shock waves originating in the wake of a primary accident have a more significant impact on the likelihood of a secondary accident occurrence than the effects of traffic volume. Primary accidents with long durations can significantly increase the possibility of secondary accidents. Unsafe speed and weather are other factors contributing to secondary crash occurrence. It is strongly suggested that when police or rescue personnel arrive at the scene of an accident, they should not suddenly block, decrease, or unblock the traffic flow, but instead endeavor to control traffic in a smooth and controlled manner. It is also important to reduce accident processing time to reduce the risk of a secondary accident.
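
    The occurrence model described here is a binary logistic regression with shock-wave speed, traffic volume, and primary-accident duration among the covariates. A minimal sketch with simulated data follows; the variable names, units, and effect sizes are invented for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 500
        df = pd.DataFrame({"shock_speed": rng.uniform(0, 20, n),   # backward-forming wave speed (km/h)
                           "volume": rng.uniform(500, 2000, n),    # flow at the primary accident (veh/h)
                           "duration": rng.uniform(5, 120, n)})    # primary-accident duration (min)
        logit = -4 + 0.15 * df["shock_speed"] + 0.0005 * df["volume"] + 0.02 * df["duration"]
        df["secondary"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        fit = smf.logit("secondary ~ shock_speed + volume + duration", df).fit(disp=0)
        print(fit.params)   # larger coefficients indicate stronger contributions to secondary-crash risk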

  5. Identifying areas of deforestation risk for REDD+ using a species modeling tool

    PubMed Central

    Riveros, Juan Carlos; Forrest, Jessica L

    2014-01-01

    Background To implement the REDD+ mechanism (Reducing Emissions from Deforestation and Forest Degradation), countries need to prioritize areas to combat future deforestation CO2 emissions, identify the drivers of deforestation around which to develop mitigation actions, and quantify and value carbon for financial mechanisms. Each comes with its own methodological challenges, and existing approaches and tools to do so can be costly to implement or require considerable technical knowledge and skill. Here, we present an approach utilizing a machine learning technique known as Maximum Entropy Modeling (Maxent) to identify areas at high deforestation risk in the study area in Madre de Dios, Peru under a business-as-usual scenario in which historic deforestation rates continue. We link deforestation risk area to carbon density values to estimate future carbon emissions. We quantified area deforested and carbon emissions between 2000 and 2009 as the basis of the scenario. Results We observed over 80,000 ha of forest cover lost from 2000-2009 (0.21% annual loss), representing over 39 million Mg CO2. The rate increased rapidly following the enhancement of the Inter Oceanic Highway in 2005. Accessibility and distance to previous deforestation were strong predictors of deforestation risk, while land use designation was less important. The model performed consistently well (AUC > 0.9), significantly better than random when we compared predicted deforestation risk to observed. If past deforestation rates continue, we estimate that 132,865 ha of forest could be lost by the year 2020, representing over 55 million Mg CO2. Conclusions Maxent provided a reliable method for identifying areas at high risk of deforestation and the major explanatory variables that could draw attention for mitigation action planning under REDD+. The tool is accessible, replicable and easy to use; all necessary for producing good risk estimates and adapt models after potential landscape change. We

  6. Identifying areas of deforestation risk for REDD+ using a species modeling tool.

    PubMed

    Aguilar-Amuchastegui, Naikoa; Riveros, Juan Carlos; Forrest, Jessica L

    2014-01-01

    To implement the REDD+ mechanism (Reducing Emissions from Deforestation and Forest Degradation), countries need to prioritize areas to combat future deforestation CO2 emissions, identify the drivers of deforestation around which to develop mitigation actions, and quantify and value carbon for financial mechanisms. Each comes with its own methodological challenges, and existing approaches and tools to do so can be costly to implement or require considerable technical knowledge and skill. Here, we present an approach utilizing a machine learning technique known as Maximum Entropy Modeling (Maxent) to identify areas at high deforestation risk in the study area in Madre de Dios, Peru under a business-as-usual scenario in which historic deforestation rates continue. We link deforestation risk area to carbon density values to estimate future carbon emissions. We quantified area deforested and carbon emissions between 2000 and 2009 as the basis of the scenario. We observed over 80,000 ha of forest cover lost from 2000-2009 (0.21% annual loss), representing over 39 million Mg CO2. The rate increased rapidly following the enhancement of the Inter Oceanic Highway in 2005. Accessibility and distance to previous deforestation were strong predictors of deforestation risk, while land use designation was less important. The model performed consistently well (AUC > 0.9), significantly better than random when we compared predicted deforestation risk to observed. If past deforestation rates continue, we estimate that 132,865 ha of forest could be lost by the year 2020, representing over 55 million Mg CO2. Maxent provided a reliable method for identifying areas at high risk of deforestation and the major explanatory variables that could draw attention for mitigation action planning under REDD+. The tool is accessible, replicable and easy to use; all necessary for producing good risk estimates and adapt models after potential landscape change. We propose this approach for developing
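
    Maxent itself is typically run through the Java application or the R maxnet package. As a rough stand-in for the same presence-versus-background idea, the sketch below fits a penalized logistic regression to hypothetical deforested cells and random background cells and scores new cells for relative risk; it is not the authors' implementation, and the two predictors are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(11)

        def sample_cells(n):
            # Hypothetical predictors per cell: distance to road (km), distance to past deforestation (km).
            return np.column_stack([rng.uniform(0, 50, n), rng.uniform(0, 30, n)])

        background = sample_cells(2000)               # random cells across the study area
        presence = sample_cells(400) * [0.3, 0.2]     # deforested cells cluster near roads and old clearings
        X = np.vstack([presence, background])
        y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

        model = LogisticRegression(C=0.5).fit(X, y)   # L2 penalty as a crude analogue of Maxent regularization
        print(model.predict_proba(sample_cells(5))[:, 1])   # relative deforestation-risk scores for new cells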

  7. Identifying Heat Waves in Florida: Considerations of Missing Weather Data.

    PubMed

    Leary, Emily; Young, Linda J; DuClos, Chris; Jordan, Melissa M

    2015-01-01

    Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised.
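
    Of the imputation options discussed, the simplest temporal one can be sketched with pandas interpolation over a single station's daily maximum temperatures; the spatial and spatio-temporal models the authors recommend need neighboring stations and are not reproduced here, and the series and extreme-heat threshold below are illustrative.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        days = pd.date_range("2012-06-01", "2012-08-31", freq="D")
        tmax = pd.Series(33 + 4 * rng.standard_normal(len(days)), index=days)   # synthetic daily max (deg C)
        gaps = rng.choice(np.arange(1, len(days) - 1), 12, replace=False)       # knock out 12 interior days
        tmax.iloc[gaps] = np.nan

        filled = tmax.interpolate(method="time")   # simple temporal imputation
        threshold = filled.quantile(0.95)          # illustrative extreme-heat threshold
        hot_days = filled[filled >= threshold]
        print(len(hot_days), "days at or above", round(threshold, 1), "deg C")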

  8. Lepton-number-charged scalars and neutrino beamstrahlung

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, Jeffrey M.; de Gouvea, Andre; Kelly, Kevin J.

    Experimentally, baryon number minus lepton number, $B-L$, appears to be a good global symmetry of nature. We explore the consequences of the existence of gauge-singlet scalar fields charged under $B-L$, dubbed lepton-number-charged scalars (LeNCS), and postulate that these couple to the standard model degrees of freedom in such a way that $B-L$ is conserved even at the non-renormalizable level. In this framework, neutrinos are Dirac fermions. Including only the lowest mass-dimension effective operators, some of the LeNCS couple predominantly to neutrinos and may be produced in terrestrial neutrino experiments. We examine several existing constraints from particle physics, astrophysics, and cosmology on the existence of a LeNCS carrying $B-L$ charge equal to two, and discuss the emission of LeNCS's via "neutrino beamstrahlung," which occurs every once in a while when neutrinos scatter off of ordinary matter. In conclusion, we identify regions of the parameter space where existing and future neutrino experiments, including the Deep Underground Neutrino Experiment, are at the frontier of searches for such new phenomena.

  9. Lepton-number-charged scalars and neutrino beamstrahlung

    DOE PAGES

    Berryman, Jeffrey M.; de Gouvea, Andre; Kelly, Kevin J.; ...

    2018-04-23

    Experimentally, baryon number minus lepton number, $B-L$, appears to be a good global symmetry of nature. We explore the consequences of the existence of gauge-singlet scalar fields charged under $B-L$, dubbed lepton-number-charged scalars (LeNCS), and postulate that these couple to the standard model degrees of freedom in such a way that $B-L$ is conserved even at the non-renormalizable level. In this framework, neutrinos are Dirac fermions. Including only the lowest mass-dimension effective operators, some of the LeNCS couple predominantly to neutrinos and may be produced in terrestrial neutrino experiments. We examine several existing constraints from particle physics, astrophysics, and cosmology on the existence of a LeNCS carrying $B-L$ charge equal to two, and discuss the emission of LeNCS's via "neutrino beamstrahlung," which occurs every once in a while when neutrinos scatter off of ordinary matter. In conclusion, we identify regions of the parameter space where existing and future neutrino experiments, including the Deep Underground Neutrino Experiment, are at the frontier of searches for such new phenomena.

  10. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah "Fred"; Bekey, George A.

    1988-01-01

    Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.

  11. Identifying student mental models from their response pattern to a physics multiple-choice test

    NASA Astrophysics Data System (ADS)

    Montenegro Maggio, Maximiliano Jose

    Previous work has shown that students present different misconceptions across different but similar physical situations, but the cause of these differences is still not clear. In this study, a novel analysis method was introduced to help to gain a better understanding of how different physical situations affect students' responses and learning. This novel analysis groups students into mental model groups (MMG) by similarities in their responses to multiple-choice test items, under the assumption that they have similar mental models. The Mass and Energy Conservation test was developed to probe the common misconception that objects with greater mass fall faster than objects with lesser mass across four physical situations and four knowledge sub-domains: information, dynamics, work, and energy. The test was applied before and after energy instruction to 144 college students in a large Midwestern university attending a calculus-based introductory physics course. Test time (before or after instruction) and physical situation were the two factors. It was found that physical situation did not have a significant effect on mental models: the number of MMGs identified and the fraction of students belonging to the same MMG were not significantly different (p > .05) across physical situations. However, there was a significant effect of test time on mental models (p < .05): the fraction of students belonging to the same MMG changed from the pretest to the posttest, in that the MMG representing higher performance became more prevalent than the MMG with lower performance for the posttest results. A MANOVA for the average scores for each sub-domain and physical situation combination was applied to validate the previous results. It was found that a significant effect (p < .01) by physical situation resulted from a lower average dynamics sub-domain score for the friction physical-situation attribute when compared to the no-friction physical-situation attribute. A significant effect (p < .01

  12. Retrieving infinite numbers of patterns in a spin-glass model of immune networks

    NASA Astrophysics Data System (ADS)

    Agliari, E.; Annibale, A.; Barra, A.; Coolen, A. C. C.; Tantari, D.

    2017-01-01

    The similarity between neural and (adaptive) immune networks has been known for decades, but the mechanism that allows the immune system, unlike associative neural networks, to recall and execute a large number of memorized defense strategies in parallel has so far remained unclear. The explanation turns out to lie in the network topology. Neurons typically interact with a large number of other neurons, whereas interactions among lymphocytes in immune networks are very specific and described by graphs with finite connectivity. In this paper we use replica techniques to solve a statistical mechanical immune network model with "coordinator branches" (T-cells) and "effector branches" (B-cells), and show how the finite connectivity enables the coordinators to manage an extensive number of effectors simultaneously, even above the percolation threshold (where clonal cross-talk is not negligible). A consequence of its underlying topological sparsity is that the adaptive immune system exhibits only weak ergodicity breaking, so that spontaneous switch-like effects such as bi-stabilities are also present; the latter may play a significant role in the maintenance of immune homeostasis.

  13. Persistent Identifiers Implementation in EOSDIS

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K. "Rama"

    2016-01-01

    This presentation provides the motivation for and status of implementation of persistent identifiers in NASA's Earth Observation System Data and Information System (EOSDIS). The motivation is provided from the point of view of long-term preservation of datasets such that a number of questions raised by current and future users can be answered easily and precisely. A number of artifacts need to be preserved along with datasets to make this possible, especially when the authors of datasets are no longer available to address users' questions. The artifacts and datasets need to be uniquely and persistently identified and linked with each other for full traceability, understandability and scientific reproducibility. Current work in the Earth Science Data and Information System (ESDIS) Project and the Distributed Active Archive Centers (DAACs) in assigning Digital Object Identifiers (DOIs) is discussed, as well as challenges that remain to be addressed in the future.

  14. 15 CFR 14.18 - Taxpayer identification number.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Taxpayer identification number. 14.18... COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 14.18 Taxpayer identification number. In accordance with... identifying number will be required from applicants for grants and cooperative agreements funded by the DoC...

  15. Using whole-exome sequencing to identify variants inherited from mosaic parents

    PubMed Central

    Rios, Jonathan J; Delgado, Mauricio R

    2015-01-01

    Whole-exome sequencing (WES) has allowed the discovery of genes and variants causing rare human disease. This is often achieved by comparing nonsynonymous variants between unrelated patients, and particularly for sporadic or recessive disease, often identifies a single or few candidate genes for further consideration. However, despite the potential for this approach to elucidate the genetic cause of rare human disease, a majority of patients fail to realize a genetic diagnosis using standard exome analysis methods. Although genetic heterogeneity contributes to the difficulty of exome sequence analysis between patients, it remains plausible that rare human disease is not caused by de novo or recessive variants. Multiple human disorders have been described for which the variant was inherited from a phenotypically normal mosaic parent. Here we highlight the potential for exome sequencing to identify a reasonable number of candidate genes when dominant disease variants are inherited from a mosaic parent. We show the power of WES to identify a limited number of candidate genes using this disease model and how sequence coverage affects identification of mosaic variants by WES. We propose this analysis as an alternative to discover genetic causes of rare human disorders for which typical WES approaches fail to identify likely pathogenic variants. PMID:24986828

  16. A two-component Bayesian mixture model to identify implausible gestational age.

    PubMed

    Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz

    2016-01-01

    Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may introduce random or systematic errors. Therefore, the objective of this study is to utilize a Bayesian mixture model to identify implausible gestational ages. Methods: In this cross-sectional study, medical records of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified into less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was utilized to identify implausible gestational ages; the first component gives the probability of correct classification and the second the probability of incorrect classification of gestational ages. The data were analyzed with OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SD) of the second component for the less than 28 weeks and 28 to 31 weeks groups were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for both groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors in recording the gestational ages of these two groups of preterm infants included recording a gestational age lower than the actual value at birth. Therefore, developing scientific methods to correct these errors is essential for providing desirable health services and reporting accurate health indicators.
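
    The mixture idea above can be illustrated with a minimal, non-Bayesian analogue: a two-component Gaussian mixture fitted to birth weights within one gestational-age class, where the component with the implausibly high mean flags records whose recalled gestational age is likely wrong. The data, component means and 0.9 posterior cutoff below are assumptions for illustration; the study itself fits the mixture with OpenBUGS, not scikit-learn.

        # Minimal sketch: flag implausible gestational ages via a two-component
        # mixture on birth weight (hypothetical data; the paper uses a Bayesian
        # fit in OpenBUGS rather than scikit-learn).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Mostly plausible weights for the "<28 weeks" class, plus a minority of
        # heavier infants whose recorded gestational age is presumably too low.
        weights = np.concatenate([rng.normal(800, 150, 400),
                                  rng.normal(1200, 180, 100)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(weights)
        suspect = int(np.argmax(gmm.means_.ravel()))       # higher-mean component
        posterior = gmm.predict_proba(weights)[:, suspect]
        flagged = np.flatnonzero(posterior > 0.9)          # likely misclassified

        print("component means:", np.sort(gmm.means_.ravel()).round(1))
        print(f"{flagged.size} records flagged as implausible gestational age")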

  17. High-resolution array comparative genomic hybridization (aCGH) identifies copy number alterations in diffuse large B-cell lymphoma that predict response to immuno-chemotherapy

    PubMed Central

    Kreisel, F.; Kulkarni, S.; Kerns, R. T.; Hassan, A.; Deshmukh, H.; Nagarajan, R.; Frater, J. L.; Cashen, A.

    2013-01-01

    Despite recent attempts at sub-categorization, including gene expression profiling into the prognostically different groups of "germinal center B-cell type" and "activated B-cell type", diffuse large B-cell lymphoma (DLBCL) remains a biologically heterogeneous tumor with no clear prognostic biomarkers to guide therapy. Whole genome, high-resolution array comparative genomic hybridization (aCGH) was performed on 4 cases of chemoresistant DLBCL and 4 cases of chemo-responsive DLBCL to identify genetic differences which may correlate with response to R-CHOP therapy. Array CGH analysis identified 7 DNA copy number alteration (CNA) regions exclusive to the chemoresistant group, consisting of amplifications at 1p36.13, 1q42.3, 3p21.31, 7q11.23, and 16p13.3, and loss at 9p21.3 and 14p21.31. Copy number loss of the tumor suppressor genes CDKN2A (p16, p14) and CDKN2B (p15) at 9p21.3 was validated by fluorescence in situ hybridization and immunohistochemistry as independent techniques. In the chemo-sensitive group, 12 CNAs were detected, consisting of segment gains on 1p36.11, 1p36.22, 2q11.2, 8q24.3, 12p13.33, and 22q13.2 and segment loss on 6p21.32. RUNX3, a tumor suppressor gene located on 1p36.11, and MTHFR, which encodes the enzyme methylenetetrahydrofolate reductase, located on 1p36.22, are the only known genes in this group associated with lymphoma. Whole genome aCGH analysis has detected copy number alterations exclusive to either chemoresistant or chemo-responsive DLBCL that may represent consistent clonal changes predictive for prognosis and outcome of chemotherapy. PMID:21504712

  18. Identifying and addressing student difficulties with the ideal gas law

    NASA Astrophysics Data System (ADS)

    Kautz, Christian Hans

    This dissertation reports on an in-depth investigation of student understanding of the ideal gas law. The research and curriculum development were mostly conducted in the context of algebra- and calculus-based introductory physics courses and a sophomore-level thermal physics course. Research methods included individual demonstration interviews and written questions. Student difficulties with the quantities pressure, volume, temperature, and number of moles were identified. Data suggest that students' incorrect and incomplete microscopic models about gases contribute to the difficulties they have in answering questions posed in macroscopic terms. In addition, evidence for general reasoning difficulties is presented. These research results have guided the development of curriculum to address the student difficulties that have been identified.

  19. Effect of resonance decay on conserved number fluctuations in a hadron resonance gas model

    NASA Astrophysics Data System (ADS)

    Mishra, D. K.; Garg, P.; Netrakanti, P. K.; Mohanty, A. K.

    2016-07-01

    We study the effect of charged secondaries coming from resonance decay on the net-baryon, net-charge, and net-strangeness fluctuations in high-energy heavy-ion collisions within the hadron resonance gas (HRG) model. We emphasize the importance of including weak decays along with other resonance decays in the HRG while comparing with the experimental observables. The effect of kinematic cuts on resonances and primordial particles on the conserved number fluctuations is also studied. The HRG model calculations with the inclusion of resonance decays and kinematical cuts are compared with the recent experimental data from the STAR and PHENIX experiments. We find good agreement between our model calculations and the experimental measurements for both net-proton and net-charge distributions.
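
    As a concrete reference point for "conserved number fluctuations", the sketch below computes the cumulant ratios of a Skellam distribution, the independent-Poisson baseline that HRG-style calculations are often compared against. The means are illustrative only, and the paper's model additionally includes resonance decays and kinematic cuts.

        # Skellam baseline: if particles and antiparticles are independent Poisson
        # variables with means m_plus and m_minus, the net number has cumulants
        # C1 = C3 = m_plus - m_minus and C2 = C4 = m_plus + m_minus.
        import numpy as np

        def skellam_cumulant_ratios(m_plus, m_minus):
            c1 = c3 = m_plus - m_minus
            c2 = c4 = m_plus + m_minus
            return {"M/sigma^2": c1 / c2,       # mean over variance
                    "S*sigma": c3 / c2,         # skewness times sigma
                    "kappa*sigma^2": c4 / c2}   # kurtosis times variance

        # Monte Carlo cross-check of the analytic ratios (illustrative means).
        rng = np.random.default_rng(1)
        net = rng.poisson(20.0, 1_000_000) - rng.poisson(15.0, 1_000_000)
        print(skellam_cumulant_ratios(20.0, 15.0))
        print("sampled C1/C2:", round(net.mean() / net.var(), 3))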

  20. Using Persuasion Models to Identify Givers.

    ERIC Educational Resources Information Center

    Ferguson, Mary Ann; And Others

    1986-01-01

    Assesses the feasibility of and suggests using W. J. McGuire's information processing theory and cognitive response analysis theory in research studies to identify "givers"--those who are likely to contribute money and resources to charities or volunteer to aid philanthropic organizations. (SRT)

  1. Identifying the necessary and sufficient number of risk factors for predicting academic failure.

    PubMed

    Lucio, Robert; Hunt, Elizabeth; Bornovalova, Marina

    2012-03-01

    Identifying the point at which individuals become at risk for academic failure (grade point average [GPA] < 2.0) involves an understanding of which and how many factors contribute to poor outcomes. School-related factors appear to be among the many factors that significantly impact academic success or failure. This study focused on 12 school-related factors. Using a thorough 5-step process, we identified which unique risk factors place one at risk for academic failure. Academic engagement, academic expectations, academic self-efficacy, homework completion, school relevance, school safety, teacher relationships (positive relationship), grade retention, school mobility, and school misbehaviors (negative relationship) were uniquely related to GPA even after controlling for all relevant covariates. Next, a receiver operating characteristic curve was used to determine a cutoff point for determining how many risk factors predict academic failure (GPA < 2.0). Results yielded a cutoff point of 2 risk factors for predicting academic failure, which provides a way for early identification of individuals who are at risk. Further implications of these findings are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
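
    The cutoff step described above can be sketched in a few lines: given each student's count of risk factors and a failure indicator, a ROC curve over candidate cutoffs and Youden's J statistic pick the threshold. The synthetic data and logistic link below are assumptions; only the ROC/Youden procedure mirrors the study.

        # Choosing a risk-factor cutoff with a ROC curve (synthetic data).
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        n_risk_factors = rng.integers(0, 9, 500)              # 0-8 of 12 factors
        p_fail = 1 / (1 + np.exp(-(n_risk_factors - 2.5)))    # hypothetical link
        failed = rng.random(500) < p_fail                      # GPA < 2.0 indicator

        fpr, tpr, thresholds = roc_curve(failed, n_risk_factors)
        best = np.argmax(tpr - fpr)                            # Youden's J statistic
        print(f"cutoff: {thresholds[best]:.0f} or more risk factors")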

  2. Cosmological baryon and lepton number in the presence of electroweak fermion-number violation

    NASA Technical Reports Server (NTRS)

    Harvey, Jeffrey A.; Turner, Michael S.

    1990-01-01

    In the presence of rapid fermion-number violation due to nonperturbative electroweak effects certain relations between the baryon number of the Universe and the lepton numbers of the Universe are predicted. In some cases the electron-neutrino asymmetry is exactly specified in terms of the baryon asymmetry. Without introducing new particles, beyond the usual quarks and leptons, it is necessary that the Universe possess a nonzero value of $B-L$ prior to the epoch of fermion-number violation if baryon and lepton asymmetries are to survive. Contrary to intuition, even though electroweak processes violate $B+L$, a nonzero value of $B+L$ persists after the epoch of rapid fermion-number violation. If the standard model is extended to include lepton-number violation, for example through Majorana neutrino masses, then electroweak processes will reduce the baryon number to zero even in the presence of an initial $B-L$ unless $20\,M_L \gtrsim \sqrt{T_{B-L}\,m_{\rm Pl}}$, where $M_L$ sets the scale of lepton number violation and $T_{B-L}$ is the temperature at which a $B-L$ asymmetry is produced. In many models this implies that neutrinos must be so light that they cannot contribute appreciably to the mass density of the Universe.

  3. Predicting first-grade mathematics achievement: the contributions of domain-general cognitive abilities, nonverbal number sense, and early number competence.

    PubMed

    Hornung, Caroline; Schiltz, Christine; Brunner, Martin; Martin, Romain

    2014-01-01

    Early number competence, grounded in number-specific and domain-general cognitive abilities, is theorized to lay the foundation for later math achievement. Few longitudinal studies have tested a comprehensive model for early math development. Using structural equation modeling and mediation analyses, the present work examined the influence of kindergarteners' nonverbal number sense and domain-general abilities (i.e., working memory, fluid intelligence, and receptive vocabulary) and their early number competence (i.e., symbolic number skills) on first grade math achievement (i.e., arithmetic, shape and space skills, and number line estimation) assessed 1 year later. Latent regression models revealed that nonverbal number sense and working memory are central building blocks for developing early number competence in kindergarten and that early number competence is key for first grade math achievement. After controlling for early number competence, fluid intelligence significantly predicted arithmetic and number line estimation while receptive vocabulary significantly predicted shape and space skills. In sum we suggest that early math achievement draws on different constellations of number-specific and domain-general mechanisms.

  4. Predicting first-grade mathematics achievement: the contributions of domain-general cognitive abilities, nonverbal number sense, and early number competence

    PubMed Central

    Hornung, Caroline; Schiltz, Christine; Brunner, Martin; Martin, Romain

    2014-01-01

    Early number competence, grounded in number-specific and domain-general cognitive abilities, is theorized to lay the foundation for later math achievement. Few longitudinal studies have tested a comprehensive model for early math development. Using structural equation modeling and mediation analyses, the present work examined the influence of kindergarteners' nonverbal number sense and domain-general abilities (i.e., working memory, fluid intelligence, and receptive vocabulary) and their early number competence (i.e., symbolic number skills) on first grade math achievement (i.e., arithmetic, shape and space skills, and number line estimation) assessed 1 year later. Latent regression models revealed that nonverbal number sense and working memory are central building blocks for developing early number competence in kindergarten and that early number competence is key for first grade math achievement. After controlling for early number competence, fluid intelligence significantly predicted arithmetic and number line estimation while receptive vocabulary significantly predicted shape and space skills. In sum we suggest that early math achievement draws on different constellations of number-specific and domain-general mechanisms. PMID:24772098
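
    A stripped-down, regression-based stand-in for the mediation step (observed composite scores instead of latent variables, synthetic data) can make the "early number competence mediates the effect of number sense" claim concrete; the study itself uses full latent-variable structural equation models rather than the sketch below.

        # Regression-based mediation sketch (Baron-Kenny style) with synthetic data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 300
        number_sense = rng.normal(size=n)                        # kindergarten predictor
        early_number = 0.6 * number_sense + rng.normal(size=n)   # mediator
        math_grade1 = 0.5 * early_number + 0.1 * number_sense + rng.normal(size=n)

        a = sm.OLS(early_number, sm.add_constant(number_sense)).fit().params[1]
        fit_y = sm.OLS(math_grade1, sm.add_constant(
            np.column_stack([number_sense, early_number]))).fit()
        direct, b = fit_y.params[1], fit_y.params[2]
        print(f"indirect effect a*b = {a * b:.2f}, direct effect = {direct:.2f}")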

  5. Persistent Identifiers for Dutch cultural heritage institutions

    NASA Astrophysics Data System (ADS)

    Ras, Marcel; Kruithof, Gijsbert

    2016-04-01

    Over the past years, more and more collections belonging to archives, libraries, media, museums, and knowledge institutes are being digitised and made available online. These are exciting times for ALM institutions. They are realising that, in the information society, their collections are goldmines. Unfortunately, most heritage institutions in the Netherlands do not yet meet the basic preconditions for long-term availability of their collections. The digital objects often have no long-lasting fixed reference yet. URLs and web addresses change. Some digital objects that were referenced in Europeana and other portals can no longer be found. References in scientific articles have a very short life span, which is damaging for scholarly research. In 2015, the Dutch Digital Heritage Network (NDE) has started a two-year work programme to co-ordinate existing initiatives in order to improve the (long-term) accessibility of the Dutch digital heritage for a wide range of users, anytime, anyplace. The Digital Heritage Network is a partnership established on the initiative of the Ministry of Education, Culture and Science. The members of the NDE are large, national institutions that strive to professionally preserve and manage digital data, e.g. the National Library, The Netherlands Institute for Sound and Vision, the Netherlands Cultural Heritage Agency, the Royal Netherlands Academy of Arts and Sciences, the National Archive of the Netherlands and the DEN Foundation, and a growing number of associations and individuals both within and outside the heritage sector. The goals of the Network are to be accomplished by means of three work programmes, which aim to improve the visibility, the usability and the sustainability of digital heritage. Each programme consists of a set of projects. Within the sustainability programme, a project on creating a model for persistent identifiers is taking place. The main goals of the project are (1) raise awareness among cultural heritage institutions on the

  6. Expression profiling identifies novel Hh/Gli regulated genes in developing zebrafish embryos.

    PubMed Central

    Bergeron, Sadie A.; Milla, Luis A.; Villegas, Rosario; Shen, Meng-Chieh; Burgess, Shawn M.; Allende, Miguel L.; Karlstrom, Rolf O.; Palma, Verónica

    2008-01-01

    The Hedgehog (Hh) signaling pathway plays critical instructional roles during embryonic development. Mis-regulation of Hh/Gli signaling is a major causative factor in human congenital disorders and in a variety of cancers. The zebrafish is a powerful genetic model for the study of Hh signaling during embryogenesis, as a large number of mutants have been identified affecting different components of the Hh/Gli signaling system. By performing global profiling of gene expression in different Hh/Gli gain- and loss-of-function scenarios we identified several known (e.g. ptc1 and nkx2.2a) as well as a large number of novel Hh regulated genes that are differentially expressed in embryos with altered Hh/Gli signaling function. By uncovering changes in tissue specific gene expression, we revealed new embryological processes that are influenced by Hh signaling. We thus provide a comprehensive survey of Hh/Gli regulated genes during embryogenesis and we identify new Hh-regulated genes that may be targets of mis-regulation during tumorigenesis. PMID:18055165

  7. Targetable vulnerabilities in T- and NK-cell lymphomas identified through preclinical models.

    PubMed

    Ng, Samuel Y; Yoshida, Noriaki; Christie, Amanda L; Ghandi, Mahmoud; Dharia, Neekesh V; Dempster, Joshua; Murakami, Mark; Shigemori, Kay; Morrow, Sara N; Van Scoyk, Alexandria; Cordero, Nicolas A; Stevenson, Kristen E; Puligandla, Maneka; Haas, Brian; Lo, Christopher; Meyers, Robin; Gao, Galen; Cherniack, Andrew; Louissaint, Abner; Nardi, Valentina; Thorner, Aaron R; Long, Henry; Qiu, Xintao; Morgan, Elizabeth A; Dorfman, David M; Fiore, Danilo; Jang, Julie; Epstein, Alan L; Dogan, Ahmet; Zhang, Yanming; Horwitz, Steven M; Jacobsen, Eric D; Santiago, Solimar; Ren, Jian-Guo; Guerlavais, Vincent; Annis, D Allen; Aivado, Manuel; Saleh, Mansoor N; Mehta, Amitkumar; Tsherniak, Aviad; Root, David; Vazquez, Francisca; Hahn, William C; Inghirami, Giorgio; Aster, Jon C; Weinstock, David M; Koch, Raphael

    2018-05-22

    T- and NK-cell lymphomas (TCL) are a heterogeneous group of lymphoid malignancies with poor prognosis. In contrast to B-cell and myeloid malignancies, there are few preclinical models of TCLs, which has hampered the development of effective therapeutics. Here we establish and characterize preclinical models of TCL. We identify multiple vulnerabilities that are targetable with currently available agents (e.g., inhibitors of JAK2 or IKZF1) and demonstrate proof-of-principle for biomarker-driven therapies using patient-derived xenografts (PDXs). We show that MDM2 and MDMX are targetable vulnerabilities within TP53-wild-type TCLs. ALRN-6924, a stapled peptide that blocks interactions between p53 and both MDM2 and MDMX, has potent in vitro activity and superior in vivo activity across 8 different PDX models compared to the standard-of-care agent romidepsin. ALRN-6924 induced a complete remission in a patient with TP53-wild-type angioimmunoblastic T-cell lymphoma, demonstrating the potential for rapid translation of discoveries from subtype-specific preclinical models.

  8. Compilation of Reprints Number 64.

    DTIC Science & Technology

    1987-11-01

    [The scanned report documentation page for this compilation is OCR-garbled; recoverable fragments identify the author (Menke, William), contract number N00014-84-C-0218, and a reprint concluding that no major effect of scatterer aspect ratio on the decay rate was identified in the one relevant test performed.]

  9. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    ERIC Educational Resources Information Center

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  10. Constructing conceptual knowledge and promoting "number sense" from computer-managed practice in rounding whole numbers

    NASA Astrophysics Data System (ADS)

    Hativa, Nira

    1993-12-01

    This study sought to identify how high achievers learn and understand new concepts in arithmetic from computer-based practice which provides full solutions to examples but without verbal explanations. Four high-achieving second graders were observed in their natural school settings throughout all their computer-based practice sessions which involved the concept of rounding whole numbers, a concept which was totally new to them. Immediate post-session interviews inquired into students' strategies for solutions, errors, and their understanding of the underlying mathematical rules. The article describes the process through which the students construct their knowledge of the rounding concepts and the errors and misconceptions encountered in this process. The article identifies the cognitive abilities that promote student self-learning of the rounding concepts, their number concepts and "number sense." Differences in the ability to generalise, "mathematical memory," mindfulness of work and use of cognitive strategies are shown to account for the differences in patterns of, and gains in, learning and in maintaining knowledge among the students involved. Implications for the teaching of estimation concepts and of promoting students' "number sense," as well as for classroom use of computer-based practice are discussed.
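
    For readers unfamiliar with the task the second graders practiced, a tiny worked example of rounding whole numbers to a given place value is shown below; it illustrates the arithmetic rule only and is not the drill software used in the study.

        # Round a whole number to the nearest multiple of a place value (10, 100, ...).
        def round_whole(n: int, place: int) -> int:
            return ((n + place // 2) // place) * place

        assert round_whole(47, 10) == 50
        assert round_whole(44, 10) == 40
        assert round_whole(450, 100) == 500   # ties round up under this rule
        print(round_whole(386, 10), round_whole(386, 100))   # 390 400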

  11. The Number Density of Quiescent Compact Galaxies at Intermediate Redshift

    NASA Astrophysics Data System (ADS)

    Damjanov, Ivana; Hwang, Ho Seong; Geller, Margaret J.; Chilingarian, Igor

    2014-09-01

    Massive compact systems at 0.2 < z < 0.6 are the missing link between the predominantly compact population of massive quiescent galaxies at high redshift and their analogs and relics in the local volume. The evolution in number density of these extreme objects over cosmic time is the crucial constraining factor for the models of massive galaxy assembly. We select a large sample of ~200 intermediate-redshift massive compacts from the Baryon Oscillation Spectroscopic Survey (BOSS) spectroscopy by identifying point-like Sloan Digital Sky Survey photometric sources with spectroscopic signatures of evolved redshifted galaxies. A subset of our targets have publicly available high-resolution ground-based images that we use to augment the dynamical and stellar population properties of these systems by their structural parameters. We confirm that all BOSS compact candidates are as compact as their high-redshift massive counterparts and less than half the size of similarly massive systems at z ~ 0. We use the completeness-corrected numbers of BOSS compacts to compute lower limits on their number densities in narrow redshift bins spanning the range of our sample. The abundance of extremely dense quiescent galaxies at 0.2 < z < 0.6 is in excellent agreement with the number densities of these systems at high redshift. Our lower limits support the models of massive galaxy assembly through a series of minor mergers over the redshift range 0 < z < 2.
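
    A minimal version of the number-density bookkeeping is sketched below: completeness-corrected counts in a redshift bin are divided by the comoving volume of the survey footprint. The counts, survey area and cosmological parameters are placeholders, not the BOSS values used in the paper.

        # Comoving number density in a redshift bin (illustrative inputs).
        import astropy.units as u
        from astropy.cosmology import FlatLambdaCDM

        cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

        def number_density(n_corrected, z_lo, z_hi, area_deg2):
            sky_fraction = area_deg2 / 41253.0                    # full sky, deg^2
            shell = cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)
            return (n_corrected / (shell * sky_fraction)).to(u.Mpc ** -3)

        # e.g. 50 completeness-corrected compact galaxies over 1000 deg^2
        print(number_density(50, 0.2, 0.3, 1000.0))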

  12. Modified screening and ranking algorithm for copy number variation detection.

    PubMed

    Xiao, Feifei; Min, Xiaoyi; Zhang, Heping

    2015-05-01

    Copy number variation (CNV) is a type of structural variation, usually defined as genomic segments that are 1 kb or larger, which present variable copy numbers when compared with a reference genome. The screening and ranking algorithm (SaRa) was recently proposed as an efficient approach for multiple change-points detection, which can be applied to CNV detection. However, some practical issues arise from application of SaRa to single nucleotide polymorphism data. In this study, we propose a modified SaRa on CNV detection to address these issues. First, we use the quantile normalization on the original intensities to guarantee that the normal mean model-based SaRa is a robust method. Second, a novel normal mixture model coupled with a modified Bayesian information criterion is proposed for candidate change-point selection and further clustering the potential CNV segments to copy number states. Simulations revealed that the modified SaRa became a robust method for identifying change-points and achieved better performance than the circular binary segmentation (CBS) method. By applying the modified SaRa to real data from the HapMap project, we illustrated its performance on detecting CNV segments. In conclusion, our modified SaRa method improves SaRa theoretically and numerically, for identifying CNVs with high-throughput genotyping data. The modSaRa package is implemented in R program and freely available at http://c2s2.yale.edu/software/modSaRa. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
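
    The screening step that gives SaRa its name can be sketched directly: a local diagnostic compares the mean intensity in windows just left and right of each position, and local maxima of its absolute value above a threshold become candidate change-points. The toy data, window size and threshold below are assumptions; the modified SaRa adds quantile normalization, a normal mixture model and a modified BIC on top of this step.

        # Simplified SaRa screening: local mean difference as a change-point score.
        import numpy as np

        def local_diagnostic(y, h):
            d = np.full(y.size, np.nan)
            for i in range(h, y.size - h):
                d[i] = y[i:i + h].mean() - y[i - h:i].mean()
            return d

        rng = np.random.default_rng(0)
        y = rng.normal(0, 0.2, 600)
        y[200:260] += 0.8                          # toy copy-number gain
        d = local_diagnostic(y, h=20)
        candidates = [i for i in range(21, 579)
                      if abs(d[i]) > 0.5
                      and abs(d[i]) == np.nanmax(np.abs(d[i - 20:i + 21]))]
        print("candidate change-points:", candidates)   # expect values near 200 and 260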

  13. Using site-selection model to identify suitable sites for seagrass transplantation in the west coast of South Sulawesi

    NASA Astrophysics Data System (ADS)

    Lanuru, Mahatma; Mashoreng, S.; Amri, K.

    2018-03-01

    The success of seagrass transplantation depends very much on site selection and suitable transplantation methods. The main objective of this study is to develop and use a site-selection model to identify the suitability of sites for seagrass (Enhalus acoroides) transplantation. Model development was based on the physical and biological characteristics of the transplantation site. The site-selection process is divided into three phases: Phase I identifies potential seagrass habitat using available knowledge and removes unsuitable sites before the transplantation test is performed. Phase II involves field assessment and a transplantation test of the best-scoring areas identified in Phase I. Phase III is the final calculation of the TSI (Transplant Suitability Index), based on results from Phases I and II. The model was used to identify the suitability of sites for seagrass transplantation on the west coast of South Sulawesi (3 sites at Labakkang Coast, 3 sites at Awerange Bay, and 3 sites at Lale-Lae Island). Of the 9 sites, two were predicted by the site-selection model to be the most suitable for seagrass transplantation: Site II at Labakkang Coast and Site III at Lale-Lae Island.
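
    How the phase scores might roll up into a single Transplant Suitability Index is sketched below. The criteria, 0-1 scoring scale and phase weights are invented for illustration; the paper derives Phase I scores from existing habitat knowledge and Phase II scores from field assessment and the transplantation test.

        # Illustrative TSI aggregation from normalized phase scores.
        def tsi(phase1_scores, phase2_scores, w1=0.4, w2=0.6):
            p1 = sum(phase1_scores.values()) / len(phase1_scores)
            p2 = sum(phase2_scores.values()) / len(phase2_scores)
            return w1 * p1 + w2 * p2

        site_score = tsi(
            phase1_scores={"depth": 0.8, "substrate": 0.7, "wave_exposure": 0.6},
            phase2_scores={"survival_test": 0.9, "water_clarity": 0.7},
        )
        print(f"TSI = {site_score:.2f}")   # higher values indicate a more suitable site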

  14. Complex architecture of primes and natural numbers.

    PubMed

    García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián

    2014-08-01

    Natural numbers can be divided in two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic processes theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emerging property and a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than the original Cramér's version. Regarding composites, the model helps us to derive the prime factors counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help to get deeper insights about primes and the complex architecture of natural numbers.
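
    The prime number theorem that the model reproduces as an emergent property is easy to check numerically: the sieve below counts primes up to x and compares the count with x / ln x. This is only a sanity check of the theorem, not an implementation of the paper's non-Markovian generative model.

        # Compare the prime-counting function pi(x) with the x/ln(x) approximation.
        import math

        def primes_up_to(n):
            sieve = bytearray([1]) * (n + 1)
            sieve[0:2] = b"\x00\x00"
            for p in range(2, int(n ** 0.5) + 1):
                if sieve[p]:
                    sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
            return [i for i, is_prime in enumerate(sieve) if is_prime]

        for x in (10 ** 3, 10 ** 4, 10 ** 5):
            pi_x = len(primes_up_to(x))
            print(x, pi_x, round(pi_x / (x / math.log(x)), 3))   # ratio tends to 1 slowly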

  15. 33 CFR 62.43 - Numbers and letters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... addition to numbers to identify the first aid to navigation in a waterway, or when new aids to navigation... aid except that letters and numbers may be white. (d) Exceptions to the provisions of this section...

  16. An Ecohydraulic Model to Identify and Monitor Moapa Dace Habitat

    PubMed Central

    Hatten, James R.; Batt, Thomas R.; Scoppettone, Gary G.; Dixon, Christopher J.

    2013-01-01

    Moapa dace (Moapa coriacea) is a critically endangered thermophilic minnow native to the Muddy River ecosystem in southeastern Nevada, USA. Restricted to temperatures between 26.0 and 32.0°C, these fish are constrained to the upper two km of the Muddy River and several small tributaries fed by warm springs. Habitat alterations, nonnative species invasion, and water withdrawals during the 20th century resulted in a drastic decline in the dace population and in 1979 the Moapa Valley National Wildlife Refuge (Refuge) was created to protect them. The goal of our study was to determine the potential effects of reduced surface flows that might result from groundwater pumping or water diversions on Moapa dace habitat inside the Refuge. We accomplished our goal in several steps. First, we conducted snorkel surveys to determine the locations of Moapa dace on three warm-spring tributaries of the Muddy River. Second, we conducted hydraulic simulations over a range of flows with a two-dimensional hydrodynamic model. Third, we developed a set of Moapa dace habitat models with logistic regression and a geographic information system. Fourth, we estimated Moapa dace habitat over a range of flows (plus or minus 30% of base flow). Our spatially explicit habitat models achieved classification accuracies between 85% and 91%, depending on the snorkel survey and creek. Water depth was the most significant covariate in our models, followed by substrate, Froude number, velocity, and water temperature. Hydraulic simulations showed 2–11% gains in dace habitat when flows were increased by 30%, and 8–32% losses when flows were reduced by 30%. To ensure the health and survival of Moapa dace and the Muddy River ecosystem, groundwater and surface-water withdrawals and diversions need to be carefully monitored, while fully implementing a proactive conservation strategy. PMID:23408999
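
    The core of the habitat model, a logistic regression of dace presence on hydraulic covariates, can be sketched as follows. The covariates and the presence rule below are synthetic stand-ins; the study fit presence/absence from snorkel surveys against depth, substrate, Froude number, velocity and temperature inside a GIS.

        # Logistic-regression habitat model sketch with synthetic covariates.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 400
        depth = rng.uniform(0.05, 1.0, n)            # m
        velocity = rng.uniform(0.0, 1.2, n)          # m/s
        froude = velocity / np.sqrt(9.81 * depth)
        temp = rng.uniform(26.0, 32.0, n)            # deg C
        # Hypothetical preference: deeper, low-Froude-number habitat.
        logit = 4 * depth - 3 * froude + 0.1 * (temp - 29) - 0.5
        present = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([depth, velocity, froude, temp])
        model = LogisticRegression(max_iter=1000).fit(X, present)
        print("training accuracy:", round(model.score(X, present), 2))
        print("coefficients:", model.coef_.round(2))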

  17. An ecohydraulic model to identify and monitor moapa dace habitat

    USGS Publications Warehouse

    Hatten, James R.; Batt, Thomas R.; Scoppettone, Gayton G.; Dixon, Christopher J.

    2013-01-01

    Moapa dace (Moapa coriacea) is a critically endangered thermophilic minnow native to the Muddy River ecosystem in southeastern Nevada, USA. Restricted to temperatures between 26.0 and 32.0°C, these fish are constrained to the upper two km of the Muddy River and several small tributaries fed by warm springs. Habitat alterations, nonnative species invasion, and water withdrawals during the 20th century resulted in a drastic decline in the dace population and in 1979 the Moapa Valley National Wildlife Refuge (Refuge) was created to protect them. The goal of our study was to determine the potential effects of reduced surface flows that might result from groundwater pumping or water diversions on Moapa dace habitat inside the Refuge. We accomplished our goal in several steps. First, we conducted snorkel surveys to determine the locations of Moapa dace on three warm-spring tributaries of the Muddy River. Second, we conducted hydraulic simulations over a range of flows with a two-dimensional hydrodynamic model. Third, we developed a set of Moapa dace habitat models with logistic regression and a geographic information system. Fourth, we estimated Moapa dace habitat over a range of flows (plus or minus 30% of base flow). Our spatially explicit habitat models achieved classification accuracies between 85% and 91%, depending on the snorkel survey and creek. Water depth was the most significant covariate in our models, followed by substrate, Froude number, velocity, and water temperature. Hydraulic simulations showed 2-11% gains in dace habitat when flows were increased by 30%, and 8-32% losses when flows were reduced by 30%. To ensure the health and survival of Moapa dace and the Muddy River ecosystem, groundwater and surface-water withdrawals and diversions need to be carefully monitored, while fully implementing a proactive conservation strategy.

  18. Genomic copy number analysis of a spectrum of blue nevi identifies recurrent aberrations of entire chromosomal arms in melanoma ex blue nevus.

    PubMed

    Chan, May P; Andea, Aleodor A; Harms, Paul W; Durham, Alison B; Patel, Rajiv M; Wang, Min; Robichaud, Patrick; Fisher, Gary J; Johnson, Timothy M; Fullen, Douglas R

    2016-03-01

    Blue nevi may display significant atypia or undergo malignant transformation. Morphologic diagnosis of this spectrum of lesions is notoriously difficult, and molecular tools are increasingly used to improve diagnostic accuracy. We studied copy number aberrations in a cohort of cellular blue nevi, atypical cellular blue nevi, and melanomas ex blue nevi using Affymetrix's OncoScan platform. Cases with sufficient DNA were analyzed for GNAQ, GNA11, and HRAS mutations. Copy number aberrations were detected in 0 of 5 (0%) cellular blue nevi, 3 of 12 (25%) atypical cellular blue nevi, and 6 of 9 (67%) melanomas ex blue nevi. None of the atypical cellular blue nevi displayed more than one aberration, whereas complex aberrations involving four or more regions were seen exclusively in melanomas ex blue nevi. Gains and losses of entire chromosomal arms were identified in four of five melanomas ex blue nevi with copy number aberrations. In particular, gains of 1q, 4p, 6p, and 8q, and losses of 1p and 4q were each found in at least two melanomas. Whole chromosome aberrations were also common, and represented the sole finding in one atypical cellular blue nevus. When seen in melanomas, however, whole chromosome aberrations were invariably accompanied by partial aberrations of other chromosomes. Three melanomas ex blue nevi harbored aberrations, which were absent or negligible in their precursor components, suggesting progression in tumor biology. Gene mutations involving GNAQ and GNA11 were each detected in two of eight melanomas ex blue nevi. In conclusion, copy number aberrations are more common and often complex in melanomas ex blue nevi compared with cellular and atypical cellular blue nevi. Identification of recurrent gains and losses of entire chromosomal arms in melanomas ex blue nevi suggests that development of new probes targeting these regions may improve detection and risk stratification of these lesions.

  19. 26 CFR 301.7701-11 - Social security number.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 18 2011-04-01 2011-04-01 false Social security number. 301.7701-11 Section 301... ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-11 Social security number. For purposes of this chapter, the term social security number means the taxpayer identifying number of an individual or...

  20. 26 CFR 301.7701-11 - Social security number.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 18 2012-04-01 2012-04-01 false Social security number. 301.7701-11 Section 301... ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-11 Social security number. For purposes of this chapter, the term social security number means the taxpayer identifying number of an individual or...

  1. 26 CFR 301.7701-11 - Social security number.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 18 2014-04-01 2014-04-01 false Social security number. 301.7701-11 Section 301... ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-11 Social security number. For purposes of this chapter, the term social security number means the taxpayer identifying number of an individual or...

  2. 26 CFR 301.7701-11 - Social security number.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 18 2013-04-01 2013-04-01 false Social security number. 301.7701-11 Section 301... ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-11 Social security number. For purposes of this chapter, the term social security number means the taxpayer identifying number of an individual or...

  3. 26 CFR 301.7701-11 - Social security number.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Social security number. 301.7701-11 Section 301... ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-11 Social security number. For purposes of this chapter, the term social security number means the taxpayer identifying number of an individual or...

  4. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve (AUC), was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using the AUC from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
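
    The CHO computation itself is compact: project each scan onto a small set of channels, build the Hotelling template from training scans, score held-out scans, and estimate the AUC nonparametrically. The random channel matrix and one-dimensional toy scans below are assumptions; the study used repeated CT scans of a low-contrast phantom and task-specific channels.

        # Channelized Hotelling observer sketch with synthetic data.
        import numpy as np

        def cho_auc(signal_imgs, noise_imgs, U):
            vs, vn = signal_imgs @ U, noise_imgs @ U            # channel outputs
            hs, hn = len(vs) // 2, len(vn) // 2                 # training halves
            K = 0.5 * (np.cov(vs[:hs].T) + np.cov(vn[:hn].T))   # pooled covariance
            w = np.linalg.solve(K, vs[:hs].mean(0) - vn[:hn].mean(0))  # template
            ts, tn = vs[hs:] @ w, vn[hn:] @ w                   # test statistics
            return (ts[:, None] > tn[None, :]).mean()           # Mann-Whitney AUC

        rng = np.random.default_rng(0)
        n_scans, n_px, n_ch = 100, 64, 5
        U = rng.normal(size=(n_px, n_ch))                       # stand-in channels
        lesion = 0.3 * np.exp(-np.linspace(-2, 2, n_px) ** 2)   # low-contrast object
        noise = rng.normal(size=(n_scans, n_px))
        signal = rng.normal(size=(n_scans, n_px)) + lesion
        print("AUC:", round(cho_auc(signal, noise, U), 3))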

  5. Rapid identifying high-influence nodes in complex networks

    NASA Astrophysics Data System (ADS)

    Song, Bo; Jiang, Guo-Ping; Song, Yu-Rong; Xia, Ling-Ling

    2015-10-01

    A tiny fraction of influential individuals play a critical role in the dynamics of complex systems. Identifying the influential nodes in complex networks has theoretical and practical significance. Considering the uncertainties of network scale and topology, and the timeliness of dynamic behaviors in real networks, we propose a rapid identifying method (RIM) to find the fraction of high-influential nodes. Instead of ranking all nodes, our method only aims at ranking a small number of nodes in the network. We set the high-influential nodes as initial spreaders, and evaluate the performance of RIM with the susceptible-infected-recovered (SIR) model. The simulations show that in different networks, RIM performs well at rapidly identifying high-influential nodes, which is verified by typical ranking methods, such as degree, closeness, betweenness, and eigenvector centrality methods. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374180 and 61373136), the Ministry of Education Research in the Humanities and Social Sciences Planning Fund Project, China (Grant No. 12YJAZH120), and the Six Projects Sponsoring Talent Summits of Jiangsu Province, China (Grant No. RLD201212).
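
    The evaluation loop described above, seeding an SIR epidemic from the selected high-influence nodes, looks roughly like the sketch below. Degree centrality stands in for RIM's ranking here, and the graph and infection parameters are illustrative; the paper's point is precisely that its method avoids ranking all nodes.

        # Seed an SIR outbreak from top-degree nodes and measure final outbreak size.
        import random
        import networkx as nx

        def sir_outbreak(G, seeds, beta=0.1, gamma=1.0, rng=random.Random(0)):
            # With gamma = 1.0 each node stays infectious for one time step.
            infected, recovered = set(seeds), set()
            while infected:
                new = {v for u in infected for v in G.neighbors(u)
                       if v not in infected and v not in recovered
                       and rng.random() < beta}
                recovered |= {u for u in infected if rng.random() < gamma}
                infected = (infected | new) - recovered
            return len(recovered)

        G = nx.barabasi_albert_graph(2000, 3, seed=1)
        seeds = sorted(G.nodes, key=G.degree, reverse=True)[:20]   # high-influence set
        print("final outbreak size:", sir_outbreak(G, seeds))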

  6. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    NASA Astrophysics Data System (ADS)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size
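
    For intuition about how the moment of inertia constrains a two-layer model, the sketch below evaluates the normalized moment of inertia, NMOI = I / (M R^2), for a constant-density core and envelope. The densities and core radius are placeholders, and constant-density layers overestimate the NMOI relative to the paper's roughly 0.275 because Jupiter's real density increases toward the center; the paper fits continuous radial density profiles to the Juno gravity data.

        # NMOI of a two-layer planet with constant-density core and envelope.
        import numpy as np

        def two_layer_nmoi(r_core, rho_core, r_planet, rho_env):
            m = 4 * np.pi / 3 * (rho_core * r_core ** 3
                                 + rho_env * (r_planet ** 3 - r_core ** 3))
            i = 8 * np.pi / 15 * (rho_core * r_core ** 5
                                  + rho_env * (r_planet ** 5 - r_core ** 5))
            return m, i / (m * r_planet ** 2)

        R_J = 6.9911e7                          # Jupiter's mean radius, m
        mass, nmoi = two_layer_nmoi(r_core=0.15 * R_J, rho_core=9000.0,
                                    r_planet=R_J, rho_env=1300.0)
        print(f"mass = {mass:.3e} kg, NMOI = {nmoi:.4f}")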

  7. Gaussian Graphical Models Identify Networks of Dietary Intake in a German Adult Population.

    PubMed

    Iqbal, Khalid; Buijsse, Brian; Wirth, Janine; Schulze, Matthias B; Floegel, Anna; Boeing, Heiner

    2016-03-01

    Data-reduction methods such as principal component analysis are often used to derive dietary patterns. However, such methods do not assess how foods are consumed in relation to each other. Gaussian graphical models (GGMs) are a set of novel methods that can address this issue. We sought to apply GGMs to derive sex-specific dietary intake networks representing consumption patterns in a German adult population. Dietary intake data from 10,780 men and 16,340 women of the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort were cross-sectionally analyzed to construct dietary intake networks. Food intake for each participant was estimated using a 148-item food-frequency questionnaire that captured the intake of 49 food groups. GGMs were applied to log-transformed intakes (grams per day) of 49 food groups to construct sex-specific food networks. Semiparametric Gaussian copula graphical models (SGCGMs) were used to confirm GGM results. In men, GGMs identified 1 major dietary network that consisted of intakes of red meat, processed meat, cooked vegetables, sauces, potatoes, cabbage, poultry, legumes, mushrooms, soup, and whole-grain and refined breads. For women, a similar network was identified with the addition of fried potatoes. Other identified networks consisted of dairy products and sweet food groups. SGCGMs yielded results comparable to those of GGMs. GGMs are a powerful exploratory method that can be used to construct dietary networks representing dietary intake patterns that reveal how foods are consumed in relation to each other. GGMs indicated an apparent major role of red meat intake in a consumption pattern in the studied population. In the future, identified networks might be transformed into pattern scores for investigating their associations with health outcomes. © 2016 American Society for Nutrition.
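
    A miniature version of the network-construction step can be sketched with scikit-learn's graphical lasso: standardize log intakes, estimate a sparse precision matrix, and read edges off the nonzero partial correlations. The food groups, synthetic intake matrix and 0.05 edge threshold below are assumptions; the study used 49 food groups from a 148-item FFQ.

        # Dietary network sketch via a Gaussian graphical model (graphical lasso).
        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(0)
        foods = ["red_meat", "processed_meat", "potatoes", "bread", "dairy"]
        intake = rng.lognormal(mean=3.0, sigma=0.5, size=(500, len(foods)))
        intake[:, 1] += 0.5 * intake[:, 0]          # induce a meat co-consumption link

        X = np.log(intake)
        X = (X - X.mean(0)) / X.std(0)
        model = GraphicalLassoCV().fit(X)
        P = model.precision_
        partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
        np.fill_diagonal(partial_corr, 1.0)

        edges = [(foods[i], foods[j], round(partial_corr[i, j], 2))
                 for i in range(len(foods)) for j in range(i + 1, len(foods))
                 if abs(partial_corr[i, j]) > 0.05]
        print(edges)    # nonzero partial correlations define the network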

  8. Efficient and robust quantum random number generation by photon number detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.

    2015-08-17

    We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99% corresponding to 1.97 bits per detected photon number yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
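
    A rough sense of the reported 1.97 bits per detected photon number comes from the Shannon entropy of the detected photon-number distribution, which upper-bounds the extractable randomness. The truncated-Poisson model and mean photon numbers below are assumptions for illustration; the paper measures and models the actual detector response and uses a dedicated extractor.

        # Entropy of a photon-number distribution truncated at 4 detected photons.
        from math import exp, factorial

        import numpy as np

        def truncated_poisson(mean, n_max=4):
            p = np.array([mean ** k * exp(-mean) / factorial(k) for k in range(n_max)])
            return np.append(p, 1.0 - p.sum())     # lump >= n_max into the top bin

        def shannon_entropy(p):
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        for mu in (1.0, 1.5, 2.0, 2.5):
            print(mu, round(shannon_entropy(truncated_poisson(mu)), 3), "bits/detection")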

  9. Identifying critical success factors (CSFs) of implementing building information modeling (BIM) in Malaysian construction industry

    NASA Astrophysics Data System (ADS)

    Yaakob, Mazri; Ali, Wan Nur Athirah Wan; Radzuan, Kamaruddin

    2016-08-01

    Building Information Modeling (BIM) spans a project's life cycle from the earliest concept to demolition and involves creating and using an intelligent 3D model to inform and communicate project decisions. This research aims to identify the critical success factors (CSFs) of BIM implementation in the Malaysian construction industry. A literature review was done to explore previous BIM studies on the definitions and history of BIM, construction issues, the application of BIM in construction projects, and the benefits of BIM. A series of interviews with multidisciplinary Malaysian construction experts will be conducted for data collection, guided by the research design and methodology of this study. The analysis of qualitative data from this process will be combined with criteria identified in the literature review in order to identify the CSFs. Finally, the CSFs of BIM implementation will be validated by Malaysian industrialists during a workshop. The validated CSFs can be used as a term of reference for both Malaysian practitioners and academics in measuring the level of BIM effectiveness in their organizations.

  10. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling

  11. On the origins of logarithmic number-to-position mapping.

    PubMed

    Dotan, Dror; Dehaene, Stanislas

    2016-11-01

    The number-to-position task, in which children and adults are asked to place numbers on a spatial number line, has become a classic measure of number comprehension. We present a detailed experimental and theoretical dissection of the processing stages that underlie this task. We used a continuous finger-tracking technique, which provides detailed information about the time course of processing stages. When adults map the position of 2-digit numbers onto a line, their final mapping is essentially linear, but intermediate finger locations show a transient logarithmic mapping. We identify the origins of this log effect: Small numbers are processed faster than large numbers, so the finger deviates toward the target position earlier for small numbers than for large numbers. When the trajectories are aligned on the finger deviation onset, the log effect disappears. The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making. We also report cases of logarithmic mapping, by children and by a brain-injured individual, that cannot be explained by faster responding to small numbers. We show that these findings are captured by an ideal-observer model of the number-to-position mapping task, comprising 3 distinct stages: a quantification stage, whose duration is influenced by both exact and approximate representations of numerical quantity; a Bayesian accumulation-of-evidence stage, leading to a decision about the target location; and a pointing stage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
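
    The transient logarithmic component can be quantified by fitting a mixture of linear and logarithmic mappings to the pointing responses, a common analysis of this task rather than the paper's full three-stage ideal-observer model. The synthetic responses and the 0-100 number line below are assumptions.

        # Fit the weight of a logarithmic component in number-to-position responses.
        import numpy as np
        from scipy.optimize import curve_fit

        N_MAX = 100.0

        def mixed_mapping(n, lam, scale=N_MAX):
            linear = n
            logarithmic = scale * np.log(n) / np.log(scale)
            return (1 - lam) * linear + lam * logarithmic

        rng = np.random.default_rng(0)
        numbers = rng.integers(2, 100, 200).astype(float)
        responses = mixed_mapping(numbers, lam=0.3) + rng.normal(0, 3, numbers.size)

        (lam_hat,), _ = curve_fit(mixed_mapping, numbers, responses,
                                  p0=[0.5], bounds=(0, 1))
        print(f"estimated logarithmic weight: {lam_hat:.2f}")   # ~0.3 here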

  12. Granular-flow rheology: Role of shear-rate number in transition regime

    USGS Publications Warehouse

    Chen, C.-L.; Ling, C.-H.

    1996-01-01

    This paper examines the rationale behind the semiempirical formulation of a generalized viscoplastic fluid (GVF) model in the light of the Reiner-Rivlin constitutive theory and viscoplastic theory, thereby identifying the parameters that control the rheology of granular flow. The shear-rate number (N) proves to be among the most significant parameters identified from the GVF model. As N → 0 and N → ∞, the GVF model reduces asymptotically to the theoretical stress versus shear-rate relations in the macroviscous and grain-inertia regimes, respectively, where the grain concentration (C) also plays a major role in the rheology of granular flow. Using available data from rotating-cylinder experiments on neutrally buoyant solid spheres dispersed in an interstitial fluid, the shear stress for granular flow in transition between the two regimes proves dependent on N and C in addition to some material constants, such as the coefficient of restitution. Because data from rotating-cylinder experiments are presently insufficient, the GVF model cannot yet predict how a granular flow behaves over the entire range of N; however, the analyzed data provide insight into the interrelation among the relevant dimensionless parameters.

  13. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background: Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives: To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods: In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results: Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions: Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198
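
    A toy version of the pipeline (impute missing daily maxima, then flag extreme-heat periods) is sketched below. The time-interpolation imputation, 95th-percentile threshold and two-consecutive-day rule are assumptions for a single monitor; the study compares temporal, spatial and spatio-temporal imputation across 43 monitors and recommends the spatio-temporal approach.

        # Impute missing daily maximum temperatures and flag extreme-heat periods.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        days = pd.date_range("2012-04-01", "2012-09-30", freq="D")    # warm season
        tmax = pd.Series(32 + 4 * np.sin(np.linspace(0, np.pi, len(days)))
                         + rng.normal(0, 1.5, len(days)), index=days)
        tmax.iloc[rng.choice(len(days), 15, replace=False)] = np.nan  # missing obs

        filled = tmax.interpolate(method="time")           # temporal imputation
        threshold = filled.quantile(0.95)
        hot = filled >= threshold
        runs = (hot != hot.shift()).cumsum()
        heat_waves = [grp.index for _, grp in hot.groupby(runs)
                      if grp.all() and len(grp) >= 2]       # >= 2 consecutive hot days
        print(f"threshold = {threshold:.1f} C, {len(heat_waves)} extreme-heat periods")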

  14. Stochastic mechanical model of vocal folds for producing jitter and for identifying pathologies through real voices.

    PubMed

    Cataldo, E; Soize, C

    2018-06-06

    Jitter, in voice production applications, is a random phenomenon characterized by the deviation of the glottal cycle length with respect to a mean value. Its study can help in identifying pathologies related to the vocal folds, according to the values obtained through the different ways of measuring it. This paper proposes a stochastic model, with three control parameters, to generate jitter based on a deterministic one-mass model of vocal-fold dynamics, and identifies the parameters of the stochastic model from experimentally obtained real voice signals. To solve the corresponding stochastic inverse problem, the cost function is based on the distance between the probability density functions of the random variables associated with the fundamental frequencies of the experimental and simulated voices, and on the distance between features extracted from the simulated and experimental voice signals used to calculate jitter. The results show that the proposed model is valid, and voice samples are synthesized using the identified parameters for normal and pathological cases. The strategy adopted is also novel, chiefly because a solution to the inverse problem was obtained. In addition to the use of three parameters to construct the jitter model, the paper discusses a parameter related to the bandwidth of the power spectral density function of the stochastic process, used to measure the quality of the generated signal. A study of the influence of all the main parameters is also performed. The identification of the model parameters for pathological cases is perhaps the most interesting of the novelties introduced by the paper. Copyright © 2018 Elsevier Ltd. All rights reserved.
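    A hedged sketch of the jitter measure itself, not of the authors' one-mass vocal-fold model or their inverse problem: glottal cycle lengths are perturbed around a mean value and the commonly used "local jitter" statistic is computed. The mean frequency and dispersion below are assumed values.

    ```python
    # Minimal sketch: perturb glottal cycle lengths and compute local jitter.
    import numpy as np

    rng = np.random.default_rng(1)
    f0 = 120.0                          # mean fundamental frequency in Hz (assumed)
    T0 = 1.0 / f0                       # mean glottal cycle length in s
    sigma = 0.01                        # relative cycle-length dispersion (assumed)

    periods = T0 * (1.0 + sigma * rng.standard_normal(500))

    # Local jitter: mean absolute difference of consecutive periods, normalised
    # by the mean period (often quoted as a percentage).
    jitter_local = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    print(f"local jitter = {100 * jitter_local:.2f} %")
    ```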

  15. Process-oriented modelling to identify main drivers of erosion-induced carbon fluxes

    NASA Astrophysics Data System (ADS)

    Wilken, Florian; Sommer, Michael; Van Oost, Kristof; Bens, Oliver; Fiener, Peter

    2017-05-01

    Coupled modelling of soil erosion, carbon redistribution, and turnover has received great attention over the last decades due to large uncertainties regarding erosion-induced carbon fluxes. For a process-oriented representation of event dynamics, coupled soil-carbon erosion models have been developed. However, there are currently few models that represent tillage erosion, preferential water erosion, and transport of different carbon fractions (e.g. mineral bound carbon, carbon encapsulated by soil aggregates). We couple a process-oriented multi-class sediment transport model with a carbon turnover model (MCST-C) to identify relevant redistribution processes for carbon dynamics. The model is applied for two arable catchments (3.7 and 7.8 ha) located in the Tertiary Hills about 40 km north of Munich, Germany. Our findings indicate the following: (i) redistribution by tillage has a large effect on erosion-induced vertical carbon fluxes and has a large carbon sequestration potential; (ii) water erosion has a minor effect on vertical fluxes, but episodic soil organic carbon (SOC) delivery controls the long-term erosion-induced carbon balance; (iii) delivered sediments are highly enriched in SOC compared to the parent soil, and sediment delivery is driven by event size and catchment connectivity; and (iv) soil aggregation enhances SOC deposition due to the transformation of highly mobile carbon-rich fine primary particles into rather immobile soil aggregates.

  16. Locating Fractions on a Number Line

    ERIC Educational Resources Information Center

    Wong, Monica

    2013-01-01

    Understanding fractions remains problematic for many students. The use of the number line aids in this understanding, but requires students to recognise that a fraction represents the distance from zero to a dot or arrow marked on a number line which is a linear scale. This article continues the discussion from "Identifying Fractions on a…

  17. Identifying and Evaluating the Relationships that Control a Land Surface Model's Hydrological Behavior

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Mahanama, Sarith P.

    2012-01-01

    The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs) arguably reflect a lot of guesswork given the lack of contemporaneous evaporation and soil moisture observations at the spatial scales represented by regional and global models. The inherent soil moisture-runoff relationships used in the LSMs are also of uncertain accuracy. Evaluating these relationships is difficult but crucial given that they have a major impact on how the land component contributes to hydrological and meteorological variability within the climate system. The relationships, it turns out, can be examined efficiently and effectively with a simple water balance model framework. The simple water balance model, driven with multi-decadal observations covering the conterminous United States, shows how different prescribed relationships lead to different manifestations of hydrological variability, some of which can be compared directly to observations. Through the testing of a wide suite of relationships, the simple model provides estimates for the underlying relationships that operate in nature and that should be operating in LSMs. We examine the relationships currently used in a number of different LSMs in the context of the simple water balance model results and make recommendations for potential first-order improvements to these LSMs.
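    To make the idea of "prescribed relationships" concrete, here is a hedged single-bucket water balance sketch in which the soil moisture-evaporation and soil moisture-runoff relationships are simple assumed functions of relative soil water; swapping in different functional forms changes the simulated variability, which is the kind of experiment the abstract describes. None of the forms, parameters, or forcings come from the paper.

    ```python
    # Hedged sketch of a single-bucket water balance with prescribed soil
    # moisture-evaporation and soil moisture-runoff relationships (illustrative only).
    import numpy as np

    def run_bucket(precip, pet, w_max=150.0, w0=75.0):
        """Daily bucket model: returns soil water, evaporation and runoff series."""
        w, W, E, R = w0, [], [], []
        for p, e_pot in zip(precip, pet):
            beta = w / w_max                 # evaporation efficiency vs. soil moisture (assumed linear)
            frac = (w / w_max) ** 2          # runoff fraction vs. soil moisture (assumed quadratic)
            evap = e_pot * beta
            runoff = p * frac
            w = np.clip(w + p - evap - runoff, 0.0, w_max)
            W.append(w); E.append(evap); R.append(runoff)
        return np.array(W), np.array(E), np.array(R)

    rng = np.random.default_rng(2)
    precip = rng.gamma(shape=0.3, scale=8.0, size=365)          # mm/day, synthetic forcing
    pet = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 365))    # mm/day, seasonal demand
    W, E, R = run_bucket(precip, pet)
    print(f"mean E = {E.mean():.2f} mm/day, mean R = {R.mean():.2f} mm/day")
    ```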

  18. A variable turbulent Prandtl and Schmidt number model study for scramjet applications

    NASA Astrophysics Data System (ADS)

    Keistler, Patrick

    A turbulence model that allows for the calculation of the variable turbulent Prandtl (Prt) and Schmidt (Sct) numbers as part of the solution is presented. The model also accounts for the interactions between turbulence and chemistry by modeling the corresponding terms. Four equations are added to the baseline k-zeta turbulence model: two equations for enthalpy variance and its dissipation rate to calculate the turbulent diffusivity, and two equations for the concentration variance and its dissipation rate to calculate the turbulent diffusion coefficient. The underlying turbulence model already accounts for compressibility effects. The variable Prt/Sct turbulence model is validated and tuned by simulating a wide variety of experiments. Included in the experiments are two-dimensional, axisymmetric, and three-dimensional mixing and combustion cases. The combustion cases involved either hydrogen and air, or hydrogen, ethylene, and air. Two chemical kinetic models are employed for each of these situations. For the hydrogen and air cases, a seven species/seven reaction model where the reaction rates are temperature dependent and a nine species/nineteen reaction model where the reaction rates are dependent on both pressure and temperature are used. For the cases involving ethylene, a 15 species/44 reaction reduced model that is both pressure and temperature dependent is used, along with a 22 species/18 global reaction reduced model that makes use of the quasi-steady-state approximation. In general, fair to good agreement is indicated for all simulated experiments. The turbulence/chemistry interaction terms are found to have a significant impact on flame location for the two-dimensional combustion case, with excellent experimental agreement when the terms are included. In most cases, the hydrogen chemical mechanisms behave nearly identically, but for one case, the pressure dependent model would not auto-ignite at the same conditions as the experiment and the other

  19. Use of artificial intelligence to identify cardiovascular compromise in a model of hemorrhagic shock.

    PubMed

    Glass, Todd F; Knapp, Jason; Amburn, Philip; Clay, Bruce A; Kabrisky, Matt; Rogers, Steven K; Garcia, Victor F

    2004-02-01

    To determine whether a prototype artificial intelligence system can identify volume of hemorrhage in a porcine model of controlled hemorrhagic shock. Prospective in vivo animal model of hemorrhagic shock. Research foundation animal surgical suite; computer laboratories of collaborating industry partner. Nineteen juvenile, 25- to 35-kg, male and female swine. Anesthetized animals were instrumented for arterial and systemic venous pressure monitoring and blood sampling, and a splenectomy was performed. Following a 1-hr stabilization period, animals were hemorrhaged in aliquots to 10, 20, 30, 35, 40, 45, and 50% of total blood volume with a 10-min recovery between each aliquot. Data were downloaded directly from a commercial monitoring system into a proprietary PC-based software package for analysis. Arterial and venous blood gas values, glucose, and cardiac output were collected at specified intervals. Electrocardiogram, electroencephalogram, mixed venous oxygen saturation, temperature (core and blood), mean arterial pressure, pulmonary artery pressure, central venous pressure, pulse oximetry, and end-tidal CO2 were continuously monitored and downloaded. Seventeen of 19 animals (89%) died as a direct result of hemorrhage. Stored data streams were analyzed by the prototype artificial intelligence system. For this project, the artificial intelligence system identified and compared three electrocardiographic features (R-R interval, QRS amplitude, and R-S interval) from each of nine unknown samples of the QRS complex. We found that the artificial intelligence system, trained on only three electrocardiographic features, identified hemorrhage volume with an average accuracy of 91% (95% confidence interval, 84-96%). These experiments demonstrate that an artificial intelligence system, based solely on the analysis of QRS amplitude, R-R interval, and R-S interval of an electrocardiogram, is able to accurately identify hemorrhage volume in a porcine model of lethal
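    The abstract names the three ECG features the system was trained on; as a hedged stand-in for the proprietary artificial-intelligence classifier, the sketch below trains a generic nearest-neighbour classifier on synthetic versions of those three features. The feature distributions, class structure, and classifier choice are all assumptions.

    ```python
    # Hedged sketch: generic classifier on three synthetic ECG-derived features,
    # standing in for the study's proprietary system.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    classes = np.repeat(np.arange(5), 40)            # 5 hemorrhage-volume classes (assumed)
    # Synthetic R-R interval, QRS amplitude, R-S interval that drift with class.
    X = np.column_stack([
        0.45 - 0.02 * classes + 0.010 * rng.standard_normal(classes.size),
        1.20 - 0.05 * classes + 0.050 * rng.standard_normal(classes.size),
        0.08 + 0.005 * classes + 0.003 * rng.standard_normal(classes.size),
    ])
    clf = KNeighborsClassifier(n_neighbors=5)
    scores = cross_val_score(clf, X, classes, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```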

  20. A critical comparison of several low Reynolds number k-epsilon turbulence models for flow over a backward facing step

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.

    1993-01-01

    Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.

  1. Principles for Public Funding of Workplace Learning. A Review To Identify Models of Workplace Learning & Funding Principles.

    ERIC Educational Resources Information Center

    Hawke, Geof; Mawer, Giselle; Connole, Helen; Solomon, Nicky

    Models of workplace learning and principles for funding workplace learning in Australia were identified through case studies and a literature review. A diverse array of workplace-based approaches to delivering nationally recognized qualifications were identified. The following were among the nine funding proposals formulated: (1) funding…

  2. An automated technique to identify potential inappropriate traditional Chinese medicine (TCM) prescriptions.

    PubMed

    Yang, Hsuan-Chia; Iqbal, Usman; Nguyen, Phung Anh; Lin, Shen-Hsien; Huang, Chih-Wei; Jian, Wen-Shan; Li, Yu-Chuan

    2016-04-01

    Medication errors such as potential inappropriate prescriptions would induce serious adverse drug events to patients. Information technology has the ability to prevent medication errors; however, the pharmacology of traditional Chinese medicine (TCM) is not as clear as in western medicine. The aim of this study was to apply the appropriateness of prescription (AOP) model to identify potential inappropriate TCM prescriptions. We used the association rule of mining techniques to analyze 14.5 million prescriptions from the Taiwan National Health Insurance Research Database. The disease and TCM (DTCM) and traditional Chinese medicine-traditional Chinese medicine (TCMM) associations are computed by their co-occurrence, and the associations' strength was measured as Q-values, which are often referred to as interestingness or lift values. By considering the number of Q-values, the AOP model was applied to identify the inappropriate prescriptions. Afterwards, three traditional Chinese physicians evaluated 1920 prescriptions and validated the detected outcomes from the AOP model. Out of 1920 prescriptions, 97.1% of positive predictive value and 19.5% of negative predictive value were shown by the system as compared with those by experts. The sensitivity analysis indicated that the negative predictive value could improve up to 27.5% when the model's threshold changed to 0.4. We successfully applied the AOP model to automatically identify potential inappropriate TCM prescriptions. This model could be a potential TCM clinical decision support system in order to improve drug safety and quality of care. Copyright © 2016 John Wiley & Sons, Ltd.
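    A hedged toy version of the co-occurrence idea: compute a lift-style association score for each disease-formula pair and flag observed pairs whose score falls below a threshold as candidates for review. The records, formula names, threshold, and the use of plain lift (rather than the study's Q-values) are all assumptions.

    ```python
    # Hedged sketch: lift-style co-occurrence scores for disease-formula pairs.
    from collections import Counter
    from itertools import product

    # Synthetic prescription records: one disease and a set of formulas per record.
    prescriptions = [
        {"disease": "common cold", "herbs": {"gui zhi tang", "ge gen tang"}},
        {"disease": "common cold", "herbs": {"gui zhi tang"}},
        {"disease": "insomnia",    "herbs": {"suan zao ren tang"}},
        {"disease": "insomnia",    "herbs": {"suan zao ren tang", "gui zhi tang"}},
    ]

    n = len(prescriptions)
    disease_count = Counter(p["disease"] for p in prescriptions)
    herb_count = Counter(h for p in prescriptions for h in p["herbs"])
    pair_count = Counter((p["disease"], h) for p in prescriptions for h in p["herbs"])

    def lift(disease, herb):
        """Observed co-occurrence relative to what independence would predict."""
        joint = pair_count[(disease, herb)] / n
        return joint / ((disease_count[disease] / n) * (herb_count[herb] / n))

    threshold = 1.0   # assumed cut-off: below 1, the pair co-occurs less than chance
    for d, h in product(disease_count, herb_count):
        if pair_count[(d, h)] == 0:
            continue
        score = lift(d, h)
        flag = "  <- candidate inappropriate" if score < threshold else ""
        print(f"{d:12s} + {h:18s}: lift = {score:.2f}{flag}")
    ```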

  3. An estimation of the number of cells in the human body.

    PubMed

    Bianconi, Eva; Piovesan, Allison; Facchin, Federica; Beraudi, Alina; Casadei, Raffaella; Frabetti, Flavia; Vitale, Lorenza; Pelleri, Maria Chiara; Tassani, Simone; Piva, Francesco; Perez-Amodio, Soledad; Strippoli, Pierluigi; Canaider, Silvia

    2013-01-01

    All living organisms are made of individual and identifiable cells, whose number, together with their size and type, ultimately defines the structure and functions of an organism. While the total cell number of lower organisms is often known, it has not yet been defined in higher organisms. In particular, the reported total cell number of a human being ranges between 10^12 and 10^16 and it is widely mentioned without a proper reference. To study and discuss the theoretical issue of the total number of cells that compose the standard human adult organism. A systematic calculation of the total cell number of the whole human body and of the single organs was carried out using bibliographical and/or mathematical approaches. A current estimation of human total cell number calculated for a variety of organs and cell types is presented. These partial data correspond to a total number of 3.72 × 10^13. Knowing the total cell number of the human body as well as of individual organs is important from a cultural, biological, medical and comparative modelling point of view. The presented cell count could be a starting point for a common effort to complete the total calculation.

  4. Trait Self-Control, Identified-Introjected Religiosity and Health-Related-Feelings in Healthy Muslims: A Structural Equation Model Analysis

    PubMed Central

    Briki, Walid; Chaouachi, Anis; Patrick, Thomas; Chamari, Karim

    2015-01-01

    Aim The present study attempted to test McCullough and Willoughby’s hypothesis that self-control mediates the relationships between religiosity and psychosocial outcomes. Specifically, this study examined whether trait self-control (TSC) mediates the relationship of identified-introjected religiosity with positive and negative health-related-feelings (HRF) in healthy Muslims. Methods Two hundred eleven French-speaking participants (116 females, 95 males; Mage = 28.15, SDage = 6.90) answered questionnaires. One hundred ninety participants were retained for the analyses because they reported to be healthy (105 females, 85 males; Mage = 27.72, SDage = 6.80). To examine the relationships between religiosity, TSC and HRF, two competing mediation models were tested using structural equation model analysis: While a starting model used TSC as mediator of the religiosity-HRF relationship, an alternative model used religiosity as mediator of the TSC-HRF relationship. Results The findings revealed that TSC mediated the relationship between identified religiosity and positive HRF, and that identified religiosity mediated the relationship between TSC and positive and negative HRF, thereby validating both models. Moreover, the comparison of both models showed that the starting model explained 13.211% of the variance (goodness of fit = 1.000), whereas the alternative model explained 6.877% of the variance (goodness of fit = 0.987). Conclusion These results show that the starting model is the most effective model to account for the relationships between religiosity, TSC, and HRF. Therefore, this study provides initial insights into how religiosity influences psychological health through TSC. Important practical implications for the religious education are suggested. PMID:25962179

  5. Trait self-control, identified-introjected religiosity and health-related-feelings in healthy muslims: a structural equation model analysis.

    PubMed

    Briki, Walid; Aloui, Asma; Bragazzi, Nicola Luigi; Chaouachi, Anis; Patrick, Thomas; Chamari, Karim

    2015-01-01

    The present study attempted to test McCullough and Willoughby's hypothesis that self-control mediates the relationships between religiosity and psychosocial outcomes. Specifically, this study examined whether trait self-control (TSC) mediates the relationship of identified-introjected religiosity with positive and negative health-related-feelings (HRF) in healthy Muslims. Two hundred eleven French-speaking participants (116 females, 95 males; Mage = 28.15, SDage = 6.90) answered questionnaires. One hundred ninety participants were retained for the analyses because they reported to be healthy (105 females, 85 males; Mage = 27.72, SDage = 6.80). To examine the relationships between religiosity, TSC and HRF, two competing mediation models were tested using structural equation model analysis: While a starting model used TSC as mediator of the religiosity-HRF relationship, an alternative model used religiosity as mediator of the TSC-HRF relationship. The findings revealed that TSC mediated the relationship between identified religiosity and positive HRF, and that identified religiosity mediated the relationship between TSC and positive and negative HRF, thereby validating both models. Moreover, the comparison of both models showed that the starting model explained 13.211% of the variance (goodness of fit = 1.000), whereas the alternative model explained 6.877% of the variance (goodness of fit = 0.987). These results show that the starting model is the most effective model to account for the relationships between religiosity, TSC, and HRF. Therefore, this study provides initial insights into how religiosity influences psychological health through TSC. Important practical implications for the religious education are suggested.

  6. Identifiability and estimation of multiple transmission pathways in cholera and waterborne disease.

    PubMed

    Eisenberg, Marisa C; Robertson, Suzanne L; Tien, Joseph H

    2013-05-07

    Cholera and many waterborne diseases exhibit multiple characteristic timescales or pathways of infection, which can be modeled as direct and indirect transmission. A major public health issue for waterborne diseases involves understanding the modes of transmission in order to improve control and prevention strategies. An important epidemiological question is: given data for an outbreak, can we determine the role and relative importance of direct vs. environmental/waterborne routes of transmission? We examine whether parameters for a differential equation model of waterborne disease transmission dynamics can be identified, both in the ideal setting of noise-free data (structural identifiability) and in the more realistic setting in the presence of noise (practical identifiability). We used a differential algebra approach together with several numerical approaches, with a particular emphasis on identifiability of the transmission rates. To examine these issues in a practical public health context, we apply the model to a recent cholera outbreak in Angola (2006). Our results show that the model parameters, including both water and person-to-person transmission routes, are globally structurally identifiable, although they become unidentifiable when the environmental transmission timescale is fast. Even for water dynamics within the identifiable range, when noisy data are considered, only a combination of the water transmission parameters can practically be estimated. This makes the waterborne transmission parameters difficult to estimate, leading to inaccurate estimates of important epidemiological parameters such as the basic reproduction number (R0). However, measurements of pathogen persistence time in environmental water sources or measurements of pathogen concentration in the water can improve model identifiability and allow for more accurate estimation of waterborne transmission pathway parameters as well as R0. Parameter estimates for the Angola outbreak suggest
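    For readers unfamiliar with this class of model, the sketch below simulates a commonly used SIWR-type formulation with separate direct (person-to-person) and waterborne transmission terms. It is a generic illustration, not the paper's exact system, and all parameter values are assumed.

    ```python
    # Hedged sketch of a generic SIWR-type model with direct (b_i) and
    # waterborne (b_w) transmission; parameters are illustrative.
    from scipy.integrate import solve_ivp

    b_i, b_w, gamma, xi = 0.25, 0.35, 0.2, 0.1   # per-day rates (assumed)

    def siwr(t, y):
        s, i, w, r = y
        infection = b_i * s * i + b_w * s * w
        return [-infection,
                infection - gamma * i,
                xi * (i - w),            # pathogen shedding into / decay from water (scaled)
                gamma * i]

    sol = solve_ivp(siwr, (0, 200), [0.999, 0.001, 0.0, 0.0])
    print(f"final epidemic size: {sol.y[3, -1]:.3f}")

    # Basic reproduction number for this particular scaling (both routes contribute).
    r0 = (b_i + b_w) / gamma
    print(f"R0 = {r0:.2f}")
    ```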

  7. Identifying Candidate Chemical-Disease Linkages ...

    EPA Pesticide Factsheets

    Presentation at meeting on Environmental and Epigenetic Determinants of IBD in New York, NY on identifying candidate chemical-disease linkages by using AOPs to identify molecular initiating events and using relevant high throughput assays to screen for candidate chemicals. This hazard information is combined with exposure models to inform risk assessment.

  8. Number development and developmental dyscalculia.

    PubMed

    von Aster, Michael G; Shalev, Ruth S

    2007-11-01

    There is a growing consensus that the neuropsychological underpinnings of developmental dyscalculia (DD) are a genetically determined disorder of 'number sense', a term denoting the ability to represent and manipulate numerical magnitude nonverbally on an internal number line. However, this spatially-oriented number line develops during elementary school and requires additional cognitive components including working memory and number symbolization (language). Thus, there may be children with familial-genetic DD with deficits limited to number sense and others with DD and comorbidities such as language delay, dyslexia, or attention-deficit-hyperactivity disorder. This duality is supported by epidemiological data indicating that two-thirds of children with DD have comorbid conditions while one-third have pure DD. Clinically, they differ according to their profile of arithmetic difficulties. fMRI studies indicate that parietal areas (important for number functions), and frontal regions (dominant for executive working memory and attention functions), are under-activated in children with DD. A four-step developmental model that allows prediction of different pathways for DD is presented. The core-system representation of numerical magnitude (cardinality; step 1) provides the meaning of 'number', a precondition to acquiring linguistic (step 2), and Arabic (step 3) number symbols, while a growing working memory enables neuroplastic development of an expanding mental number line during school years (step 4). Therapeutic and educational interventions can be drawn from this model.

  9. Understanding decimal numbers: a foundation for correct calculations.

    PubMed

    Pierce, Robyn U; Steinle, Vicki A; Stacey, Kaye C; Widjaja, Wanty

    2008-01-01

    This paper reports on the effectiveness of an intervention designed to improve nursing students' conceptual understanding of decimal numbers. Results of recent intervention studies have indicated some success at improving nursing students' numeracy through practice in applying procedural rules for calculation and working in real or simulated practical contexts. However, in this work we identified a fundamental problem: a significant minority of students had an inadequate understanding of decimal numbers. The intervention aimed to improve nursing students' basic understanding of the size of decimal numbers, so that, firstly, calculation rules are more meaningful, and secondly, students can interpret decimal numbers (whether digital output or results of calculations) sensibly. A well-researched, time-efficient diagnostic instrument was used to identify individuals with an inadequate understanding of decimal numbers. We describe a remedial intervention that resulted in significant improvement on a delayed post-intervention test. We conclude that nurse educators should consider diagnosing and, as necessary, plan for remediation of students' foundational understanding of decimal numbers before teaching procedural rules.

  10. Fuzzy model to estimate the number of hospitalizations for asthma and pneumonia under the effects of air pollution

    PubMed Central

    Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha

    2017-01-01

    ABSTRACT OBJECTIVE Predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. METHODS This is a computational model using fuzzy logic based on Mamdani’s inference method. For the fuzzification of the input variables of particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two relevancy functions for each variable with the linguistic approach: good and bad. For the output variable number of hospitalizations for asthma and pneumonia, we considered five relevancy functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007 and the result provided by the model was correlated with the actual data of hospitalization with lag from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant and in those lags. RESULTS In the year of 2007, 1,710 hospitalizations by pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output data showed positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide in lag 0 and 2 and for particulate matter in lag 1. CONCLUSIONS Fuzzy modeling proved accurate for the pollutant exposure effects and hospitalization for pneumonia and asthma approach. PMID:28658366
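    A heavily simplified, hedged Mamdani-style sketch of the inference idea (two inputs, two rules, centroid defuzzification) rather than the paper's four-input, five-output-category model; the membership shapes, ranges, and rule base are illustrative assumptions.

    ```python
    # Hedged sketch: minimal Mamdani-style fuzzy inference with centroid defuzzification.
    import numpy as np

    def ramp_up(x, a, b):
        """Membership that is 0 below a, 1 above b, linear in between."""
        return float(np.clip((x - a) / (b - a), 0.0, 1.0))

    def tri(y, a, b, c):
        """Triangular membership function over an array y (a < b < c)."""
        return np.maximum(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0)

    def predict_admissions(pm10, ozone):
        # Input memberships (assumed ranges, ug/m3): how "bad" is the air today?
        pm_bad, o3_bad = ramp_up(pm10, 25, 60), ramp_up(ozone, 40, 100)
        pm_good, o3_good = 1 - pm_bad, 1 - o3_bad

        # Output universe: daily admissions, with "low" and "high" fuzzy sets.
        y = np.linspace(0, 15, 301)
        low, high = tri(y, 0, 2, 6), tri(y, 4, 10, 15)

        # Rule 1: good air -> low admissions; Rule 2: bad air -> high admissions.
        w_low, w_high = min(pm_good, o3_good), max(pm_bad, o3_bad)
        aggregated = np.maximum(np.minimum(low, w_low), np.minimum(high, w_high))
        return np.sum(y * aggregated) / np.sum(aggregated)    # centroid defuzzification

    print(f"predicted admissions (clean day): {predict_admissions(15, 30):.1f}")
    print(f"predicted admissions (polluted day): {predict_admissions(55, 95):.1f}")
    ```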

  11. Developmental Changes in the Profiles of Dyscalculia: An Explanation Based on a Double Exact-and-Approximate Number Representation Model

    PubMed Central

    Noël, Marie-Pascale; Rousselle, Laurence

    2011-01-01

    Studies on developmental dyscalculia (DD) have tried to identify a basic numerical deficit that could account for this specific learning disability. The first proposition was that the number magnitude representation of these children was impaired. However, Rousselle and Noël (2007) brought data showing that this was not the case but rather that these children were impaired when processing the magnitude of symbolic numbers only. Since then, incongruent results have been published. In this paper, we will propose a developmental perspective on this issue. We will argue that the first deficit shown in DD regards the building of an exact representation of numerical value, thanks to the learning of symbolic numbers, and that the reduced acuity of the approximate number magnitude system appears only later and is secondary to the first deficit. PMID:22203797

  12. Developmental Changes in the Profiles of Dyscalculia: An Explanation Based on a Double Exact-and-Approximate Number Representation Model.

    PubMed

    Noël, Marie-Pascale; Rousselle, Laurence

    2011-01-01

    Studies on developmental dyscalculia (DD) have tried to identify a basic numerical deficit that could account for this specific learning disability. The first proposition was that the number magnitude representation of these children was impaired. However, Rousselle and Noël (2007) brought data showing that this was not the case but rather that these children were impaired when processing the magnitude of symbolic numbers only. Since then, incongruent results have been published. In this paper, we will propose a developmental perspective on this issue. We will argue that the first deficit shown in DD regards the building of an exact representation of numerical value, thanks to the learning of symbolic numbers, and that the reduced acuity of the approximate number magnitude system appears only later and is secondary to the first deficit.

  13. Cortical areas involved in Arabic number reading.

    PubMed

    Roux, F-E; Lubrano, V; Lauwers-Cances, V; Giussani, C; Démonet, J-F

    2008-01-15

    Distinct functional pathways for processing words and numbers have been hypothesized from the observation of dissociated impairments of these categories in brain-damaged patients. We aimed to identify the cortical areas involved in the Arabic number reading process in patients operated on for various brain lesions. Direct cortical electrostimulation was prospectively used in 60 brain mappings. We used object naming and two reading tasks: alphabetic script (sentences and number words) and Arabic number reading. Cortical areas involved in Arabic number reading were identified according to location, type of interference, and distinctness from areas associated with other language tasks. Arabic number reading was sustained by small cortical areas, often extremely well localized (<1 cm^2). Of the 259 language sites detected, 43 (17%) were exclusively involved in Arabic number reading (no sentence or word number reading interference detected in these sites). Specific Arabic number reading interferences were mainly found in three regions: the Broca area (Brodmann area 45), the anterior part of the dominant supramarginal gyrus (Brodmann area 40; p < 0.0001), and the temporal-basal area (Brodmann area 37; p < 0.05). Diverse types of interferences were observed (reading arrest, phonemic or semantic paraphasia). Error patterns were fairly similar across temporal, parietal, and frontal stimulation sites, except for phonemic paraphasias, which were found only in the supramarginal gyrus. Our findings strongly support the fact that the acquisition through education of specific symbolic entities, such as Arabic numbers, could result in the segregation and the specialization of anatomically distinct brain areas.

  14. Hyperquarks and generation number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchmann, Alfons J.; Schmid, Michael L.

    2005-03-01

    In a model in which quarks and leptons are built up from two spin-1/2 preons as fundamental entities, a new class of fermionic bound states (hyperquarks) arises. It turns out that these hyperquarks are necessary to fulfill the 't Hooft anomaly constraint, which then links the number of fermionic generations to the number of colors and hypercolors.

  15. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    PubMed Central

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people’s lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents was predicted based on an autoregressive moving average (ARMA) model for April 2012. The number of accidents displayed a seasonal trend. The prediction of the number of accidents in the city during April of 2012 indicated that a total of 4,459 accidents would occur with a mean of 149 accidents per day during these three months. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
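    A hedged sketch of the forecasting step described above: fit an ARMA-type model to a daily accident-count series and forecast ahead. The series is synthetic, and the (1, 0, 1) order is an assumption, since the paper's model orders are not reported here.

    ```python
    # Hedged sketch: ARMA-type fit and 30-day forecast on a synthetic daily count series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    idx = pd.date_range("2010-01-01", "2011-12-31", freq="D")
    seasonal = 25 * np.sin(2 * np.pi * idx.dayofyear / 365.25)      # seasonal trend
    counts = 185 + seasonal + rng.normal(0, 30, idx.size)           # synthetic daily accidents

    model = ARIMA(pd.Series(counts, index=idx), order=(1, 0, 1)).fit()
    forecast = model.forecast(steps=30)          # next 30 days
    print(forecast.head())
    ```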

  16. Verification of Numerical Programs: From Real Numbers to Floating Point Numbers

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec

    2013-01-01

    Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.

  17. Experimentally Identify the Effective Plume Chimney over a Natural Draft Chimney Model

    NASA Astrophysics Data System (ADS)

    Rahman, M. M.; Chu, C. M.; Tahir, A. M.; Ismail, M. A. bin; Misran, M. S. bin; Ling, L. S.

    2017-07-01

    Energy demand is rising due to rapid industrialization and urbanization, and researchers are working to improve industrial performance so that energy consumption can be reduced significantly. The performance of industries such as power plants, timber processing plants, and oil refineries depends largely on the performance of the cooling tower chimney, whether natural draft or forced draft. A chimney is used to create sufficient draft so that air can flow through it. Cold inflow, or flow reversal at the chimney exit, is one of the main identified problems that can degrade overall plant performance. The presence of an Effective Plume Chimney (EPC) is an indication of cold-inflow-free operation of a natural draft chimney. Different mathematical model equations are used to estimate the EPC height above the heat exchanger or hot surface. This paper aims to identify the EPC experimentally. To do so, horizontal temperature profiling is performed at the exit of chimneys with face areas of 0.56 m2, 1.00 m2, and 2.25 m2. A wire mesh screen is installed at the chimney exit to ensure cold-inflow-free operation. An EPC is found in all modified chimney models, with heights varying from 1 cm to 9 cm, whereas the mathematical models estimate EPC heights of 1 cm to 2.3 cm. A smoke test is also conducted to verify the existence of the EPC and cold-inflow-free operation of the chimney, and its results confirm both. The performance of the cold-inflow-free chimney is 50% to 90% higher than that of a normal chimney.

  18. Reconsidering the safety in numbers effect for vulnerable road users: an application of agent-based modeling.

    PubMed

    Thompson, Jason; Savino, Giovanni; Stevenson, Mark

    2015-01-01

    Increasing levels of active transport provide benefits in relation to chronic disease and emissions reduction but may be associated with an increased risk of road trauma. The safety in numbers (SiN) effect is often regarded as a solution to this issue; however, the mechanisms underlying its influence are largely unknown. We aimed to (1) replicate the SiN effect within a simple, simulated environment and (2) vary bicycle density within the environment to better understand the circumstances under which SiN applies. Using an agent-based modeling approach, we constructed a virtual transport system that increased the number of bicycles from 9% to 35% of total vehicles over a period of 1,000 time units while holding the number of cars in the system constant. We then repeated this experiment under conditions of progressively decreasing bicycle density. We demonstrated that the SiN effect can be reproduced in a virtual environment, closely approximating the exponential relationships between cycling numbers and the relative risk of collision as shown in observational studies. The association, however, was highly contingent upon bicycle density. The relative risk of collisions between cars and bicycles with increasing bicycle numbers showed an association that is progressively linear at decreasing levels of density. Agent-based modeling may provide a useful tool for understanding the mechanisms underpinning the relationships previously observed between volume and risk under the assumptions of SiN. The SiN effect may apply only under circumstances in which bicycle density also increases over time. Additional mechanisms underpinning the SiN effect, independent of behavioral adjustment by drivers, are explored.

  19. 78 FR 913 - IRS Truncated Taxpayer Identification Numbers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-07

    ... taxpayer identification number, a TTIN. As an alternative to using a social security number (SSN), IRS... concerns about the risk of identity theft stemming from the inclusion of a taxpayer identifying number on a...) authorizes the Secretary to prescribe regulations with respect to the inclusion in returns, statements, or...

  20. 33 CFR 181.23 - Hull identification numbers required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Hull identification numbers... identification numbers required. (a) A manufacturer must identify each boat produced or imported with primary and secondary hull identification numbers permanently affixed in accordance with § 181.29 of this subpart. (b) A...

  1. 33 CFR 181.23 - Hull identification numbers required.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Hull identification numbers... identification numbers required. (a) A manufacturer must identify each boat produced or imported with primary and secondary hull identification numbers permanently affixed in accordance with § 181.29 of this subpart. (b) A...

  2. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
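    As a hedged illustration of averaging a variance-based index over models and scenarios, the sketch below uses a crude binned estimate of the first-order index Var(E[Y|x_i])/Var(Y), two toy stand-in "models", two "scenarios", and equal weights; none of the functions, parameters, or weights come from the study.

    ```python
    # Hedged sketch: first-order variance-based sensitivity indices averaged over
    # alternative models and scenarios (all functions and weights are toy stand-ins).
    import numpy as np

    rng = np.random.default_rng(5)

    def first_order_index(x, y, bins=20):
        """Crude binned estimate of S_i = Var(E[Y|x_i]) / Var(Y)."""
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        which = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        return np.var(cond_means) / np.var(y)

    # Two toy "models" of a reduction function and two "scenarios" (temperature factor).
    models = {"linear": lambda k, m, t: k * m * t,
              "square": lambda k, m, t: k * m**2 * t}
    scenarios = {"cool": 0.5, "warm": 1.5}
    weights = {(mod, sc): 0.25 for mod in models for sc in scenarios}   # equal weights (assumed)

    n = 20_000
    k = rng.uniform(0.1, 1.0, n)        # rate parameter sample
    m = rng.uniform(0.2, 1.0, n)        # moisture reduction parameter sample

    avg = {"k": 0.0, "m": 0.0}
    for (mod, sc), w in weights.items():
        y = models[mod](k, m, scenarios[sc])
        avg["k"] += w * first_order_index(k, y)
        avg["m"] += w * first_order_index(m, y)

    for name, s in avg.items():
        print(f"S_{name} (averaged over models and scenarios) = {s:.3f}")
    ```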

  3. Symbolic Number Comparison Is Not Processed by the Analog Number System: Different Symbolic and Non-symbolic Numerical Distance and Size Effects

    PubMed Central

    Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia

    2018-01-01

    HIGHLIGHTS: We test whether symbolic number comparison is handled by an analog noisy system. The analog system model has systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS) working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect a ratio-based performance which is the sign of the ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer similar predictions to the ANS prediction; therefore, former evidence usually utilizing only the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by another representation. PMID:29491845
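    The ratio-based prediction mentioned above can be made concrete with the standard ANS psychophysics formulation in which internal representations are Gaussians whose widths scale with magnitude; the Weber fraction below is an assumed value, and this is a generic formulation rather than the authors' fitted model.

    ```python
    # Hedged sketch: generic ANS prediction that comparison error rate depends only
    # on the ratio of the two numbers, given a Weber fraction w.
    import numpy as np
    from scipy.special import erfc

    def ans_error_rate(n1, n2, w=0.15):
        """Predicted error rate for deciding which of n1, n2 is larger."""
        return 0.5 * erfc(abs(n1 - n2) / (np.sqrt(2) * w * np.sqrt(n1**2 + n2**2)))

    # Same ratio (1.25) -> same predicted error rate, regardless of absolute size.
    print(f"{ans_error_rate(8, 10):.3f}  {ans_error_rate(40, 50):.3f}")
    # Larger ratio -> easier comparison.
    print(f"{ans_error_rate(8, 16):.3f}")
    ```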

  4. Identifying Likely Disk-hosting M dwarfs with Disk Detective

    NASA Astrophysics Data System (ADS)

    Silverberg, Steven; Wisniewski, John; Kuchner, Marc J.; Disk Detective Collaboration

    2018-01-01

    M dwarfs are critical targets for exoplanet searches. Debris disks often provide key information as to the formation and evolution of planetary systems around higher-mass stars, alongside the planets themselves. However, fewer than 300 M dwarf debris disks are known, despite M dwarfs making up 70% of the local neighborhood. The Disk Detective citizen science project has identified over 6000 new potential disk host stars from the AllWISE catalog over the past three years. Here, we present preliminary results of our search for new disk-hosting M dwarfs in the survey. Based on near-infrared color cuts and fitting stellar models to photometry, we have identified over 500 potential new M dwarf disk hosts, nearly doubling the known number of such systems. In this talk, we present our methodology, and outline our ongoing work to confirm systems as M dwarf disks.

  5. Neurobehavioral Mutants Identified in an ENU Mutagenesis Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Melloni N.; Dunning, Jonathan P; Wiley, Ronald G

    2007-01-01

    We report on a behavioral screening test battery that successfully identified several neurobehavioral mutants among a large-scale ENU-mutagenized mouse population. Large numbers of ENU mutagenized mice were screened for abnormalities in central nervous system function based on abnormal performance in a series of behavior tasks. We developed and employed a high-throughput screen of behavioral tasks to detect behavioral outliers. Twelve mutant pedigrees, representing a broad range of behavioral phenotypes, have been identified. Specifically, we have identified two open field mutants (one displaying hyper-locomotion, the other hypo-locomotion), four tail suspension mutants (all displaying increased immobility), one nociception mutant (displaying abnormal responsiveness to thermal pain), two prepulse inhibition mutants (displaying poor inhibition of the startle response), one anxiety-related mutant (displaying decreased anxiety in the light/dark test), and one learning and memory mutant (displaying reduced response to the conditioned stimulus). These findings highlight the utility of a set of behavioral tasks used in a high throughput screen to identify neurobehavioral mutants. Further analysis (i.e., behavioral and genetic mapping studies) of mutants is in progress with the ultimate goal of identification of novel genes and mouse models relevant to human disorders as well as the identification of novel therapeutic targets.

  6. Identifiability of conservative linear mechanical systems. [applied to large flexible spacecraft structures

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1985-01-01

    With a sufficiently great number of sensors and actuators, any finite dimensional dynamic system is identifiable on the basis of input-output data. It is presently indicated that, for conservative nongyroscopic linear mechanical systems, the number of sensors and actuators required for identifiability is very large, where 'identifiability' is understood as a unique determination of the mass and stiffness matrices. The required number of sensors and actuators drops by a factor of two, given a relaxation of the identifiability criterion so that identification can fail only if the system parameters being identified lie in a set of measure zero. When the mass matrix is known a priori, this additional information does not significantly affect the requirements for guaranteed identifiability, though the number of parameters to be determined is reduced by a factor of two.

  7. Random parameter models of interstate crash frequencies by severity, number of vehicles involved, collision and location type.

    PubMed

    Venkataraman, Narayan; Ulfarsson, Gudmundur F; Shankar, Venky N

    2013-10-01

    A nine-year (1999-2007) continuous panel of crash histories on interstates in Washington State, USA, was used to estimate random parameter negative binomial (RPNB) models for various aggregations of crashes. A total of 21 different models were assessed in terms of four ways to aggregate crashes, by: (a) severity, (b) number of vehicles involved, (c) crash type, and by (d) location characteristics. The models within these aggregations include specifications for all severities (property damage only, possible injury, evident injury, disabling injury, and fatality), number of vehicles involved (one-vehicle to five-or-more-vehicle), crash type (sideswipe, same direction, overturn, head-on, fixed object, rear-end, and other), and location types (urban interchange, rural interchange, urban non-interchange, rural non-interchange). A total of 1153 directional road segments comprising of the seven Washington State interstates were analyzed, yielding statistical models of crash frequency based on 10,377 observations. These results suggest that in general there was a significant improvement in log-likelihood when using RPNB compared to a fixed parameter negative binomial baseline model. Heterogeneity effects are most noticeable for lighting type, road curvature, and traffic volume (ADT). Median lighting or right-side lighting are linked to increased crash frequencies in many models for more than half of the road segments compared to both-sides lighting. Both-sides lighting thereby appears to generally lead to a safety improvement. Traffic volume has a random parameter but the effect is always toward increasing crash frequencies as expected. However that the effect is random shows that the effect of traffic volume on crash frequency is complex and varies by road segment. The number of lanes has a random parameter effect only in the interchange type models. The results show that road segment-specific insights into crash frequency occurrence can lead to improved design policy and

  8. Using an autologistic regression model to identify spatial risk factors and spatial risk patterns of hand, foot and mouth disease (HFMD) in Mainland China

    PubMed Central

    2014-01-01

    Background There have been large-scale outbreaks of hand, foot and mouth disease (HFMD) in Mainland China over the last decade. These events varied greatly across the country. It is necessary to identify the spatial risk factors and spatial distribution patterns of HFMD for public health control and prevention. Climate risk factors associated with HFMD occurrence have been recognized. However, few studies have discussed the socio-economic determinants of HFMD risk at a spatial scale. Methods HFMD records in Mainland China in May 2008 were collected. Both climate and socio-economic factors were selected as potential risk exposures of HFMD. Odds ratio (OR) was used to identify the spatial risk factors. A spatial autologistic regression model was employed to get OR values of each exposure and model the spatial distribution patterns of HFMD risk. Results Results showed that both climate and socio-economic variables were spatial risk factors for HFMD transmission in Mainland China. The statistically significant risk factors are monthly average precipitation (OR = 1.4354), monthly average temperature (OR = 1.379), monthly average wind speed (OR = 1.186), the number of industrial enterprises above designated size (OR = 17.699), the population density (OR = 1.953), and the proportion of student population (OR = 1.286). The spatial autologistic regression model has a good goodness of fit (ROC = 0.817) and prediction accuracy (Correct ratio = 78.45%) of HFMD occurrence. The autologistic regression model also reduces the contribution of the residual term in the ordinary logistic regression model significantly, from 17.25 to 1.25 for the odds ratio. Based on the prediction results of the spatial model, we obtained a map of the probability of HFMD occurrence that shows the spatial distribution pattern and local epidemic risk over Mainland China. Conclusions The autologistic regression model was used to identify spatial risk factors and model spatial risk patterns of HFMD. HFMD
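    A hedged sketch of the autologistic idea: ordinary logistic regression augmented with a spatial autocovariate (here, the fraction of neighbouring units reporting cases), which is what distinguishes the autologistic model from plain logistic regression. The data, adjacency structure, and covariates below are synthetic stand-ins, not the study's records.

    ```python
    # Hedged sketch: logistic regression with a spatial autocovariate on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 200
    temp = rng.normal(0, 1, n)                       # standardised climate covariate
    pop_density = rng.normal(0, 1, n)                # standardised socio-economic covariate

    # Toy adjacency: each unit's neighbours are the two units next to it in index order.
    neighbours = [((i - 1) % n, (i + 1) % n) for i in range(n)]

    # Generate presence/absence with covariate effects, for illustration only.
    latent = 0.8 * temp + 0.6 * pop_density + rng.normal(0, 1, n)
    occurrence = (latent > 0).astype(int)

    # Autocovariate: fraction of neighbouring units with reported occurrence.
    autocov = np.array([occurrence[list(nb)].mean() for nb in neighbours])
    X = sm.add_constant(np.column_stack([temp, pop_density, autocov]))
    fit = sm.Logit(occurrence, X).fit(disp=0)
    print(fit.params)            # last coefficient is the spatial autocovariate term
    ```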

  9. Enhanced model for determining the number of graphene layers and their distribution from X-ray diffraction data

    PubMed Central

    Ademi, Abdulakim; Grozdanov, Anita; Paunović, Perica; Dimitrov, Aleksandar T

    2015-01-01

    Summary A model consisting of an equation that includes graphene thickness distribution is used to calculate theoretical 002 X-ray diffraction (XRD) peak intensities. An analysis was performed upon graphene samples produced by two different electrochemical procedures: electrolysis in aqueous electrolyte and electrolysis in molten salts, both using a nonstationary current regime. Herein, the model is enhanced by a partitioning of the corresponding 2θ interval, resulting in significantly improved accuracy of the results. The model curves obtained exhibit excellent fitting to the XRD intensities curves of the studied graphene samples. The employed equation parameters make it possible to calculate the j-layer graphene region coverage of the graphene samples, and hence the number of graphene layers. The results of the thorough analysis are in agreement with the calculated number of graphene layers from Raman spectra C-peak position values and indicate that the graphene samples studied are few-layered. PMID:26665083

  10. Mental Imagery, Impact, and Affect: A Mediation Model for Charitable Giving

    PubMed Central

    Dickert, Stephan; Kleber, Janet; Västfjäll, Daniel; Slovic, Paul

    2016-01-01

    One of the puzzling phenomena in philanthropy is that people can show strong compassion for identified individual victims but remain unmoved by catastrophes that affect large numbers of victims. Two prominent findings in research on charitable giving reflect this idiosyncrasy: The (1) identified victim and (2) victim number effects. The first of these suggests that identifying victims increases donations and the second refers to the finding that people’s willingness to donate often decreases as the number of victims increases. While these effects have been documented in the literature, their underlying psychological processes need further study. We propose a model in which identified victim and victim number effects operate through different cognitive and affective mechanisms. In two experiments we present empirical evidence for such a model and show that different affective motivations (donor-focused vs. victim-focused feelings) are related to the cognitive processes of impact judgments and mental imagery. Moreover, we argue that different mediation pathways exist for identifiability and victim number effects. PMID:26859848

  11. Mental Imagery, Impact, and Affect: A Mediation Model for Charitable Giving.

    PubMed

    Dickert, Stephan; Kleber, Janet; Västfjäll, Daniel; Slovic, Paul

    2016-01-01

    One of the puzzling phenomena in philanthropy is that people can show strong compassion for identified individual victims but remain unmoved by catastrophes that affect large numbers of victims. Two prominent findings in research on charitable giving reflect this idiosyncrasy: The (1) identified victim and (2) victim number effects. The first of these suggests that identifying victims increases donations and the second refers to the finding that people's willingness to donate often decreases as the number of victims increases. While these effects have been documented in the literature, their underlying psychological processes need further study. We propose a model in which identified victim and victim number effects operate through different cognitive and affective mechanisms. In two experiments we present empirical evidence for such a model and show that different affective motivations (donor-focused vs. victim-focused feelings) are related to the cognitive processes of impact judgments and mental imagery. Moreover, we argue that different mediation pathways exist for identifiability and victim number effects.

  12. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  13. Magnetic field amplification by small-scale dynamo action: dependence on turbulence models and Reynolds and Prandtl numbers.

    PubMed

    Schober, Jennifer; Schleicher, Dominik; Federrath, Christoph; Klessen, Ralf; Banerjee, Robi

    2012-02-01

    The small-scale dynamo is a process by which turbulent kinetic energy is converted into magnetic energy, and thus it is expected to depend crucially on the nature of the turbulence. In this paper, we present a model for the small-scale dynamo that takes into account the slope of the turbulent velocity spectrum v(ℓ) ∝ ℓ^ϑ, where ℓ and v(ℓ) are the size of a turbulent fluctuation and the typical velocity on that scale. The time evolution of the fluctuation component of the magnetic field, i.e., the small-scale field, is described by the Kazantsev equation. We solve this linear differential equation for its eigenvalues with the quantum-mechanical WKB approximation. The validity of this method is estimated as a function of the magnetic Prandtl number Pm. We calculate the minimal magnetic Reynolds number for dynamo action, Rm_crit, using our model of the turbulent velocity correlation function. For Kolmogorov turbulence (ϑ = 1/3), we find that the critical magnetic Reynolds number is Rm_crit ≈ 110, and for Burgers turbulence (ϑ = 1/2) Rm_crit ≈ 2700. Furthermore, we derive that the growth rate of the small-scale magnetic field for a general type of turbulence is Γ ∝ Re^((1-ϑ)/(1+ϑ)) in the limit of infinite magnetic Prandtl number. For decreasing magnetic Prandtl number (down to Pm ≳ 10), the growth rate of the small-scale dynamo decreases. The details of this drop depend on the WKB approximation, which becomes invalid for a magnetic Prandtl number of about unity.
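    The growth-rate scaling quoted in this abstract can be made concrete with a short numerical sketch. The snippet below only evaluates the reported relation Γ ∝ Re^((1-ϑ)/(1+ϑ)) for the Kolmogorov and Burgers slopes; the proportionality constant is not given in the abstract and is set to 1 here, so the values are relative only.

    ```python
    # Growth-rate scaling of the small-scale dynamo, Gamma ~ Re**((1 - theta)/(1 + theta)),
    # as quoted in the abstract for the limit of infinite magnetic Prandtl number.
    # The proportionality constant is unknown here and set to 1 (illustration only).

    def growth_rate_scaling(reynolds: float, theta: float) -> float:
        """Relative small-scale dynamo growth rate for a velocity spectrum v(l) ~ l**theta."""
        exponent = (1.0 - theta) / (1.0 + theta)
        return reynolds ** exponent

    for label, theta in [("Kolmogorov (theta = 1/3)", 1.0 / 3.0),
                         ("Burgers (theta = 1/2)", 0.5)]:
        print(label, growth_rate_scaling(1e6, theta))
    ```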

  14. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    NASA Astrophysics Data System (ADS)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to

  15. Stratospheric Intrusion Catalog: A 10-year Compilation of Events Identified By Using an Objective Feature Tracking Model With NASA's MERRA-2 Reanalysis

    NASA Astrophysics Data System (ADS)

    Knowland, K. E.; Ott, L. E.; Duncan, B. N.; Wargan, K.; Hodges, K.

    2017-12-01

    Stratospheric intrusions - the introduction of ozone-rich stratospheric air into the troposphere - have been linked with surface ozone air quality exceedances, especially at high elevations in the western USA in springtime. However, the impact of stratospheric intrusions in the remaining seasons and over the rest of the USA is less clear. A new approach to the study of stratospheric intrusions uses NASA's Goddard Earth Observing System (GEOS) model and assimilation products with an objective feature tracking algorithm to investigate the atmospheric dynamics that generate stratospheric intrusions and the different mechanisms through which stratospheric intrusions may influence tropospheric chemistry and surface air quality seasonally over both the western and the eastern USA. A catalog of stratospheric intrusions identified in the MERRA-2 reanalysis was produced for the period 2005-2014 and validated against surface ozone observations (focusing on those which exceed the national air quality standard) and a recent data set of stratospheric intrusion-influenced air quality exceedance flags from the US Environmental Protection Agency (EPA). Considering that not all ozone exceedances have been flagged by the EPA, a catalog of stratospheric intrusions can help air quality agencies identify the impact of stratospheric air on surface ozone more rapidly, and demonstrates that future operational analyses may aid in forecasting such events. An analysis of the spatiotemporal variability of stratospheric intrusions over the continental US was performed, and while the spring over the western USA does exhibit the largest number of stratospheric intrusions affecting the lower troposphere, the number of intrusions in the remaining seasons and over the eastern USA is sizable. By focusing on the major modes of variability that influence weather in the USA, such as the Pacific North American (PNA) teleconnection index, predictive meteorological patterns

  16. A self-sustaining process model of inertial layer dynamics in high Reynolds number turbulent wall flows.

    PubMed

    Chini, G P; Montemuro, B; White, C M; Klewicki, J

    2017-03-13

    Field observations and laboratory experiments suggest that at high Reynolds numbers Re the outer region of turbulent boundary layers self-organizes into quasi-uniform momentum zones (UMZs) separated by internal shear layers termed 'vortical fissures' (VFs). Motivated by this emergent structure, a conceptual model is proposed with dynamical components that collectively have the potential to generate a self-sustaining interaction between a single VF and adjacent UMZs. A large-Re asymptotic analysis of the governing incompressible Navier-Stokes equation is performed to derive reduced equation sets for the streamwise-averaged and streamwise-fluctuating flow within the VF and UMZs. The simplified equations reveal the dominant physics within, and isolate possible coupling mechanisms among, these different regions of the flow. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  17. To identify or not to identify parathyroid glands during total thyroidectomy.

    PubMed

    Chang, Yuk Kwan; Lang, Brian H H

    2017-12-01

    Hypoparathyroidism is one of the most common complications after total thyroidectomy and may impose a significant burden on both the patient and the clinician. The extent of thyroid resection, surgical techniques, concomitant central neck dissection, parathyroid gland (PG) autotransplantation and inadvertent parathyroidectomy have long been some of the risk factors for postoperative hypoparathyroidism. Although routine identification of PGs has traditionally been advocated by surgeons, recent evidence has suggested that perhaps identifying fewer in situ PGs during surgery (i.e., selective identification) may further lower the risk of hypoparathyroidism. One explanation is that visual identification may often lead to subtle damage to the nearby blood supply of the in situ PGs, which may increase the risk of hypoparathyroidism. However, it is worth highlighting that the current literature supporting either approach (i.e., routine vs. selective) remains scarce, and because of the significant differences in study design, inclusion criteria, definitions and management protocols between studies, a pooled analysis of this important but controversial topic remains an impossible task. Furthermore, it is worth noting that identification of PGs does not equal safe preservation, as some studies have demonstrated that it is not the number of PGs identified, but the number of PGs preserved in situ that matters. Therefore, a non-invasive, objective and reliable way to localize PGs and assess their viability intra-operatively is warranted. In this aspect, modern technology such as indocyanine green (ICG) as a near-infrared fluorescent dye for real-time in situ PG perfusion monitoring may have a potential role in the future.

  18. Integrative genomics identifies molecular alterations that challenge the linear model of melanoma progression.

    PubMed

    Rose, Amy E; Poliseno, Laura; Wang, Jinhua; Clark, Michael; Pearlman, Alexander; Wang, Guimin; Vega Y Saenz de Miera, Eleazar C; Medicherla, Ratna; Christos, Paul J; Shapiro, Richard; Pavlick, Anna; Darvishian, Farbod; Zavadil, Jiri; Polsky, David; Hernando, Eva; Ostrer, Harry; Osman, Iman

    2011-04-01

    Superficial spreading melanoma (SSM) and nodular melanoma (NM) are believed to represent sequential phases of linear progression from radial to vertical growth. Several lines of clinical, pathologic, and epidemiologic evidence suggest, however, that SSM and NM might be the result of independent pathways of tumor development. We utilized an integrative genomic approach that combines single nucleotide polymorphism array (6.0; Affymetrix) with gene expression array (U133A 2.0; Affymetrix) to examine molecular differences between SSM and NM. Pathway analysis of the most differentially expressed genes between SSM and NM (N = 114) revealed significant differences related to metabolic processes. We identified 8 genes (DIS3, FGFR1OP, G3BP2, GALNT7, MTAP, SEC23IP, USO1, and ZNF668) in which NM/SSM-specific copy number alterations correlated with differential gene expression (P < 0.05; Spearman's rank). SSM-specific genomic deletions in G3BP2, MTAP, and SEC23IP were independently verified in two external data sets. Forced overexpression of metabolism-related gene MTAP (methylthioadenosine phosphorylase) in SSM resulted in reduced cell growth. The differential expression of another metabolic-related gene, aldehyde dehydrogenase 7A1 (ALDH7A1), was validated at the protein level by using tissue microarrays of human melanoma. In addition, we show that the decreased ALDH7A1 expression in SSM may be the result of epigenetic modifications. Our data reveal recurrent genomic deletions in SSM not present in NM, which challenge the linear model of melanoma progression. Furthermore, our data suggest a role for altered regulation of metabolism-related genes as a possible cause of the different clinical behavior of SSM and NM.

  19. Integrative genomics identifies molecular alterations that challenge the linear model of melanoma progression

    PubMed Central

    Rose, Amy E.; Poliseno, Laura; Wang, Jinhua; Clark, Michael; Pearlman, Alexander; Wang, Guimin; Vega y Saenz de Miera, Eleazar C.; Medicherla, Ratna; Christos, Paul J.; Shapiro, Richard; Pavlick, Anna; Darvishian, Farbod; Zavadil, Jiri; Polsky, David; Hernando, Eva; Ostrer, Harry; Osman, Iman

    2011-01-01

    Superficial spreading melanoma (SSM) and nodular melanoma (NM) are believed to represent sequential phases of linear progression from radial to vertical growth. Several lines of clinical, pathological and epidemiologic evidence suggest, however, that SSM and NM might be the result of independent pathways of tumor development. We utilized an integrative genomic approach that combines single nucleotide polymorphism array (SNP 6.0, Affymetrix) with gene expression array (U133A 2.0, Affymetrix) to examine molecular differences between SSM and NM. Pathway analysis of the most differentially expressed genes between SSM and NM (N=114) revealed significant differences related to metabolic processes. We identified 8 genes (DIS3, FGFR1OP, G3BP2, GALNT7, MTAP, SEC23IP, USO1, ZNF668) in which NM/SSM-specific copy number alterations correlated with differential gene expression (P<0.05, Spearman’s rank). SSM-specific genomic deletions in G3BP2, MTAP, and SEC23IP were independently verified in two external data sets. Forced overexpression of metabolism-related gene methylthioadenosine phosphorylase (MTAP) in SSM resulted in reduced cell growth. The differential expression of another metabolic related gene, aldehyde dehydrogenase 7A1 (ALDH7A1), was validated at the protein level using tissue microarrays of human melanoma. In addition, we show that the decreased ALDH7A1 expression in SSM may be the result of epigenetic modifications. Our data reveal recurrent genomic deletions in SSM not present in NM, which challenge the linear model of melanoma progression. Furthermore, our data suggest a role for altered regulation of metabolism-related genes as a possible cause of the different clinical behavior of SSM and NM. PMID:21343389

  20. Effects of sample size, number of markers, and allelic richness on the detection of spatial genetic pattern

    USGS Publications Warehouse

    Landguth, Erin L.; Gedy, Bradley C.; Oyler-McCance, Sara J.; Garey, Andrew L.; Emel, Sarah L.; Mumma, Matthew; Wagner, Helene H.; Fortin, Marie-Josée; Cushman, Samuel A.

    2012-01-01

    The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals.
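    The causal-modelling step described above rests on partial Mantel tests between distance matrices. The following sketch shows one common way such a test can be implemented (residual correlation plus matrix permutation); it is an illustration under standard assumptions, not the authors' exact procedure or permutation scheme.

    ```python
    import numpy as np

    def _offdiag(d):
        """Vectorize the upper triangle (off-diagonal part) of a square distance matrix."""
        i, j = np.triu_indices(d.shape[0], k=1)
        return d[i, j]

    def _residuals(y, x):
        """Residuals of a simple linear regression of y on x."""
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    def partial_mantel(gen_d, land_d, covar_d, n_perm=999, seed=None):
        """Partial Mantel r between genetic and landscape distances, controlling for a
        third distance matrix (e.g. geographic distance).  Significance is assessed by
        permuting rows/columns of the genetic distance matrix."""
        rng = np.random.default_rng(seed)
        a, b, c = _offdiag(gen_d), _offdiag(land_d), _offdiag(covar_d)
        res_b = _residuals(b, c)
        r_obs = np.corrcoef(_residuals(a, c), res_b)[0, 1]
        n, exceed = gen_d.shape[0], 0
        for _ in range(n_perm):
            p = rng.permutation(n)
            a_perm = _offdiag(gen_d[np.ix_(p, p)])
            if np.corrcoef(_residuals(a_perm, c), res_b)[0, 1] >= r_obs:
                exceed += 1
        return r_obs, (exceed + 1) / (n_perm + 1)
    ```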

  1. Effects of sample size, number of markers, and allelic richness on the detection of spatial genetic pattern

    USGS Publications Warehouse

    Landguth, E.L.; Fedy, B.C.; Oyler-McCance, S.J.; Garey, A.L.; Emel, S.L.; Mumma, M.; Wagner, H.H.; Fortin, M.-J.; Cushman, S.A.

    2012-01-01

    The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals. © 2011 Blackwell Publishing Ltd.

  2. Identifying Multiple Populations in M71 using CN

    NASA Astrophysics Data System (ADS)

    Gerber, Jeffrey M.; Friel, Eileen D.; Vesperini, Enrico

    2018-01-01

    It is now well established that globular clusters (GCs) host multiple stellar populations characterized by differences in several light elements. While these populations have been found in nearly all GCs, we still lack an entirely successful model to explain their formation. A key constraint to these models is the detailed pattern of light element abundances seen among the populations; different techniques for identifying these populations probe different elements and do not always yield the same results. We study a large sample of stars in the GC M71 for light elements C and N, using the CN and CH band strength to identify multiple populations. Our measurements come from low-resolution spectroscopy obtained with the WIYN-3.5m telescope for ~150 stars from the tip of the red-giant branch down to the main-sequence turn-off. The large number of stars and broad spatial coverage of our sample (out to ~3.5 half-light radii) allows us to carry out a comprehensive characterization of the multiple populations in M71. We use a combination of the various spectroscopic and photometric indicators to draw a more complete picture of the properties of the populations and to investigate the consistency of classifications using different techniques.

  3. Cold spray nozzle mach number limitation

    NASA Astrophysics Data System (ADS)

    Jodoin, B.

    2002-12-01

    The classic one-dimensional isentropic flow approach is used along with a two-dimensional axisymmetric numerical model to show that the exit Mach number of a cold spray nozzle should be limited due to two factors. To show this, the two-dimensional model is validated with experimental data. Although both models show that the stagnation temperature is an important limiting factor, the one-dimensional approach fails to show how important the shock-particle interactions are at limiting the nozzle Mach number. It is concluded that for an air nozzle spraying solid powder particles, the nozzle Mach number should be set between 1.5 and 3 to limit the negative effects of the high stagnation temperature and of the shock-particle interactions.
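    The "classic one-dimensional isentropic flow approach" mentioned here can be illustrated with the textbook relations linking Mach number, stagnation temperature, and exit velocity. The snippet below is only that textbook baseline (with an assumed stagnation temperature of 800 K for air); it does not reproduce the paper's two-dimensional model or its shock-particle interaction analysis.

    ```python
    import math

    GAMMA = 1.4     # ratio of specific heats for air
    R_AIR = 287.0   # specific gas constant for air, J/(kg K)

    def isentropic_exit_state(mach: float, t_stag: float):
        """Static temperature and gas velocity at the nozzle exit from the classic
        one-dimensional isentropic relations (illustration, not the authors' 2-D model)."""
        t_static = t_stag / (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)
        velocity = mach * math.sqrt(GAMMA * R_AIR * t_static)
        return t_static, velocity

    # Exit conditions over the Mach number range discussed in the abstract (assumed T0 = 800 K).
    for m in (1.5, 2.0, 2.5, 3.0):
        print(m, isentropic_exit_state(m, t_stag=800.0))
    ```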

  4. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
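    The likelihood for image-based measurements with log-normally distributed noise can be written down compactly. The sketch below shows one standard formulation, log(observed) ~ Normal(log(predicted), sigma^2); the authors' exact parameterization and the PDE-constrained optimization machinery are not reproduced here.

    ```python
    import numpy as np

    def lognormal_loglik(observed, predicted, sigma):
        """Log-likelihood of image intensities under multiplicative (log-normal) noise:
        log(observed) ~ Normal(log(predicted), sigma**2).  A sketch of the kind of
        likelihood described in the abstract, not the authors' exact formulation."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        z = (np.log(observed) - np.log(predicted)) / sigma
        # log of the log-normal density, summed over all pixels/observations
        return np.sum(-0.5 * z ** 2 - np.log(sigma * observed * np.sqrt(2.0 * np.pi)))
    ```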

  5. Representational Flexibility and Problem-Solving Ability in Fraction and Decimal Number Addition: A Structural Model

    ERIC Educational Resources Information Center

    Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti

    2016-01-01

    The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…

  6. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    PubMed Central

    Lin, Chien-Hsing; Li, Ling-Hui; Ho, Sheng-Feng; Chuang, Tzu-Po; Wu, Jer-Yuarn; Chen, Yuan-Tsong; Fann, Cathy SJ

    2008-01-01

    Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb) and covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations. PMID:19108714

  7. Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.

    PubMed

    Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian

    2015-10-01

    Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 < 6 and mm < cm; incompatible: 3 cm_6 mm with 3 < 6 but cm > mm) as well as string length congruity (congruent: 1 m_2 km with m < km and 2 < 3 characters; incongruent: 2 mm_1 m with mm < m, but 3 > 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.

  8. Droplet Depinning on Inclined Surfaces at High Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    White, Edward; Singh, Natasha; Lee, Sungyon

    2017-11-01

    Contact angle hysteresis enables a sessile liquid drop to adhere to a solid surface when the surface is inclined, the drop is exposed to gas-phase flow, or the drop is exposed to both forcing modalities. Previous work by Schmucker and White (2012.DFD.M4.6) identified critical depinning Weber numbers for water drops subject to gravity- and wind-dominated forcing. This work extends the Schmucker and White data and finds the critical depinning Weber number obeys a two-slope linear model. Under pure wind forcing at Reynolds numbers above 1500 and with zero surface inclination, Wecrit = 8.0. For non-zero inclinations, α, Wecrit decreases proportionally to A Bo sinα, where A is the drop aspect ratio and Bo is its Bond number. The same relationship holds for α < 0 when gravity resists depinning by wind. Above We ≈ 4, depinning is dominated by wind forcing; at We < 4, depinning is gravity dominated. While Wecrit depends linearly on A Bo sinα in both forcing regimes, the slopes of the limit lines depend on the forcing regime. The difference is attributed to different drop shapes and contact angle distributions that arise depending on whether wind or gravity dominates the depinning behavior. Supported by the National Science Foundation through Grant CBET-1605947.
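    The linear dependence of the critical Weber number on A·Bo·sin α reported here translates directly into a small helper function. In the sketch below the slope constant is a hypothetical placeholder, since the abstract reports the form of the relation (and that the slope differs between the wind- and gravity-dominated regimes) but not the fitted coefficients.

    ```python
    import math

    # Linear dependence of the critical depinning Weber number on A*Bo*sin(alpha),
    # as described in the abstract: We_crit = 8.0 at zero inclination, decreasing
    # proportionally to A*Bo*sin(alpha).  K_SLOPE is a hypothetical placeholder;
    # the abstract notes the slope differs between wind- and gravity-dominated regimes.

    K_SLOPE = 1.0  # hypothetical proportionality constant (not given in the abstract)

    def critical_weber(aspect_ratio: float, bond_number: float, alpha_deg: float) -> float:
        """Estimate We_crit for a sessile drop on a surface inclined by alpha_deg."""
        return 8.0 - K_SLOPE * aspect_ratio * bond_number * math.sin(math.radians(alpha_deg))
    ```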

  9. A numerical model for the simulation of low Mach number gas-liquid flows

    NASA Astrophysics Data System (ADS)

    Daru, V.; Duluc, M.-C.; Le Quéré, P.; Juric, D.

    2010-03-01

    This work is devoted to the numerical simulation of gas-liquid flows. The liquid phase is considered incompressible, while the gas phase is treated as compressible in the low Mach number approach. We present a model and a numerical method aimed at the computation of such two-phase flows. The numerical model uses a Lagrangian front-tracking method to deal with the interface. After the model is validated against a 1-D reference solution, results for the 2-D case are presented. Two air bubbles are enclosed in a rigid cavity and surrounded with liquid water. As the initial pressure of the two bubbles is set to different values, an oscillatory motion is induced in which the bubbles undergo alternate compression and dilatation associated with alternate internal heating and cooling. This oscillatory motion cannot be sustained and a damping is finally observed. It is shown in the present work that the thermal conductivity of the liquid has a significant effect on both the frequency and the damping time scale of the oscillations.

  10. Neurobiological constraints on behavioral models of motivation.

    PubMed

    Nader, K; Bechara, A; van der Kooy, D

    1997-01-01

    The application of neurobiological tools to behavioral questions has produced a number of working models of the mechanisms mediating the rewarding and aversive properties of stimuli. The authors review and compare three models that differ in the nature and number of the processes identified. The dopamine hypothesis, a single system model, posits that the neurotransmitter dopamine plays a fundamental role in mediating the rewarding properties of all classes of stimuli. In contrast, both nondeprived/deprived and saliency attribution models claim that separate systems make independent contributions to reward. The former identifies the psychological boundary defined by the two systems as being between states of nondeprivation (e.g. food sated) and deprivation (e.g. hunger). The latter identifies a boundary between liking and wanting systems. Neurobiological dissociations provide tests of and explanatory power for behavioral theories of goal-directed behavior.

  11. Analysis of Perioperative Chemotherapy in Resected Pancreatic Cancer: Identifying the Number and Sequence of Chemotherapy Cycles Needed to Optimize Survival.

    PubMed

    Epelboym, Irene; Zenati, Mazen S; Hamad, Ahmad; Steve, Jennifer; Lee, Kenneth K; Bahary, Nathan; Hogg, Melissa E; Zeh, Herbert J; Zureikat, Amer H

    2017-09-01

    Receipt of 6 cycles of adjuvant chemotherapy (AC) is standard of care in pancreatic cancer (PC). Neoadjuvant chemotherapy (NAC) is increasingly utilized; however, the optimal number of cycles needed alone or in combination with AC remains unknown. We sought to determine the optimal number and sequence of perioperative chemotherapy cycles in PC. We performed a single-institution review of all resected PCs from 2008 to 2015. The impact of the cumulative number of chemotherapy cycles received (0, 1-5, and ≥6 cycles) and their sequence (NAC, AC, or NAC + AC) on overall survival was evaluated with Cox proportional hazards modeling, using 6 cycles of AC as reference. A total of 522 patients were analyzed. Based on sample size distribution, four combinations were evaluated: 0 cycles = 12.1%, 1-5 cycles of combined NAC + AC = 29%, 6 cycles of AC = 25%, and ≥6 cycles of combined NAC + AC = 34%, with corresponding survival of 13.1, 18.5, 37, and 36.8 months. On MVA (P < 0.0001), tumor stage [hazard ratio (HR) 1.35], LNR (HR 4.3), and R1 margins (HR 1.77) were associated with increased hazard of death. Compared with 6 cycles of AC, receipt of 0 cycles [HR 3.57, confidence interval (CI) 2.47-5.18] or 1-5 cycles in any combination (HR 2.37, CI 1.73-3.23) was associated with increased hazard of death, whereas receipt of ≥6 cycles in any sequence was associated with optimal and comparable survival (HR 1.07, CI 0.78-1.47). Receipt of 6 or more perioperative cycles of chemotherapy, either as combined neoadjuvant and adjuvant or as adjuvant alone, may be associated with optimal and comparable survival in resected PC.

  12. Identifying Developmental Zones in Maize Lateral Root Cell Length Profiles using Multiple Change-Point Models

    PubMed Central

    Moreno-Ortega, Beatriz; Fort, Guillaume; Muller, Bertrand; Guédon, Yann

    2017-01-01

    The identification of the limits between the cell division, elongation and mature zones in the root apex is still a matter of controversy when methods based on cellular features, molecular markers or kinematics are compared while methods based on cell length profiles have been comparatively underexplored. Segmentation models were developed to identify developmental zones within a root apex on the basis of epidermal cell length profiles. Heteroscedastic piecewise linear models were estimated for maize lateral roots of various lengths of both wild type and two mutants affected in auxin signaling (rtcs and rum-1). The outputs of these individual root analyses combined with morphological features (first root hair position and root diameter) were then globally analyzed using principal component analysis. Three zones corresponding to the division zone, the elongation zone and the mature zone were identified in most lateral roots while division zone and sometimes elongation zone were missing in arrested roots. Our results are consistent with an auxin-dependent coordination between cell flux, cell elongation and cell differentiation. The proposed segmentation models could extend our knowledge of developmental regulations in longitudinally organized plant organs such as roots, monocot leaves or internodes. PMID:29123533
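    Segmenting a cell-length profile with a multiple change-point model can be illustrated with a brute-force two-change-point fit. The sketch below minimizes the residual sum of squares of three independent linear segments; it omits the heteroscedastic error model and the model-selection step used in the study, so it is an illustration only.

    ```python
    import numpy as np

    def piecewise_sse(x, y, breaks):
        """Sum of squared errors of independent linear fits on the segments defined by `breaks`."""
        sse, edges = 0.0, [0] + list(breaks) + [len(x)]
        for lo, hi in zip(edges[:-1], edges[1:]):
            if hi - lo < 3:                       # require a few cells per zone
                return np.inf
            coef = np.polyfit(x[lo:hi], y[lo:hi], 1)
            sse += np.sum((y[lo:hi] - np.polyval(coef, x[lo:hi])) ** 2)
        return sse

    def two_changepoint_fit(position, cell_length):
        """Exhaustive search for the two change points separating the division,
        elongation and mature zones in an epidermal cell-length profile."""
        x = np.asarray(position, dtype=float)
        y = np.asarray(cell_length, dtype=float)
        n, best = len(x), (np.inf, None)
        for i in range(3, n - 6):
            for j in range(i + 3, n - 3):
                sse = piecewise_sse(x, y, (i, j))
                if sse < best[0]:
                    best = (sse, (i, j))
        return best[1]
    ```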

  13. Increased mast cell numbers in a calcaneal tendon overuse model.

    PubMed

    Pingel, J; Wienecke, J; Kongsgaard, M; Behzad, H; Abraham, T; Langberg, H; Scott, A

    2013-12-01

    Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar rats were divided in two groups: running group (n = 12); sedentary control group (n = 12). The running group was exposed to the HIUR exercise protocol for 7 weeks. The calcaneal tendons of both hind limbs were dissected. The right tendon was used for histologic analysis using the Bonar score, immunohistochemistry, and second harmonic generation microscopy (SHGM). The left tendon was used for quantitative polymerase chain reaction (qPCR) analysis. An increased tendon cell density was observed in the runners compared to the controls (P = 0.05). Further, the intensity of immunostaining of protein kinase B (P = 0.03; 2.75 ± 0.54 vs 1.17 ± 0.53) was increased in the runners. The Bonar score (P = 0.05) and the number of mast cells (P = 0.02) were significantly higher in the runners compared to the controls. Furthermore, SHGM showed focal collagen disorganization in the runners, and reduced collagen density (P = 0.03). IL-3 mRNA levels were correlated with mast cell number in sedentary animals. The qPCR analysis showed no significant differences between the groups in the other analyzed targets. The current study demonstrates that 7 weeks of HIUR causes structural changes in the calcaneal tendon, and further that these changes are associated with an increased mast cell density. © 2013 The Authors. Scand J Med Sci Sports published by John Wiley & Sons Ltd.

  14. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models.

    PubMed

    Sae-Lim, Panya; Komen, Hans; Kause, Antti; Mulder, Han A

    2014-02-26

    Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Day*Degree and photoperiod were identified as environmental

  15. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models

    PubMed Central

    2014-01-01

    Background Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Methods Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. Results The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Conclusions Day*Degree and

  16. Blocking probability in the hose-model optical VPN with different number of wavelengths

    NASA Astrophysics Data System (ADS)

    Roslyakov, Alexander V.

    2017-04-01

    Connection setup with guaranteed quality of service (QoS) in the optical virtual private network (OVPN) is a major goal for network providers. To support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism, OVPN connections can also be created or deleted according to the availability of wavelengths in the optical path. In this paper we consider the impact of the number of wavelengths on the computation of blocking probability. The goal of the work is to dynamically provide the best OVPN connection under frequent arrivals of connection requests with QoS requirements.
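    For readers who want a first-order feel for how blocking probability depends on the number of wavelengths, the classical Erlang B formula for loss systems with a fixed number of channels is a common baseline. The sketch below uses that formula with an assumed offered load; it is not the hose-model computation developed in the paper.

    ```python
    def erlang_b(offered_load: float, n_channels: int) -> float:
        """Erlang B blocking probability for `offered_load` (in Erlangs) served by
        `n_channels` wavelengths, computed with the standard stable recursion."""
        b = 1.0
        for k in range(1, n_channels + 1):
            b = (offered_load * b) / (k + offered_load * b)
        return b

    # Blocking probability falls quickly as more wavelengths become available
    # (offered load of 10 Erlangs is an assumption for illustration).
    for w in (4, 8, 16, 32):
        print(w, erlang_b(offered_load=10.0, n_channels=w))
    ```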

  17. Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model

    NASA Astrophysics Data System (ADS)

    Manderla, M.; Kiniger, K.; Koutnik, J.

    2014-03-01

    Water hammer phenomena are important issues for high-head hydro power plants. Especially if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. Predicting and covering all relevant load cases is challenging and difficult with classical simulation models. On the basis of a recent pump-storage project, dynamic measurements motivate an improved modeling approach that makes use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and significantly increases the correlation between measurement and simulation results. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.

  18. Identifying Successful Advancement Approaches in Four Catholic Universities: The Effectiveness of the Four Advancement Models of Communication

    ERIC Educational Resources Information Center

    Bonglia, Jean-Pierre K.

    2010-01-01

    The current longitudinal study of the most successful Catholic universities in the United States identifies the prevalence of four advancement models of communication that have contributed to make those institutions successful in their philanthropic efforts. While research by Grunig and Kelly maintained that the two-way symmetrical model of…

  19. Innate or Acquired? - Disentangling Number Sense and Early Number Competencies.

    PubMed

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain unclear. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. An as yet unresolved question concerns the debate over whether a putative innate number sense is required for successful arithmetic achievement, as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as the underlying cause of DD.

  20. Identifying unusual performance in Australian and New Zealand intensive care units from 2000 to 2010.

    PubMed

    Solomon, Patricia J; Kasza, Jessica; Moran, John L

    2014-04-22

    The Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) collects voluntary data on patient admissions to Australian and New Zealand intensive care units (ICUs). This paper presents an in-depth statistical analysis of risk-adjusted mortality of ICU admissions from 2000 to 2010 for the purpose of identifying ICUs with unusual performance. A cohort of 523,462 patients from 144 ICUs was analysed. For each ICU, the natural logarithm of the standardised mortality ratio (log-SMR) was estimated from a risk-adjusted, three-level hierarchical model. This is the first time a three-level model has been fitted to such a large ICU database anywhere. The analysis was conducted in three stages which included the estimation of a null distribution to describe usual ICU performance. Log-SMRs with appropriate estimates of standard errors are presented in a funnel plot using 5% false discovery rate thresholds. False coverage-statement rate confidence intervals are also presented. The observed numbers of deaths for ICUs identified as unusual are compared to the predicted true worst numbers of deaths under the model for usual ICU performance. Seven ICUs were identified as performing unusually over the period 2000 to 2010, in particular, demonstrating high risk-adjusted mortality compared to the majority of ICUs. Four of the seven were ICUs in private hospitals. Our three-stage approach to the analysis detected outlying ICUs which were not identified in a conventional (single) risk-adjusted model for mortality using SMRs to compare ICUs. We also observed a significant linear decline in mortality over the decade. Distinct yearly and weekly respiratory seasonal effects were observed across regions of Australia and New Zealand for the first time. The statistical approach proposed in this paper is intended to be used for the review of observed ICU and hospital mortality. Two important messages from our study are firstly, that comprehensive risk
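    The basic funnel-plot quantities behind this kind of analysis are easy to state. The sketch below computes a naive log-SMR and approximate control limits under a Poisson null; the study itself uses a three-level hierarchical model with false-discovery-rate thresholds, which this illustration does not reproduce.

    ```python
    import numpy as np

    def log_smr_funnel(observed, expected, z=1.96):
        """Naive log standardised mortality ratio and approximate funnel limits per unit.
        A schematic sketch only; the paper's three-level hierarchical model and
        false-discovery-rate thresholds are not reproduced here."""
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(expected, dtype=float)
        log_smr = np.log(observed / expected)
        se_null = 1.0 / np.sqrt(expected)   # approx. SD of log-SMR under the Poisson null
        return log_smr, -z * se_null, z * se_null

    # Illustrative observed vs. risk-adjusted expected deaths for three hypothetical ICUs.
    print(log_smr_funnel([120, 95, 210], [100.0, 100.0, 180.0]))
    ```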

  1. Identifying unusual performance in Australian and New Zealand intensive care units from 2000 to 2010

    PubMed Central

    2014-01-01

    Background The Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) collects voluntary data on patient admissions to Australian and New Zealand intensive care units (ICUs). This paper presents an in-depth statistical analysis of risk-adjusted mortality of ICU admissions from 2000 to 2010 for the purpose of identifying ICUs with unusual performance. Methods A cohort of 523,462 patients from 144 ICUs was analysed. For each ICU, the natural logarithm of the standardised mortality ratio (log-SMR) was estimated from a risk-adjusted, three-level hierarchical model. This is the first time a three-level model has been fitted to such a large ICU database anywhere. The analysis was conducted in three stages which included the estimation of a null distribution to describe usual ICU performance. Log-SMRs with appropriate estimates of standard errors are presented in a funnel plot using 5% false discovery rate thresholds. False coverage-statement rate confidence intervals are also presented. The observed numbers of deaths for ICUs identified as unusual are compared to the predicted true worst numbers of deaths under the model for usual ICU performance. Results Seven ICUs were identified as performing unusually over the period 2000 to 2010, in particular, demonstrating high risk-adjusted mortality compared to the majority of ICUs. Four of the seven were ICUs in private hospitals. Our three-stage approach to the analysis detected outlying ICUs which were not identified in a conventional (single) risk-adjusted model for mortality using SMRs to compare ICUs. We also observed a significant linear decline in mortality over the decade. Distinct yearly and weekly respiratory seasonal effects were observed across regions of Australia and New Zealand for the first time. Conclusions The statistical approach proposed in this paper is intended to be used for the review of observed ICU and hospital mortality. Two important messages from our study are

  2. Model and Scenario Variations in Predicted Number of Generations of Spodoptera litura Fab. on Peanut during Future Climate Change Scenario

    PubMed Central

    Srinivasa Rao, Mathukumalli; Swathi, Pettem; Rama Rao, Chitiprolu Anantha; Rao, K. V.; Raju, B. M. K.; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka

    2015-01-01

    The present study features the estimation of the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of daily maximum (T.max) and minimum (T.min) air temperatures from six models viz., BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, from three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four different climate periods viz., Baseline-1975, Near future (NF)-2020, Distant future (DF)-2050 and Very Distant future (VDF)-2080. It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1–2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 & CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18–22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and reduction of generation time across the six peanut growing locations of India suggest that the incidence of S. litura may increase due to the projected increase in temperatures in future climate change periods. PMID:25671564
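    The growing-degree-days approach used here can be summarized in a few lines. In the sketch below the base temperature and the degree-day requirement per generation are hypothetical placeholders, since the abstract does not give the thermal constants used for S. litura.

    ```python
    def generations_from_degree_days(t_max, t_min, t_base=10.0, gdd_per_generation=500.0):
        """Number of generations over a season from daily max/min temperatures using the
        growing-degree-days approach.  The base temperature and the degree-day requirement
        per generation are hypothetical placeholders, not the study's actual constants."""
        gdd = 0.0
        for hi, lo in zip(t_max, t_min):
            gdd += max(0.0, (hi + lo) / 2.0 - t_base)   # daily heat accumulation above the base
        return gdd / gdd_per_generation

    # e.g. a 120-day season with constant 32/22 degree-C days (illustrative values only)
    print(generations_from_degree_days([32.0] * 120, [22.0] * 120))
    ```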

  3. 41 CFR 101-30.101-3 - National stock number.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true National stock number....1-General § 101-30.101-3 National stock number. The national stock number (NSN) is the identifying number assigned to each item of supply. The NSN consists of the 4-digit Federal Supply Classification...

  4. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
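    The link between the repeatability coefficient and the minimum number of measurements can be made explicit with the standard prediction-accuracy relation. The sketch below uses that textbook formula; the repeatability value in the example is illustrative, since the abstract does not quote the estimate obtained with CPCOV.

    ```python
    import math

    def min_measurements(r: float, target_r2: float = 0.85) -> int:
        """Minimum number of measurements needed to predict the true genotypic value with
        coefficient of determination `target_r2`, given repeatability `r`, using the
        standard relation m = R^2 (1 - r) / (r (1 - R^2))."""
        m = target_r2 * (1.0 - r) / (r * (1.0 - target_r2))
        return math.ceil(m)

    # e.g. a repeatability of about 0.7 (illustrative value) implies roughly three
    # measurements for 85% accuracy, in line with the abstract's conclusion.
    print(min_measurements(r=0.70, target_r2=0.85))
    ```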

  5. Studies of the Effects of Perfluorocarbon Emulsions on Platelet Number and Function in Models of Critical Battlefield Injury

    DTIC Science & Technology

    2015-01-01

    Year three: Using the ovine polytrauma model of combined hemorrhagic shock and blast TBI to test the effect of PFC intravenous infusion on platelet...could not be reassembled until late October. The schedule for testing and developing a sheep polytrauma model which combines blast injury with...This research project going forward is to assess PFC’s effect on platelet number and function in a sheep polytrauma model which combined blast

  6. Determining the standoff distance of the bow shock: Mach number dependence and use of models

    NASA Technical Reports Server (NTRS)

    Farris, M. H.; Russell, C. T.

    1994-01-01

    We explore the factors that determine the bow shock standoff distance. These factors include the parameters of the solar wind, as well as the size and shape of the obstacle. In this report we develop a semiempirical Mach number relation for the bow shock standoff distance in order to take into account the shock's behavior at low Mach numbers. This is done by determining which properties of the shock are most important in controlling the standoff distance and using this knowledge to modify the current Mach number relation. While the present relation has proven useful at higher Mach numbers, it has lacked effectiveness at the low Mach number limit. We also analyze the bow shock dependence upon the size and shape of the obstacle, noting that it is most appropriate to compare the standoff distance of the bow shock to the radius of curvature of the obstacle, as opposed to the distance from the focus of the object to the nose. Last, we focus our attention on the use of bow shock models in determining the standoff distance. We note that the physical behavior of the shock must correctly be taken into account, specifically the behavior as a function of solar wind dynamic pressure; otherwise, erroneous results can be obtained for the bow shock standoff distance.
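    A useful reference point for this abstract is the classic gasdynamic standoff estimate that the paper sets out to modify. The sketch below combines the Rankine-Hugoniot compression ratio with the familiar Delta ≈ 1.1·R_oc/X relation; the semiempirical low-Mach-number correction derived in the paper is not included, and the radius of curvature used in the example is illustrative only.

    ```python
    GAMMA = 5.0 / 3.0   # ratio of specific heats assumed for the solar wind plasma

    def compression_ratio(mach: float, gamma: float = GAMMA) -> float:
        """Rankine-Hugoniot density jump across the shock as a function of Mach number."""
        m2 = mach ** 2
        return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

    def standoff_estimate(mach: float, radius_of_curvature: float) -> float:
        """Classic gasdynamic estimate of the bow shock standoff distance,
        Delta ~ 1.1 * R_oc / X, where X is the shock compression ratio.  The paper
        develops a semiempirical modification of this relation for low Mach numbers;
        only the textbook baseline is shown here."""
        return 1.1 * radius_of_curvature / compression_ratio(mach)

    # Standoff distance shrinks as the Mach number rises (R_oc value is illustrative).
    for m in (2.0, 5.0, 10.0):
        print(m, standoff_estimate(m, radius_of_curvature=13.5))
    ```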

  7. A Simulation Based Analysis of Motor Unit Number Index (MUNIX) Technique Using Motoneuron Pool and Surface Electromyogram Models

    PubMed Central

    Li, Xiaoyan; Rymer, William Zev; Zhou, Ping

    2013-01-01

    Motor unit number index (MUNIX) measurement has recently achieved increasing attention as a tool to evaluate the progression of motoneuron diseases. In our current study, the sensitivity of the MUNIX technique to changes in motoneuron and muscle properties was explored by a simulation approach utilizing variations on published motoneuron pool and surface electromyogram (EMG) models. Our simulation results indicate that, when keeping motoneuron pool and muscle parameters unchanged and varying the input motor unit numbers to the model, then MUNIX estimates can appropriately characterize changes in motor unit numbers. Such MUNIX estimates are not sensitive to different motor unit recruitment and rate coding strategies used in the model. Furthermore, alterations in motor unit control properties do not have a significant effect on the MUNIX estimates. Neither adjustment of the motor unit recruitment range nor reduction of the motor unit firing rates jeopardizes the MUNIX estimates. The MUNIX estimates closely correlate with the maximum M wave amplitude. However, if we reduce the amplitude of each motor unit action potential rather than simply reduce motor unit number, then MUNIX estimates substantially underestimate the motor unit numbers in the muscle. These findings suggest that the current MUNIX definition is most suitable for motoneuron diseases that demonstrate secondary evidence of muscle fiber reinnervation. In this regard, when MUNIX is applied, it is of much importance to examine a parallel measurement of motor unit size index (MUSIX), defined as the ratio of the maximum M wave amplitude to the MUNIX. However, there are potential limitations in the application of the MUNIX methods in atrophied muscle, where it is unclear whether the atrophy is accompanied by loss of motor units or loss of muscle fiber size. PMID:22514208
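    The motor unit size index mentioned at the end of the abstract is defined directly from two measured quantities, so it can be written as a one-line helper. The example values below are illustrative only.

    ```python
    def musix(cmap_max_amplitude_mv: float, munix: float) -> float:
        """Motor unit size index: the ratio of the maximum M wave (CMAP) amplitude
        to the MUNIX, as defined in the abstract."""
        return cmap_max_amplitude_mv / munix

    # e.g. a 10 mV maximal CMAP with a MUNIX of 120 gives a MUSIX of about 0.083 mV per unit
    print(musix(10.0, 120.0))
    ```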

  8. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
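
    To illustrate the clustering step described above, here is a self-contained toy Markov clustering (MCL) routine applied to a replicate-by-replicate similarity matrix; it is a schematic of the idea only, not Clumpak's or Clumpp's actual code, and the similarity values below are invented.

    ```python
    import numpy as np

    def markov_cluster(similarity, expansion=2, inflation=2.0, iters=50):
        """Toy Markov clustering on a run-by-run similarity matrix: a schematic
        of the mode-separation step described in the abstract."""
        m = similarity.astype(float).copy()
        np.fill_diagonal(m, 1.0)            # self-loops stabilise the iteration
        m /= m.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
        for _ in range(iters):
            m = np.linalg.matrix_power(m, expansion)  # expansion: flow spreads
            m = m ** inflation                        # inflation: strengthens strong flows
            m /= m.sum(axis=0, keepdims=True)
        clusters = {}                       # runs attracted to the same row share a mode
        for col in range(m.shape[1]):
            attractor = int(np.argmax(m[:, col]))
            clusters.setdefault(attractor, []).append(col)
        return list(clusters.values())

    # toy example: 5 replicate runs, runs 0-2 similar to each other, runs 3-4 similar
    sim = np.array([[1.0, 0.90, 0.80, 0.10, 0.10],
                    [0.90, 1.0, 0.85, 0.10, 0.10],
                    [0.80, 0.85, 1.0, 0.10, 0.10],
                    [0.10, 0.10, 0.10, 1.0, 0.90],
                    [0.10, 0.10, 0.10, 0.90, 1.0]])
    print(markov_cluster(sim))   # two distinct clustering modes
    ```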

  9. [Frequency distribution of dibucaine numbers in 24,830 patients].

    PubMed

    Pestel, G; Sprenger, H; Rothhammer, A

    2003-06-01

    Atypical cholinesterase prolongs the duration of action of neuromuscular blocking drugs such as succinylcholine and mivacurium. Measuring the dibucaine number identifies patients who are at risk. This study shows the frequency distribution of dibucaine numbers routinely measured and discusses avoidable clinical problems and economic implications. Dibucaine numbers were measured on a Hitachi 917 analyzer and all dibucaine numbers recorded over a period of 4 years were taken into consideration. Repeat observations were excluded. A total of 24,830 dibucaine numbers were analysed; numbers below 30 were found in 0.07% (n=18), giving an incidence of 1:1,400. Dibucaine numbers from 30 to 70 were found in 1.23% (n=306). On the basis of the identified dibucaine numbers we could avoid the administration of succinylcholine or mivacurium, resulting in a cost reduction of 12,280 Euro, offset against total laboratory costs of 10,470 Euro. An incidence of 1:1,400 of dibucaine numbers below 30 is higher than documented in the literature. Therefore, routine measurement of the dibucaine number is a cost-effective method of identifying patients at increased risk of prolonged neuromuscular blockade due to atypical cholinesterase.
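
    The reported incidence and cost figures can be checked directly from the numbers in the abstract; the arithmetic below only restates them.

    ```python
    n_total = 24_830          # dibucaine numbers analysed
    n_below_30 = 18           # patients with a dibucaine number below 30

    print(f"fraction below 30: {n_below_30 / n_total:.2%}")   # ~0.07 %
    print(f"incidence: 1:{n_total / n_below_30:.0f}")          # ~1:1,380, quoted as 1:1,400

    cost_reduction = 12_280    # Euro saved by avoiding succinylcholine/mivacurium
    laboratory_costs = 10_470  # Euro spent on routine dibucaine measurements
    print(f"net saving: {cost_reduction - laboratory_costs} Euro")
    ```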

  10. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression

    PubMed Central

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-01-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
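
    The sketch below illustrates the wavelet-compression idea: a Haar transform of a noisy signal, hard thresholding of the detail coefficients, and a reconstruction that is piecewise constant, so consecutive observations collapse into blocks. It is a schematic of the concept only, not the HaMMLET algorithm, and the threshold and test signal are invented.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def haar_blocks(signal, threshold):
        """Compress a noisy signal with a Haar wavelet transform and return
        runs of (start, length, value): a schematic of the block structure
        that the method exploits, not its actual algorithm."""
        coeffs = pywt.wavedec(signal, "haar")
        coeffs = [c if i == 0 else pywt.threshold(c, threshold, mode="hard")
                  for i, c in enumerate(coeffs)]
        smooth = pywt.waverec(coeffs, "haar")[: len(signal)]
        blocks, start = [], 0
        for i in range(1, len(smooth) + 1):
            if i == len(smooth) or not np.isclose(smooth[i], smooth[start]):
                blocks.append((start, i - start, float(smooth[start])))
                start = i
        return blocks

    # invented example: a two-level step signal with Gaussian noise
    rng = np.random.default_rng(1)
    signal = np.concatenate([np.full(64, 2.0), np.full(64, 3.5)]) + rng.normal(0, 0.2, 128)
    print(haar_blocks(signal, threshold=1.0))
    ```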

  11. Simultaneous Video-EEG-ECG Monitoring to Identify Neurocardiac Dysfunction in Mouse Models of Epilepsy.

    PubMed

    Mishra, Vikas; Gautier, Nicole M; Glasscock, Edward

    2018-01-29

    In epilepsy, seizures can evoke cardiac rhythm disturbances such as heart rate changes, conduction blocks, asystoles, and arrhythmias, which can potentially increase risk of sudden unexpected death in epilepsy (SUDEP). Electroencephalography (EEG) and electrocardiography (ECG) are widely used clinical diagnostic tools to monitor for abnormal brain and cardiac rhythms in patients. Here, a technique to simultaneously record video, EEG, and ECG in mice to measure behavior, brain, and cardiac activities, respectively, is described. The technique described herein utilizes a tethered (i.e., wired) recording configuration in which the implanted electrode on the head of the mouse is hard-wired to the recording equipment. Compared to wireless telemetry recording systems, the tethered arrangement possesses several technical advantages such as a greater possible number of channels for recording EEG or other biopotentials; lower electrode costs; and greater frequency bandwidth (i.e., sampling rate) of recordings. The basics of this technique can also be easily modified to accommodate recording other biosignals, such as electromyography (EMG) or plethysmography for assessment of muscle and respiratory activity, respectively. In addition to describing how to perform the EEG-ECG recordings, we also detail methods to quantify the resulting data for seizures, EEG spectral power, cardiac function, and heart rate variability, which we demonstrate in an example experiment using a mouse with epilepsy due to Kcna1 gene deletion. Video-EEG-ECG monitoring in mouse models of epilepsy or other neurological disease provides a powerful tool to identify dysfunction at the level of the brain, heart, or brain-heart interactions.
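
    As a minimal example of the kind of cardiac quantification mentioned above, the sketch below computes mean heart rate and RMSSD (a standard time-domain heart rate variability measure) from a list of R-peak times; it is illustrative, not the authors' analysis pipeline, and the R-peak times shown are invented.

    ```python
    import numpy as np

    def heart_rate_and_rmssd(r_peak_times_s):
        """Mean heart rate (bpm) and RMSSD (ms) from R-peak times in seconds."""
        rr = np.diff(np.asarray(r_peak_times_s))              # RR intervals (s)
        heart_rate = 60.0 / rr.mean()
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0   # successive-difference HRV
        return heart_rate, rmssd

    # mouse-like RR intervals of roughly 110 ms (illustrative values)
    print(heart_rate_and_rmssd([0.0, 0.11, 0.22, 0.34, 0.45]))
    ```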

  12. Calculations of wall shear stress in harmonically oscillated turbulent pipe flow using a low-Reynolds-number κ-ε model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismael, J.O.; Cotton, M.A.

    1996-03-01

    The low-Reynolds-number κ-ε turbulence model of Launder and Sharma is applied to the calculation of wall shear stress in spatially fully-developed turbulent pipe flow oscillated at small amplitudes. It is believed that the present study represents the first systematic evaluation of the turbulence closure under consideration over a wide range of frequency. Model results are well correlated in terms of the parameter ω⁺ = ων/ūτ² at high frequencies, whereas at low frequencies there is an additional Reynolds number dependence. Comparison is made with the experimental data of Finnicum and Hanratty.
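
    A small helper makes the correlating parameter explicit; the example values for frequency, viscosity, and friction velocity are purely illustrative.

    ```python
    import math

    def omega_plus(omega, nu, u_tau):
        """omega+ = omega * nu / u_tau**2: angular forcing frequency scaled by
        the kinematic viscosity and the square of the mean friction velocity."""
        return omega * nu / u_tau ** 2

    # illustrative values: 1 Hz oscillation, water-like viscosity, u_tau = 0.01 m/s
    print(omega_plus(2.0 * math.pi * 1.0, 1.0e-6, 0.01))   # ~0.063
    ```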

  13. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with a residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
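
    For readers unfamiliar with zero inflation, the sketch below contrasts a plain Poisson probability mass function with a zero-inflated one; it is a minimal illustration of the model class, not the hierarchical Bayesian formulation actually fitted in the paper, and the rate and zero-inflation parameters are invented.

    ```python
    import numpy as np
    from scipy import stats

    def zip_pmf(k, lam, pi):
        """Zero-inflated Poisson pmf: with probability pi the count is a
        structural zero, otherwise it follows Poisson(lam)."""
        pmf = (1.0 - pi) * stats.poisson.pmf(k, lam)
        return np.where(k == 0, pi + pmf, pmf)

    counts = np.arange(6)
    print(stats.poisson.pmf(counts, 1.2))   # plain Poisson
    print(zip_pmf(counts, 1.2, 0.4))        # excess probability mass at k = 0
    ```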

  14. Proteomic Analysis to Identify Functional Molecules in Drug Resistance Caused by E-Cadherin Knockdown in 3D-Cultured Colorectal Cancer Models

    DTIC Science & Technology

    2014-09-01

    total number of 538 phosphopeptides were identified, among which 350 phosphopeptides had been identified with the first round of TiO2 enrichment and 430...year research and the collection of proteomic and phosphoproteomic data is still in process. PRODUCTS Manuscripts: Yue XS, Hummon AB. Combining...of IMAC and TiO2 enrichment methods to increase phosphoproteomic identifications, manuscript in preparation. Yue XS, Hummon AB. Proteomic and

  15. Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung

    NASA Astrophysics Data System (ADS)

    Yong, Benny; Chin, Liem

    2017-05-01

    Dengue fever is one of the most serious diseases and can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue, and its sub-districts have different levels of relative risk of the disease. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito infected with a dengue virus. Prevention of dengue relies on controlling the vector mosquito, which can be done by various methods, one of which is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds, which forces the Health Department to be selective and to fog only certain locations. As a result, many sub-districts are not handled properly because activities to prevent the spread of dengue are unevenly distributed. A proper allocation of funds to each sub-district in Bandung is therefore needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is obtained with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district in Bandung corresponds to the level of risk of dengue in that sub-district, so that the number of dengue cases in the city can be reduced significantly.
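
    A minimal sketch of a Markowitz-style allocation is shown below: the budget shares minimize a variance-like risk term subject to a required overall level of effectiveness, with shares summing to one. The risk vector, covariance matrix, and target are invented placeholders, and scipy's SLSQP solver stands in for the generalized reduced gradient method used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical inputs: expected risk-reduction per unit of funding for three
    # sub-districts, and a covariance matrix of historical case counts.
    mu = np.array([0.8, 0.5, 0.3])
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.03, 0.01],
                    [0.00, 0.01, 0.02]])
    target = 0.5   # required overall risk-reduction level (invented)

    def variance(w):
        """Markowitz-style objective: variance of the funded 'portfolio'."""
        return w @ cov @ w

    constraints = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},
                   {"type": "ineq", "fun": lambda w: w @ mu - target}]
    bounds = [(0.0, 1.0)] * len(mu)
    res = minimize(variance, x0=np.full(len(mu), 1.0 / len(mu)),
                   bounds=bounds, constraints=constraints)
    print(res.x)   # proportion of the fogging budget per sub-district
    ```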

  16. Challenges of implementing collaborative models of decision making with trans-identified patients.

    PubMed

    Dewey, Jodie M

    2015-10-01

    Factors health providers face during the doctor-patient encounter both impede and assist the development of collaborative models of treatment. I investigated decision making among medical and therapeutic professionals who work with trans-identified patients to understand factors that might impede or facilitate the adoption of the collaborative decision-making model in their clinical work. Following a grounded theory approach, I collected and analysed data from semi-structured interviews with 10 U.S. physicians and 10 U.S. mental health professionals. Doctors and therapists often desire collaboration with their patients but experience dilemmas in treating the trans-identified patients. Dilemmas include lack of formal education, little to no institutional support and inconsistent understanding and application of the main documents used by professionals treating trans-patients. Providers face considerable risk in providing unconventional treatments due to the lack of institutional and academic support relating to the treatment for trans-people, and the varied interpretation and application of the diagnostic and treatment documents used in treating trans-people. To address this risk, the relationship with the patient becomes crucial. However, trust, a component required for collaboration, is thwarted when the patients feel obliged to present in ways aligned with these documents in order to receive desired treatments. When trust cannot be established, medical and mental health providers can and do delay or deny treatments, resulting in the imbalance of power between patient and provider. The documents created to assist in treatment actually thwart professional desire to work collaboratively with patients. © 2013 John Wiley & Sons Ltd.

  17. Identifying the decision to be supported: a review of papers from environmental modelling and software

    USGS Publications Warehouse

    Sojda, Richard S.; Chen, Serena H.; El Sawah, Sondoss; Guillaume, Joseph H.A.; Jakeman, A.J.; Lautenbach, Sven; McIntosh, Brian S.; Rizzoli, A.E.; Seppelt, Ralf; Struss, Peter; Voinov, Alexey; Volk, Martin

    2012-01-01

    Two of the basic tenets of decision support system efforts are to help identify and structure the decisions to be supported, and to then provide analysis of how those decisions might best be made. One example from wetland management would be that wildlife biologists must decide when to draw down water levels to optimise aquatic invertebrates as food for breeding ducks. Once such a decision is identified, a system or tool to help them make that decision in the face of current and projected climate conditions could be developed. We examined a random sample of 100 papers published from 2001-2011 in Environmental Modelling and Software that used the phrase “decision support system” or “decision support tool”, and which are characteristic of different sectors. In our review, 41% of the systems and tools related to the water resources sector, 34% were related to agriculture, and 22% to the conservation of fish, wildlife, and protected area management. Only 60% of the papers were deemed to be reporting on a DSS; the remainder had not directly identified a specific decision to be supported. We also report on the techniques that were used to identify the decisions, such as formal survey, focus group, expert opinion, or sole judgment of the author(s). The primary underlying modelling system, e.g., expert system, agent-based model, Bayesian belief network, geographical information system (GIS), and the like, was categorised next. Finally, since decision support typically should target some aspect of unstructured decisions, we subjectively determined to what degree this was the case. In only 23% of the papers reviewed did the system appear to tackle unstructured decisions. This knowledge should be useful in helping workers in the field develop more effective systems and tools, especially by being exposed to the approaches in different, but related, disciplines. We propose that a standard blueprint for reporting on DSS be developed for

  18. Development of a model system to identify differences in spring and winter oat.

    PubMed

    Chawade, Aakash; Lindén, Pernilla; Bräutigam, Marcus; Jonsson, Rickard; Jonsson, Anders; Moritz, Thomas; Olsson, Olof

    2012-01-01

    Our long-term goal is to develop a Swedish winter oat (Avena sativa). To identify molecular differences that correlate with winter hardiness, a winter oat model comprising both non-hardy spring lines and winter-hardy lines is needed. To achieve this, we selected 294 oat breeding lines, originating from various Russian, German, and American winter oat breeding programs and tested them in the field in south- and western Sweden. By assaying for winter survival and agricultural properties during four consecutive seasons, we identified 14 breeding lines of different origins that not only survived the winter but also were agronomically better than the rest. Laboratory tests including electrolytic leakage, controlled crown freezing assay, expression analysis of the AsVrn1 gene and monitoring of flowering time suggested that the American lines had the highest freezing tolerance, although the German lines performed better in the field. Finally, six lines, comprising the two most freezing-tolerant lines, two intermediate lines and two spring cultivars, were chosen to build a winter oat model system. Metabolic profiling of non-acclimated and cold acclimated leaf tissue samples isolated from the six selected lines revealed differential expression patterns of 245 metabolites including several sugars, amino acids, organic acids and 181 hitherto unknown metabolites. The expression patterns of 107 metabolites showed significant interactions with either a cultivar or a time-point. Further identification, characterisation and validation of these metabolites will lead to an increased understanding of the cold acclimation process in oats. Furthermore, by using the winter oat model system, differential sequencing of crown mRNA populations would lead to identification of various biomarkers to facilitate winter oat breeding.

  19. Parallel approach to identifying the well-test interpretation model using a neurocomputer

    NASA Astrophysics Data System (ADS)

    May, Edward A., Jr.; Dagli, Cihan H.

    1996-03-01

    The well test is one of the primary diagnostic and predictive tools used in the analysis of oil and gas wells. In these tests, a pressure recording device is placed in the well and the pressure response is recorded over time under controlled flow conditions. The interpreted results are indicators of the well's ability to flow and the damage done to the formation surrounding the wellbore during drilling and completion. The results are used for many purposes, including reservoir modeling (simulation) and economic forecasting. The first step in the analysis is the identification of the Well-Test Interpretation (WTI) model, which determines the appropriate solution method. Mis-identification of the WTI model occurs due to noise and non-ideal reservoir conditions. Previous studies have shown that a feed-forward neural network using the backpropagation algorithm can be used to identify the WTI model. One of the drawbacks to this approach is, however, training time, which can run into days of CPU time on personal computers. In this paper a similar neural network is applied using both a personal computer and a neurocomputer. Input data processing, network design, and performance are discussed and compared. The results show that the neurocomputer greatly eases the burden of training and allows the network to outperform a similar network running on a personal computer.
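
    The sketch below shows the overall shape of such a classifier using scikit-learn's feed-forward network trained by backpropagation: features derived from the pressure response go in, a WTI model class comes out. The feature dimension, class labels, and randomly generated training data are placeholders, not the authors' dataset or architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training set: each row stands in for a down-sampled, normalised
    # pressure-derivative curve; each label for a WTI model class
    # (e.g. 0 = infinite-acting radial flow, 1 = dual porosity, 2 = closed boundary).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 32))
    y = rng.integers(0, 3, size=300)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)                 # backpropagation training of the feed-forward net
    print(clf.predict(X[:5]))     # predicted WTI model class for the first five curves
    ```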

  20. CCL3L1 copy number and susceptibility to malaria

    PubMed Central

    Carpenter, Danielle; Färnert, Anna; Rooth, Ingegerd; Armour, John A.L.; Shaw, Marie-Anne

    2012-01-01

    Copy number variation can contribute to the variation observed in susceptibility to complex diseases. Here we present the first study to investigate the association of copy number variation of the chemokine gene CCL3L1 with susceptibility to malaria. We present a family-based genetic analysis of a Tanzanian population (n = 922), using parasite load, mean number of clinical infections of malaria and haemoglobin levels as phenotypes. Copy number of CCL3L1 was measured using the paralogue ratio test (PRT) and the dataset exhibited copy numbers ranging between 1 and 10 copies per diploid genome (pdg). Association between copy number and phenotypes was assessed. Furthermore, we were able to identify copy number haplotypes in some families, using microsatellites within the copy variable region, for transmission disequilibrium testing. We identified a high level of copy number haplotype diversity and find some evidence for an association of low CCL3L1 copy number with protection from anaemia. PMID:22484763
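
    In outline, the paralogue ratio test compares amplification of a test locus with a reference locus of known copy number; the toy function below rounds the scaled ratio to an integer copy number per diploid genome. Calibration against reference samples and the handling of measurement noise are omitted, and the peak values are invented.

    ```python
    def prt_copy_number(test_peak, reference_peak, reference_copies=2):
        """Paralogue ratio test in outline: the test/reference peak ratio,
        scaled by the reference copy number, estimates copies per diploid
        genome (calibration steps omitted; illustrative only)."""
        return round(test_peak / reference_peak * reference_copies)

    print(prt_copy_number(1.47, 1.0))   # ~3 copies pdg for these invented peaks
    ```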