Sample records for additive model based

  1. A simulations approach for meta-analysis of genetic association studies based on additive genetic model.

    PubMed

    John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping

    2018-06-01

    Meta-analysis of genetic association studies is being increasingly used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. This fact about the additive model is one of the things that we establish in this paper, using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.
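
    As a rough illustration of the record above (not the authors' actual code), the sketch below simulates individual-level genotype and phenotype data for several studies under an additive model and pools the per-study slope estimates; all parameter values are hypothetical.

```python
import random

def simulate_study(beta, maf, n, sd=1.0):
    """Simulate one study's individual-level data under an additive model:
    phenotype = beta * (minor-allele count) + Gaussian noise."""
    data = []
    for _ in range(n):
        g = (random.random() < maf) + (random.random() < maf)  # 0, 1 or 2 copies
        data.append((g, beta * g + random.gauss(0.0, sd)))
    return data

def additive_slope(data):
    """Least-squares slope of phenotype on allele count (the additive effect)."""
    n = len(data)
    mean_g = sum(g for g, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    sxy = sum((g - mean_g) * (y - mean_y) for g, y in data)
    sxx = sum((g - mean_g) ** 2 for g, _ in data)
    return sxy / sxx

random.seed(1)
studies = [simulate_study(beta=0.5, maf=0.3, n=2000) for _ in range(5)]
estimates = [additive_slope(s) for s in studies]
pooled = sum(estimates) / len(estimates)  # unweighted pooling, for illustration
```

    A real meta-analysis would weight the per-study estimates by their precision; the unweighted mean is used here only to keep the sketch short.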

  2. 3D model of filler melting with micro-beam plasma arc based on additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Yang, Tao; Yang, Ruixin

    2017-07-01

    Additive manufacturing technology is a systematic process based on the discrete-accumulation principle, driven by the dimensions of the part. Addressing the dimensional mathematical model and slicing problems in the additive manufacturing process, the constitutive relations between the micro-beam plasma welding parameters and the dimensions of the part were investigated. The slicing algorithm and slicing procedure were also studied based on the dimensional characteristics. Using a direct slicing algorithm based on the geometric characteristics of the model, a hollow thin-walled spherical part was fabricated by 3D additive manufacturing using a micro-beam plasma arc.

  3. Unraveling additive from nonadditive effects using genomic relationship matrices.

    PubMed

    Muñoz, Patricio R; Resende, Marcio F R; Gezan, Salvador A; Resende, Marcos Deon Vilela; de Los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F

    2014-12-01

    The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based models (pedigree-based best linear unbiased prediction, P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. Copyright © 2014 by the Genetics Society of America.
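
    One standard construction of the additive realized genomic relationship matrix used in G-BLUP is VanRaden's first method; the sketch below assumes that construction (the study's exact matrices, including the dominance and epistatic ones, may differ) and runs it on simulated marker data.

```python
import numpy as np

def additive_grm(M):
    """Additive realized genomic relationship matrix (VanRaden's method 1).
    M: (individuals x markers) array of minor-allele counts in {0, 1, 2}."""
    p = M.mean(axis=0) / 2.0                 # estimated allele frequencies
    Z = M - 2.0 * p                          # center each marker by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))      # scales the average diagonal to ~1
    return Z @ Z.T / denom

rng = np.random.default_rng(0)
freqs = rng.uniform(0.1, 0.9, size=500)                    # per-marker frequencies
M = rng.binomial(2, freqs, size=(50, 500)).astype(float)   # 50 unrelated individuals
G = additive_grm(M)
```

    For unrelated individuals in Hardy-Weinberg equilibrium the diagonal of G averages close to 1, which the simulated data reproduce.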

  4. Implementation of the Realized Genomic Relationship Matrix to Open-Pollinated White Spruce Family Testing for Disentangling Additive from Nonadditive Genetic Effects

    PubMed Central

    Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.

    2016-01-01

    Open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses, as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Under the half-sib assumption, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of a marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under a simple and shallow pedigree structure. PMID:26801647

  5. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
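
    The CEDEM algorithm itself is not reproduced in the abstract; the following is only a generic deterministic exposure calculation of the kind such models are built on, with entirely hypothetical consumption and concentration figures.

```python
# All figures below are hypothetical placeholders, not EFSA or CEDEM data:
# daily consumption (g/day) and additive concentration (mg/kg food) by category.
consumption_g_day = {"soft drinks": 250.0, "confectionery": 40.0, "sauces": 20.0}
concentration_mg_kg = {"soft drinks": 150.0, "confectionery": 300.0, "sauces": 500.0}
body_weight_kg = 60.0

def dietary_exposure(consumption, concentration, bw):
    """Deterministic exposure in mg per kg body weight per day:
    sum over foods of intake (kg/day) x concentration (mg/kg), divided by bw."""
    total_mg = sum((consumption[f] / 1000.0) * concentration[f] for f in consumption)
    return total_mg / bw

exposure = dietary_exposure(consumption_g_day, concentration_mg_kg, body_weight_kg)
```
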

  6. An electrical circuit model for additive-modified SnO2 ceramics

    NASA Astrophysics Data System (ADS)

    Karami Horastani, Zahra; Alaei, Reza; Karami, Amirhossein

    2018-05-01

    In this paper, an electrical circuit model for additive-modified metal oxide ceramics based on their physical structures and electrical resistivities is presented. The model predicts the resistance of the sample at different additive concentrations and different temperatures. To evaluate the model, two types of composite ceramics, SWCNT/SnO2 with SWCNT concentrations of 0.3, 0.6, 1.2, 2.4 and 3.8 wt%, and Ag/SnO2 with Ag concentrations of 0.3, 0.5, 0.8 and 1.5 wt%, were prepared and their electrical resistances versus temperature were experimentally measured. It is shown that the experimental data are in good agreement with the results obtained from the model. The proposed model can be used in the design process of ceramic-based gas sensors, and it also clarifies the role of the additive in the gas-sensing process of additive-modified metal oxide gas sensors. Furthermore, the model can be used in system-level modeling of designs in which these sensors are present.

  7. NED-IIS: An Intelligent Information System for Forest Ecosystem Management

    Treesearch

    W.D. Potter; S. Somasekar; R. Kommineni; H.M. Rauscher

    1999-01-01

    We view an Intelligent Information System (IIS) as composed of a unified knowledge base, database, and model base. The model base includes decision support models, forecasting models, and visualization models, for example. In addition, we feel that the model base should include domain-specific problem-solving modules as well as decision support models. This, then,...

  8. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon Bézier-Bernstein polynomial basis functions. The approach is generalized in that it copes with n-dimensional inputs by using an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
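
    The basis-function properties the abstract cites (nonnegativity, unity of support) and the additive-decomposition idea can be checked with a small sketch; the weights and degrees below are arbitrary illustrations, not the paper's construction algorithm.

```python
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein basis values B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i)."""
    return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

def additive_model(weight_sets, inputs):
    """Additive decomposition: a sum of univariate Bernstein expansions, one per
    input dimension, so parameters grow linearly (not exponentially) with n."""
    total = 0.0
    for weights, x in zip(weight_sets, inputs):
        values = bernstein_basis(len(weights) - 1, x)
        total += sum(w * b for w, b in zip(weights, values))
    return total

basis = bernstein_basis(3, 0.4)   # nonnegative and sums to 1 (partition of unity)
y = additive_model([[0.0, 1.0, 1.0, 0.0], [1.0, 0.5, 0.0]], [0.4, 0.7])
```
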

  9. Research on Capacity Addition using Market Model with Transmission Congestion under Competitive Environment

    NASA Astrophysics Data System (ADS)

    Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko

    In this research, the fundamental premises for deregulation of the electric power industry are reevaluated. The authors develop a simple model to represent a wholesale electricity market with a highly congested network. The model is developed by simplifying the power system and market of the New York ISO, based on available 2004 New York ISO data with some values estimated. Based on the developed model and historical construction cost data, the economic impact of transmission line addition on market participants and the impact of deregulation on power plant additions in a market with transmission congestion are studied. Simulation results show that market signals may fail to facilitate proper capacity additions, resulting in an undesirable cycle of over-construction and under-construction of capacity.

  10. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  11. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.

  12. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

    Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than the pedigree-based ones, that is, they showed an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760

  13. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, relative to previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  14. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
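
    A minimal sketch of the group-additivity idea common to both models (a property is estimated as the sum of constituent-group contributions); the group names and values below are invented placeholders, not HKF or peptide-based parameters from the literature.

```python
# Illustrative group-additivity sum; the group values are hypothetical.
group_cp_values = {"peptide backbone unit": 15.0, "-CH3": 90.0, "-CH2-": 63.0}

def additive_property(group_counts, group_values):
    """Estimate a partial molar property as sum(n_i * g_i) over groups;
    the same additivity formula applies to heat capacity or volume."""
    return sum(n * group_values[g] for g, n in group_counts.items())

# A hypothetical fragment: 3 backbone units, 2 methyl groups, 1 methylene.
cp_estimate = additive_property(
    {"peptide backbone unit": 3, "-CH3": 2, "-CH2-": 1}, group_cp_values)
```
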

  15. Amino-Acid Network Clique Analysis of Protein Mutation Non-Additive Effects: A Case Study of Lysozyme.

    PubMed

    Ming, Dengming; Chen, Rui; Huang, He

    2018-05-10

    Optimizing amino-acid mutations in enzyme design has been a very challenging task in modern bio-industrial applications. It is well known that many successful designs often hinge on extensive correlations among mutations at different sites within the enzyme; however, the underpinning mechanism for these correlations is far from clear. Here, we present a topology-based model to quantitatively characterize non-additive effects between mutations. The method is based on molecular dynamics simulations and amino-acid network clique analysis. It examines whether the two mutation sites of a double-site mutation fall into a 3-clique structure, and associates this topological property of the mutation sites' spatial distribution with mutation additivity features. We analyzed 13 dual mutations of T4 phage lysozyme and found that the clique-based model successfully distinguishes highly correlated or non-additive double-site mutations from those additive ones whose component mutations have less correlation. We also applied the model to protein Eglin c, whose structural topology is significantly different from that of T4 phage lysozyme, and found that the model can, to some extent, still identify non-additive mutations from additive ones. Our calculations showed that mutation non-additive effects may heavily depend on the structural topology relationship between mutation sites, which can be quantitatively determined using amino-acid network k-cliques. We also showed that double-site mutation correlations can be significantly altered by exerting a third mutation, indicating that more detailed physicochemical interactions should be considered along with the network clique-based model for a better understanding of this elusive mutation-correlation principle.
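
    A minimal sketch of the 3-clique test described above, on a toy contact network; in the paper the contacts come from molecular dynamics simulations, which this sketch does not attempt to reproduce.

```python
def contact_graph(edges):
    """Adjacency sets from an undirected residue-contact edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def in_3_clique(adj, i, j):
    """True if residues i and j are in contact and share a common neighbor,
    i.e. the pair belongs to at least one triangle (3-clique) of the network."""
    if j not in adj.get(i, set()):
        return False
    return bool((adj[i] & adj[j]) - {i, j})

# Toy network: residues 1-2-3 form a triangle; residue 4 hangs off residue 3.
adj = contact_graph([(1, 2), (2, 3), (1, 3), (3, 4)])
```

    Here the pair (1, 2) lies in a 3-clique while (3, 4), although in contact, does not.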

  16. The prediction of food additives in the fruit juice based on electronic nose with chemometrics.

    PubMed

    Qiu, Shanshan; Wang, Jun

    2017-09-01

    Food additives are added to products to enhance their taste and preserve flavor or appearance. While their use should be restricted to achieving a technological benefit, the contents of food additives should also be strictly controlled. In this study, an E-nose was applied as an alternative to traditional monitoring technologies for determining two food additives, namely benzoic acid and chitosan. For quantitative monitoring, support vector machine (SVM), random forest (RF), extreme learning machine (ELM) and partial least squares regression (PLSR) were applied to establish regression models between E-nose signals and the amount of food additives in fruit juices. The monitoring models based on ELM and RF reached higher correlation coefficients (R2s) and lower root mean square errors (RMSEs) than models based on PLSR and SVM. This work indicates that an E-nose combined with RF or ELM can be a cost-effective, easy-to-build and rapid detection system for food additive monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.
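
    The comparison criteria named in the abstract (R2 and RMSE) can be computed as below; the additive levels and model predictions are hypothetical, and the regression models themselves (SVM, RF, ELM, PLSR) are not reimplemented here.

```python
from math import sqrt

def rmse(y_true, y_pred):
    """Root mean square error of predictions."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical benzoic-acid levels (g/L) and one model's predictions.
actual = [0.2, 0.4, 0.6, 0.8, 1.0]
predicted = [0.25, 0.38, 0.61, 0.75, 1.02]
scores = (r_squared(actual, predicted), rmse(actual, predicted))
```
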

  17. Multi-allelic haplotype model based on genetic partition for genomic prediction and variance component estimation using SNP markers.

    PubMed

    Da, Yang

    2015-12-18

    The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. The genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and the genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly uses haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.
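
    The effect counts stated in the abstract (h - 1 additive effects, h(h - 1)/2 dominance effects, with the additive count limited by 2q - 1) can be sketched directly; the dominance-side limit, the number of observed heterozygous genotypes, depends on the data and is not computed here.

```python
def haplotype_effect_counts(h, q):
    """Free effects in the multi-allelic haplotype partition: h haplotypes
    ('alleles') give h - 1 additive effects, capped at 2q - 1 for a sample of
    q individuals (at most 2q haplotype copies are observed), and
    h * (h - 1) / 2 dominance effects."""
    additive = min(h - 1, 2 * q - 1)
    dominance = h * (h - 1) // 2
    return additive, dominance

counts = haplotype_effect_counts(h=4, q=100)     # few haplotypes, many individuals
capped = haplotype_effect_counts(h=500, q=100)   # additive count hits the 2q - 1 cap
```
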

  18. Optimization of aeromedical base locations in New Mexico using a model that considers crash nodes and paths.

    PubMed

    Erdemir, Elif Tokar; Batta, Rajan; Spielman, Seth; Rogerson, Peter A; Blatt, Alan; Flanigan, Marie

    2008-05-01

    In a recent paper, Tokar Erdemir et al. (2008) introduce models for service systems with service requests originating from both nodes and paths. We demonstrate how to apply and extend their approach to an aeromedical base location application, with specific focus on the state of New Mexico (NM). The current aeromedical base locations of NM are selected without considering motor vehicle crash paths. Crash paths are the roads on which crashes occur, where each road segment has a weight signifying relative crash occurrence. We analyze the loss in accident coverage and location error for current aeromedical base locations. We also provide insights on the relevance of considering crash paths when selecting aeromedical base locations. Additionally, we look briefly at some of the tradeoff issues in locating additional trauma centers vs. additional aeromedical bases in the current aeromedical system of NM. Not surprisingly, tradeoff analysis shows that by locating additional aeromedical bases, we always attain the required coverage level with a lower cost than with locating additional trauma centers.

  19. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
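
    The Bertalanffy growth curve applied to individual plates can be sketched as below; the parameter values are hypothetical, and the Delaunay/Voronoi plate-topology machinery of the full model is not reproduced.

```python
from math import exp

def bertalanffy(t, l_inf, k, t0=0.0):
    """Von Bertalanffy growth: size approaches the asymptote l_inf at rate k."""
    return l_inf * (1.0 - exp(-k * (t - t0)))

# Hypothetical parameters for one simulated plate's linear dimension (mm).
sizes = [bertalanffy(t, l_inf=10.0, k=0.3) for t in range(6)]
```

    Growth is monotonic and saturating: each plate grows quickly at first and asymptotically approaches its maximum size.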

  20. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  1. Pedigree-based estimation of covariance between dominance deviations and additive genetic effects in closed rabbit lines considering inbreeding and using a computationally simpler equivalent model.

    PubMed

    Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M

    2017-06-01

    Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied a model for the estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates from this model, such as those presented here, are very scarce in both livestock and wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using standard algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.

  2. A demonstrative model of a lunar base simulation on a personal computer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to build the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.

  3. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    PubMed Central

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108

  4. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    PubMed

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  5. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three codings of method VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
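    The two parameterizations can be sketched directly (a minimal illustration; the parameter values below are hypothetical, with f the model-predicted concentration):

```python
import math

def sd_var(f, sigma_prop, sigma_add):
    """Method VAR: the variance of the residual error is the sum of the
    statistically independent proportional and additive components, so
    SD = sqrt((sigma_prop * f)**2 + sigma_add**2)."""
    return math.sqrt((sigma_prop * f) ** 2 + sigma_add ** 2)

def sd_sd(f, sigma_prop, sigma_add):
    """Method SD: the standard deviation itself is the sum of the
    proportional and additive components."""
    return sigma_prop * f + sigma_add

f = 10.0
print(sd_var(f, 0.1, 0.5))  # ~1.118
print(sd_sd(f, 0.1, 0.5))   # 1.5
```

    For any prediction f, method SD yields the larger residual SD, consistent with the observation that fitted error parameters come out lower under method SD than under method VAR.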

  6. Unified Modeling Language (UML) for hospital-based cancer registration processes.

    PubMed

    Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi

    2008-01-01

    Hospital-based cancer registry involves complex processing steps that span across multiple departments. In addition, management techniques and registration procedures differ depending on each medical facility. Establishing processes for hospital-based cancer registry requires clarifying specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the applications of these concepts to medical-related work. In this study, we initially sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML), to clarify functions. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. After drafting every UML model, it was fed back to practitioners to check its validity and then improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff that were responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely, registration tasks, quality control, and filing data. An additional 14 functions were also extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans across multiple departments. 
Moreover, additional tasks were required in comparison to using a standardized system because the hospital-based cancer registration system was constructed with the pre-existing computer system in Osaka University Hospital. Difficulty of utilization of useful information for cancer registration processes was shown to increase the task workload. By using UML, we were able to clarify functions and extract the typical processes for a hospital-based cancer registry. Modeling can provide a basis of process analysis for establishment of efficient hospital-based cancer registration processes in each institute.

  7. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
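    The additivity principle itself is a mass-fraction-weighted linear sum over grain-size fractions; a minimal sketch (the fractions and property values are hypothetical):

```python
def additivity(mass_fractions, fraction_values):
    """Grain-size based additivity model: predict a bulk-sediment reaction
    property by linearly adding the values measured on individual
    grain-size fractions, weighted by their mass fractions."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return sum(f * v for f, v in zip(mass_fractions, fraction_values))

# hypothetical composite: a <2 mm fraction plus the 2-8 mm gravel
# fraction that the study found statistically significant
mass_fractions = [0.7, 0.3]
site_concentration = [4.0, 1.5]   # hypothetical reactive-site concentrations
print(additivity(mass_fractions, site_concentration))  # 3.25
```

    As the study notes, this works for extensive properties such as site concentration, but rate constants are not directly scalable this way, which motivates the approximate model the authors propose.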

  8. USING ECO-EVOLUTIONARY INDIVIDUAL-BASED MODELS TO INVESTIGATE SPATIALLY-DEPENDENT PROCESSES IN CONSERVATION GENETICS

    EPA Science Inventory

    Eco-evolutionary population simulation models are powerful new forecasting tools for exploring management strategies for climate change and other dynamic disturbance regimes. Additionally, eco-evo individual-based models (IBMs) are useful for investigating theoretical feedbacks ...

  9. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers

    PubMed Central

    2011-01-01

    Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. 
In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519

  10. Identifying Multiple Levels of Discussion-Based Teaching Strategies for Constructing Scientific Models

    ERIC Educational Resources Information Center

    Williams, Grant; Clement, John

    2015-01-01

    This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…

  11. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral with a fuzzy measure for multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
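    The Choquet integral underlying the multi-criteria step can be sketched as follows; the attribute names and fuzzy-measure values are hypothetical, and in practice the R package 'kappalab' estimates the measure from data:

```python
def choquet(x, mu):
    """Discrete Choquet integral of attribute scores x (dict name -> value)
    with respect to a fuzzy measure mu (dict frozenset -> weight), where
    mu[frozenset()] = 0 and mu of the full attribute set = 1."""
    names = sorted(x, key=x.get)          # attributes in ascending intensity
    total, prev = 0.0, 0.0
    for i, name in enumerate(names):
        coalition = frozenset(names[i:])  # attributes at least this intense
        total += (x[name] - prev) * mu[coalition]
        prev = x[name]
    return total

# toy 2-attribute measure; superadditive, so the attributes interact positively
mu = {frozenset(): 0.0,
      frozenset({"sweet"}): 0.3,
      frozenset({"flavor"}): 0.4,
      frozenset({"sweet", "flavor"}): 1.0}
print(choquet({"sweet": 5.0, "flavor": 7.0}, mu))  # 5*1.0 + 2*0.4 = 5.8
```

    When the measure is additive, the Choquet integral reduces to a weighted mean; departures from additivity are what capture attribute interaction effects on liking.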

  12. Base drag prediction on missile configurations

    NASA Technical Reports Server (NTRS)

    Moore, F. G.; Hymer, T.; Wilcox, F.

    1993-01-01

    New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.

  13. Enhancements to the KATE model-based reasoning system

    NASA Technical Reports Server (NTRS)

    Thomas, Stan J.

    1994-01-01

    KATE (Knowledge-based Autonomous Test Engineer) is a model-based software system developed in the Artificial Intelligence Laboratory at the Kennedy Space Center for monitoring, fault detection, and control of launch vehicles and ground support systems. This report describes two software efforts which enhance the functionality and usability of KATE. The first addition, a flow solver, adds to KATE a tool for modeling the flow of liquid in a pipe system. The second addition adds support for editing KATE knowledge base files to the Emacs editor. The body of this report discusses design and implementation issues having to do with these two tools. It will be useful to anyone maintaining or extending either the flow solver or the editor enhancements.

  14. Baldrige Theory into Practice: A Generic Model

    ERIC Educational Resources Information Center

    Arif, Mohammed

    2007-01-01

    Purpose: The education system globally has moved from a push-based or producer-centric system to a pull-based or customer-centric system. The Malcolm Baldrige Quality Award (MBQA) model is one of the latest additions to the pull-based models. The purpose of this paper is to develop a generic framework for MBQA that can be used by…

  15. Application of nonlinear adaptive motion washout to transport ground-handling simulation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Martin, D. J., Jr.

    1983-01-01

    The application of a nonlinear coordinated adaptive motion washout to the transport ground-handling environment is documented. Additions to both the aircraft math model and the motion washout system are discussed. The additions to the simulated-aircraft math model provided improved modeling fidelity for braking and reverse-thrust application, and the additions to the motion-base washout system allowed transition from the desired flight parameters to the less restrictive ground parameters of the washout.

  16. Enhancement of the Mechanical Properties of Basalt Fiber-Wood-Plastic Composites via Maleic Anhydride Grafted High-Density Polyethylene (MAPE) Addition.

    PubMed

    Chen, Jinxiang; Wang, Yong; Gu, Chenglong; Liu, Jianxun; Liu, Yufu; Li, Min; Lu, Yun

    2013-06-18

    This study investigated the mechanisms, using microscopy and strength testing approaches, by which the addition of maleic anhydride grafted high-density polyethylene (MAPE) enhances the mechanical properties of basalt fiber-wood-plastic composites (BF-WPCs). The maximum values of the specific tensile and flexural strengths are achieved at a MAPE content of 5%-8%. The elongation increases rapidly at first and then continues slowly. The nearly complete integration of the wood fiber with the high-density polyethylene upon MAPE addition to WPC is examined, and two models of interfacial behavior are proposed. We examined the physical significance of both interfacial models and their ability to accurately describe the effects of MAPE addition. The mechanism of formation of the Model I interface and the integrated matrix is outlined based on the chemical reactions that may occur between the various components as a result of hydrogen bond formation or based on the principle of compatibility, resulting from similar polarity. The Model I fracture occurred on the outer surface of the interfacial layer, visually demonstrating the compatibilization effect of MAPE addition.

  17. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455
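    The difference between the two proposed risk parameterizations can be illustrated with a toy biallelic marker (the allele labels are hypothetical):

```python
def additive_code(genotype):
    """Additive-effect model: code a genotype by its count of the variant
    allele (0, 1, 2), so the heterozygote effect is constrained to lie
    halfway between the two homozygotes."""
    return genotype.count("a")

def genotype_code(genotype):
    """Genotype-effect model: one indicator regressor per non-reference
    genotype, leaving dominance unconstrained."""
    copies = genotype.count("a")
    return (int(copies == 1), int(copies == 2))

for g in ("AA", "Aa", "aa"):
    print(g, additive_code(g), genotype_code(g))
```

    Fitting both codings, as the abstract suggests, is what lets the analysis distinguish additive from dominant action at a risk variant.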

  18. A novel model for through-silicon via (TSV) filling process simulation considering three additives and current density effect

    NASA Astrophysics Data System (ADS)

    Wang, Fuliang; Zhao, Zhipeng; Wang, Feng; Wang, Yan; Nie, Nantian

    2017-12-01

    Through-silicon via (TSV) filling by electrochemical deposition is still a challenge for 3D IC packaging, and three-component additive systems (accelerator, suppressor, and leveler) are commonly used in industry to achieve void-free filling. However, models considering three-additive systems and the current density effect have not been fully studied. In this paper, a novel three-component model was developed to study the TSV filling mechanism and process, in which the interaction behavior of the three additives (accelerator, suppressor, and leveler) was considered, and the adsorption, desorption, and consumption coefficients of the three additives changed with the current density. Based on this new model, the three filling types (seam void, ‘V’ shape, and keyhole) were simulated under different current density conditions, and the filling results were verified by experiments. The effects of the current density on the copper ion concentration, additive surface coverage, and local current density distribution during the TSV filling process were obtained. Based on the simulation and experimental results, the diffusion-adsorption-desorption-consumption competition behavior between the suppressor, the accelerator, and the leveler was discussed. The filling mechanisms under different current densities were also analyzed.

  19. Versatility of Cooperative Transcriptional Activation: A Thermodynamical Modeling Analysis for Greater-Than-Additive and Less-Than-Additive Effects

    PubMed Central

    Frank, Till D.; Carmody, Aimée M.; Kholodenko, Boris N.

    2012-01-01

    We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. 
Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive responses when transcription factors and RNA polymerase interact by means of three-body interactions. Overall, we show that versatility of transcriptional activation is brought about by nonlinearities of transcriptional response functions and interactions between transcription factors, RNA polymerase and DNA. PMID:22506020
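    A Shea-Ackers-style equilibrium sketch reproduces the qualitative behavior described above; all parameter names and values here are hypothetical illustrations, not the paper's exact model:

```python
def activity(qP, qA, qB, wA=1.0, wB=1.0, wAB=1.0, w3=1.0):
    """Equilibrium-statistical promoter model with two activators A and B.
    q* are binding weights; wA, wB are pairwise activator-RNAP interaction
    weights, wAB an activator-activator weight, and w3 a three-body
    RNAP-A-B term. Activity ~ probability that RNA polymerase is bound."""
    z_off = 1 + qA + qB + qA * qB * wAB                    # RNAP absent
    z_on = qP * (1 + qA * wA + qB * wB
                 + qA * qB * wAB * wA * wB * w3)           # RNAP present
    return z_on / (z_on + z_off)

def synergy(qP, qA, qB, **w):
    """Joint response minus the sum of the two individual increases over
    baseline: positive = greater-than-additive, negative = less-than-additive."""
    base = activity(qP, 0.0, 0.0)
    dA = activity(qP, qA, 0.0, **w) - base
    dB = activity(qP, 0.0, qB, **w) - base
    return (activity(qP, qA, qB, **w) - base) - (dA + dB)

print(synergy(0.1, 5.0, 5.0, wA=10.0, wB=10.0))   # positive: greater-than-additive
print(synergy(10.0, 5.0, 5.0, wA=10.0, wB=10.0))  # negative: saturation yields less-than-additive
```

    The sign flip between the two calls illustrates how nonlinearity (here, saturation of the bound fraction) alone can move a promoter between greater-than-additive and less-than-additive regimes.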

  20. Statistical virtual eye model based on wavefront aberration

    PubMed Central

    Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie

    2012-01-01

    Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112

  1. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants

    PubMed Central

    Yu, Zheng-Yong; Liu, Qiang; Liu, Yunhan

    2017-01-01

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487

  2. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-08-09

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi-Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings.
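    For reference, the Fatemi-Socie parameter discussed in the comparison has the standard critical-plane form shown below (the numeric values are hypothetical):

```python
def fatemi_socie(delta_gamma_max, sigma_n_max, sigma_y, k):
    """Fatemi-Socie critical-plane damage parameter:
    FS = (maximum shear strain amplitude) * (1 + k * sigma_n_max / sigma_y),
    where sigma_n_max is the peak normal stress on the critical plane and
    sigma_y the yield strength. The abstract's point is that k is not, in
    fact, a material constant."""
    return (delta_gamma_max / 2.0) * (1.0 + k * sigma_n_max / sigma_y)

# hypothetical values: shear strain range 0.01, normal stress 300 MPa,
# yield strength 500 MPa
print(fatemi_socie(0.01, 300.0, 500.0, k=0.5))   # 0.0065
print(fatemi_socie(0.01, 300.0, 500.0, k=1.0))   # 0.008
```

    The sensitivity of FS to k in this two-line example is exactly why a non-constant k complicates the model's use.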

  3. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. 
Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
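    The contrast between the two nulls can be sketched for stressors that act on species in a purely additive fashion (all species values are hypothetical):

```python
def compositional_null(control, effect_a, effect_b):
    """Compositional null model: combine the two stressors' effects
    additively for each species, floor abundances at zero, then aggregate
    to community biomass and species richness."""
    combined = [max(0.0, c + a + b)
                for c, a, b in zip(control, effect_a, effect_b)]
    return sum(combined), sum(1 for x in combined if x > 0)

def community_property_null(control, effect_a, effect_b):
    """Standard additive null applied to the aggregate property only."""
    return sum(control) + sum(effect_a) + sum(effect_b)

# hypothetical 3-species community with opposing per-species effects
control = [10.0, 5.0, 2.0]
eff_a = [-4.0, 1.0, -1.0]   # per-species biomass change under stressor A alone
eff_b = [-8.0, 2.0, 1.0]    # per-species biomass change under stressor B alone
print(compositional_null(control, eff_a, eff_b))      # (10.0, 2): species 1 is lost
print(community_property_null(control, eff_a, eff_b)) # 8.0: ignores the zero floor
```

    Even with purely additive species-level effects, the aggregate null diverges from the compositional prediction because a species cannot decline below zero abundance, which is the kind of misclassification the authors warn about.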

  4. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    PubMed Central

    Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.

    2012-01-01

    Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may have an important contribution to total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relation matrices were constructed from information of genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct a dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912
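    A compact numpy sketch of the relationship matrices: G follows VanRaden's first method, while the dominance coding shown is one common centered-heterozygosity construction; exact scaling conventions for D vary between papers, so treat it as an assumption:

```python
import numpy as np

def relationship_matrices(M):
    """Build genomic additive (G) and dominance (D) relationship matrices
    from a genotype matrix M (individuals x SNPs, coded 0/1/2 copies of
    the second allele)."""
    p = M.mean(axis=0) / 2.0                 # per-SNP allele frequencies
    Z = M - 2.0 * p                          # centered additive coding
    G = Z @ Z.T / np.sum(2.0 * p * (1.0 - p))
    het = (M == 1).astype(float)             # heterozygote indicator
    H = het - 2.0 * p * (1.0 - p)            # centered at HWE expectation
    scale = np.sum(2.0 * p * (1.0 - p) * (1.0 - 2.0 * p * (1.0 - p)))
    D = H @ H.T / scale
    return G, D

# tiny 3-individual, 3-SNP demonstration
M = np.array([[0., 1., 2.],
              [1., 1., 0.],
              [2., 0., 1.]])
G, D = relationship_matrices(M)
print(np.round(G, 3))
print(np.round(D, 3))
```

    G and D then enter the mixed model as covariance structures for the additive and dominance random effects, as in models MAD and MAED above.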

  5. An original traffic additional emission model and numerical simulation on a signalized road

    NASA Astrophysics Data System (ADS)

    Zhu, Wen-Xing; Zhang, Jing-Yu

    2017-02-01

    Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional emission model was presented to calculate a vehicle's emission due to signal control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling, at a signalized intersection. Simulations were conducted under two situations: a single intersection and two adjacent intersections, each with its respective control policy. The results are in good agreement with the theoretical analysis. It is also shown that the additional emission model may be used to design signal control policies in modern traffic systems to mitigate serious environmental problems.
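    The VSP part of the decomposition can be sketched with the widely cited light-duty formula (the split into basic and additional parts below is a hypothetical illustration of the paper's idea, not its exact model):

```python
def vsp(v, a, grade=0.0):
    """Vehicle Specific Power in kW/tonne using the widely cited
    light-duty coefficients (Jimenez-Palacios); v in m/s, a in m/s^2,
    grade as a road-slope fraction."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

def additional_vsp(v, a, v_cruise):
    """Power demand beyond steady cruising, attributed to signal-induced
    speed changes (accelerating away from a stop line, decelerating, idling)."""
    return vsp(v, a) - vsp(v_cruise, 0.0)

print(vsp(15.0, 0.0))                  # steady cruise at 15 m/s
print(additional_vsp(5.0, 1.5, 15.0))  # accelerating from a stop line
```

    Mapping VSP to emission rates per VSP bin then gives the basic emission for uninterrupted cruising and the additional emission caused by the signal cycle.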

  6. A component-based, integrated spatially distributed hydrologic/water quality model: AgroEcoSystem-Watershed (AgES-W) overview and application

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...

  7. Business model for sensor-based fall recognition systems.

    PubMed

    Fachinger, Uwe; Schöpke, Birte

    2014-01-01

    AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed parallel to the R&D process with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given and the mutual exclusive alternatives for each partial model are presented. The partial models are interconnected and the combinations of compatible alternatives lead to consistent alternative business models. However, in the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.

  8. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered either as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on either the per cent of brands or the per cent of eating occasions within a food group that contained the additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2^3) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
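
The model structure described above (food intake x presence probability x concentration, with lognormal inputs) can be sketched as a small Monte Carlo simulation; all distribution parameters below are illustrative, not taken from the reference database:

```python
# Sketch of one probabilistic intake-model combination: daily food intake
# and additive concentration drawn from lognormal distributions, presence
# modelled as the probability that the consumed brand carries the additive.
import random

random.seed(0)

def simulate_intakes(n, food_mu, food_sigma, p_present, conc_mu, conc_sigma):
    intakes = []
    for _ in range(n):
        food = random.lognormvariate(food_mu, food_sigma)   # g/day of the food group
        present = random.random() < p_present               # brand contains additive?
        conc = random.lognormvariate(conc_mu, conc_sigma)   # mg additive per g food
        intakes.append(food * conc if present else 0.0)
    return intakes

intakes = simulate_intakes(10_000, food_mu=4.0, food_sigma=0.5,
                           p_present=0.3, conc_mu=-2.0, conc_sigma=0.4)
mean_intake = sum(intakes) / len(intakes)
p95_intake = sorted(intakes)[int(0.95 * len(intakes))]
```

The resulting intake distribution has a large mass at zero (occasions where the additive is absent) and a heavy upper tail, which is why upper percentiles rather than means are compared against conservative MPL-based estimates.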

  9. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both the network size and the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies of the neural control of cognitive robots and systems.
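
The "step-by-step integration" the authors implement in hardware can be illustrated in software as a forward-Euler update of a single Hodgkin-Huxley neuron with the classic squid-axon parameters; the CORDIC replacement of exp() in hardware arithmetic is not shown here:

```python
# Sketch: explicit (forward-Euler) step-by-step integration of one
# Hodgkin-Huxley neuron. Parameters are the standard squid-axon values.
import math

def hh_simulate(i_ext=10.0, dt=0.01, t_end=50.0):
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387           # reversal potentials, mV
    v, m, h, n = -65.0, 0.053, 0.596, 0.317         # resting state
    trace = []
    for _ in range(int(round(t_end / dt))):
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m ** 3 * h * (v - e_na) + g_k * n ** 4 * (v - e_k)
                 + g_l * (v - e_l))
        # step-by-step (forward Euler) integration of the four state variables
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace

trace = hh_simulate()
```

With a 10 uA/cm^2 injected current the membrane potential fires repetitive action potentials, peaking well above 0 mV; this is exactly the per-timestep update a hardware pipeline evaluates for each neuron.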

  10. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    NASA Astrophysics Data System (ADS)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator was established. Additionally, the Popov hyperstability principle was used in the design of the adaptive law to guarantee accurate convergence. In order to reduce the oscillation of the identification results, this work introduces a first-order low-pass digital filter to improve the precision of the parameter estimates. The proposed scheme was then applied to an SPM synchronous motor control system without any additional circuits and implemented using a DSP TMS320LF2812. The experimental results confirm the effectiveness of the proposed method.
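
The two ingredients, a gradient adaptive law and a first-order low-pass digital filter, can be sketched in miniature for a single hypothetical parameter (a resistance in v = R*i); the gains, noise level, and excitation signal are illustrative, not the paper's motor model:

```python
# Sketch: a gradient (MIT-rule-style) adaptive law identifying one
# parameter online, followed by a first-order low-pass digital filter
# that smooths the oscillating raw estimate.
import math
import random

random.seed(1)

r_true = 2.5          # ohms: the parameter to identify
theta = 0.0           # raw online estimate
theta_f = 0.0         # low-pass-filtered estimate
gamma, alpha = 0.05, 0.1

for k in range(500):
    i = 1.0 + 0.5 * math.sin(0.1 * k)        # persistently exciting current
    v = r_true * i + random.gauss(0.0, 0.2)  # noisy voltage measurement
    theta += gamma * i * (v - theta * i)     # gradient adaptive update
    theta_f += alpha * (theta - theta_f)     # y[k] = y[k-1] + a*(x[k] - y[k-1])
```

Both estimates converge near the true value; the filtered one trades a small lag for reduced oscillation, which is the role the filter plays in the identification scheme.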

  11. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    NASA Astrophysics Data System (ADS)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction costs based on uncertainty theory. In the portfolio selection problem, the returns of securities and the liquidity of assets are treated as uncertain variables because of incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Within the mean-variance framework, we analyze the characteristics of the portfolio frontier in the presence of independent additive background risk. In addition, we discuss the effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
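
The effect of independent additive background risk can be illustrated with a toy two-asset mean-variance trade-off: because independent background risk adds a constant to portfolio variance, it shifts the frontier without moving the optimal weight. All numbers below are hypothetical, not the paper's uncertain-variable model:

```python
# Sketch: two-asset mean-variance objective with a proportional
# transaction cost and an additive, independent background-risk variance.

def objective(w, lam=2.0, w0=0.5, cost=0.002, bg_var=0.0):
    mu1, mu2 = 0.10, 0.05          # expected returns (toy values)
    s1, s2 = 0.20, 0.10            # return standard deviations
    mean = w * mu1 + (1 - w) * mu2 - cost * abs(w - w0)
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + bg_var
    return mean - lam * var

grid = [k / 1000 for k in range(1001)]
best_no_bg = max(grid, key=lambda w: objective(w, bg_var=0.0))
best_bg = max(grid, key=lambda w: objective(w, bg_var=0.04))
```

The grid search finds the same optimal weight with and without background risk, while the achieved objective value drops by a constant, i.e. the frontier shifts down but its shape (and the selected portfolio) is unchanged.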

  12. Spacecraft Dynamics Should be Considered in Kalman Filter Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Yang, Yaguang; Zhou, Zhiqiang

    2016-01-01

    Kalman filter based spacecraft attitude estimation has been used in some high-profile missions and has been widely discussed in the literature. While some models in spacecraft attitude estimation include the spacecraft dynamics, most do not. To the best of our knowledge, there is no comparison of which model is the better choice. In this paper, we discuss the reasons why spacecraft dynamics should be considered in the Kalman filter based spacecraft attitude estimation problem. We also propose a reduced quaternion spacecraft dynamics model which admits additive noise. The geometry of the reduced quaternion model and the additive noise is discussed. This treatment is more elegant mathematically and easier computationally. We use simulation examples to verify our claims.
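
The role of a dynamics model with additive noise can be sketched with a scalar Kalman filter: the predict step propagates an angle through the dynamics (rate input plus additive process noise) and the update corrects it with a noisy angle measurement. All noise levels are illustrative, not the paper's quaternion model:

```python
# Sketch: scalar Kalman filter whose predict step uses a dynamics model;
# process noise q and measurement noise r both enter additively.
import random

random.seed(4)

def kf_step(x, p, rate, z, dt, q, r):
    x_pred = x + rate * dt          # dynamics-model propagation
    p_pred = p + q                  # additive process noise
    k = p_pred / (p_pred + r)       # Kalman gain (additive measurement noise)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

dt, q, r = 0.1, 1e-4, 0.01
x, p = 0.0, 1.0
true = 0.0
for _ in range(200):
    true += 0.1 * dt                         # true angle, constant rate
    z = true + random.gauss(0.0, r ** 0.5)   # noisy angle measurement
    x, p = kf_step(x, p, rate=0.1, z=z, dt=dt, q=q, r=r)
```

Because the dynamics are trusted (small q), the filter's steady-state error falls well below the raw measurement noise, which is the practical argument for including a dynamics model in the estimator.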

  13. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With the increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has a substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
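
The situation-dependent correction idea can be sketched with synthetic data: compute each model's error separately per weather situation, blend with inverse-MSE weights per situation, and compare against simple equal-weight averaging. The two-model, two-situation setup below is a toy stand-in for the paper's framework:

```python
# Sketch: situation-dependent blending of two forecast models with
# inverse-MSE weights computed per weather "situation"; data are synthetic.
import random

random.seed(2)

def mse(pairs):
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# Synthetic truth and two models: A is accurate in "clear", B in "cloudy".
data = []
for _ in range(2000):
    clear = random.random() < 0.5
    truth = random.uniform(0.0, 100.0)
    model_a = truth + random.gauss(0.0, 2.0 if clear else 10.0)
    model_b = truth + random.gauss(0.0, 10.0 if clear else 2.0)
    data.append((clear, truth, model_a, model_b))

train, test_set = data[:1000], data[1000:]

weights = {}
for situation in (True, False):
    err_a = mse([(a, t) for c, t, a, b in train if c == situation])
    err_b = mse([(b, t) for c, t, a, b in train if c == situation])
    weights[situation] = (1.0 / err_a) / (1.0 / err_a + 1.0 / err_b)

def blend(clear, a, b):
    w = weights[clear]
    return w * a + (1.0 - w) * b

blend_err = mse([(blend(c, a, b), t) for c, t, a, b in test_set])
equal_err = mse([(0.5 * (a + b), t) for c, t, a, b in test_set])
```

Because each model's error depends strongly on the situation, the situation-aware weights beat the equal-weight ensemble by a wide margin, mirroring the paper's conclusion.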

  14. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model based reasoning community are challenged. In particular, Davis' claim that model based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model based reasoning.

  15. Enhancement of the Mechanical Properties of Basalt Fiber-Wood-Plastic Composites via Maleic Anhydride Grafted High-Density Polyethylene (MAPE) Addition

    PubMed Central

    Chen, Jinxiang; Wang, Yong; Gu, Chenglong; Liu, Jianxun; Liu, Yufu; Li, Min; Lu, Yun

    2013-01-01

    This study investigated the mechanisms, using microscopy and strength testing approaches, by which the addition of maleic anhydride grafted high-density polyethylene (MAPE) enhances the mechanical properties of basalt fiber-wood-plastic composites (BF-WPCs). The maximum values of the specific tensile and flexural strengths are achieved at a MAPE content of 5%–8%. The elongation increases rapidly at first and then increases slowly. The nearly complete integration of the wood fiber with the high-density polyethylene upon MAPE addition to the WPC is examined, and two models of interfacial behavior are proposed. We examined the physical significance of both interfacial models and their ability to accurately describe the effects of MAPE addition. The mechanism of formation of the Model I interface and the integrated matrix is outlined based on the chemical reactions that may occur between the various components, as a result of hydrogen bond formation or based on the principle of compatibility resulting from similar polarity. The Model I fracture occurred on the outer surface of the interfacial layer, visually demonstrating the compatibilization effect of MAPE addition. PMID:28809285

  16. Evolution of solidification texture during additive manufacturing.

    PubMed

    Wei, H L; Mazumder, J; DebRoy, T

    2015-11-10

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because texture affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and on competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Therefore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during additive manufacturing but also serve as a basis for customizing solidification textures, which are important for the properties and performance of components.

  17. Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities for FY 2017, SNF Value-Based Purchasing Program, SNF Quality Reporting Program, and SNF Payment Models Research. Final rule.

    PubMed

    2016-08-05

    This final rule updates the payment rates used under the prospective payment system (PPS) for skilled nursing facilities (SNFs) for fiscal year (FY) 2017. In addition, it specifies a potentially preventable readmission measure for the Skilled Nursing Facility Value-Based Purchasing Program (SNF VBP), and implements requirements for that program, including performance standards, a scoring methodology, and a review and correction process for performance information to be made public, aimed at implementing value-based purchasing for SNFs. Additionally, this final rule includes additional polices and measures in the Skilled Nursing Facility Quality Reporting Program (SNF QRP). This final rule also responds to comments on the SNF Payment Models Research (PMR) project.

  18. A MIXTURE OF SEVEN ANTIANDROGENIC COMPOUNDS ELICITS ADDITIVE EFFECTS ON THE MALE RAT REPRODUCTIVE TRACT THAT CORRESPOND TO MODELED PREDICTIONS

    EPA Science Inventory

    The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...

  19. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.

  20. Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.

    PubMed

    Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard

    2014-01-01

    To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate, since graphs representing binding sites on a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used, building upon so-called pseudocenters. Consequently, structural information is lost, since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which leads to a gain of information and allows for much faster but still very accurate comparisons between different structures.

  1. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  2. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua

    2015-07-01

    A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and of turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with the LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the MCT mixed SGS model performs better as computational parameters such as the Reynolds number are increased. For scientific and engineering research, turbulent flows at high Reynolds numbers are of interest, so the MCT model can be a more suitable model for the LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  3. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
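
The core of boosting for quantile regression is a functional gradient step on the pinball (check) loss, whose negative gradient at each observation is tau - 1{y < f(x)}. With the learner reduced to a constant predictor, the boosting steps simply converge to the empirical tau-quantile; the data and learning rate below are illustrative:

```python
# Sketch: the gradient step behind quantile boosting. For the pinball
# loss the negative gradient at observation y_i is tau - 1{y_i < f};
# repeated steps drive f toward the empirical tau-quantile.
import random

random.seed(3)

tau, lr = 0.9, 0.5
y = [random.gauss(25.0, 3.0) for _ in range(5000)]   # e.g. BMI-like values
f = sum(y) / len(y)                                  # initialise at the mean

for _ in range(200):
    # mean negative gradient of the pinball loss at the current fit
    g = sum(tau - (yi < f) for yi in y) / len(y)
    f += lr * g                                      # functional gradient step

empirical = sorted(y)[int(tau * len(y))]
```

In the full structured additive model, each step fits a base learner (linear term, smooth function, or individual-specific effect) to these gradients instead of a constant, which is how the component-wise algorithm selects and shrinks effects.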

  4. PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR HUMAN EXPOSURES TO METHYL TERTIARY-BUTYL ETHER

    EPA Science Inventory

    Humans can be exposed by inhalation, ingestion, or dermal absorption to methyl tertiary-butyl ether (MTBE), an oxygenated fuel additive, from contaminated water sources. The purpose of this research was to develop a physiologically based pharmacokinetic model describing in human...

  5. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of Hodgkin-Huxley-based (H-H) model of a neural network on FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is H-H model complexity that puts limits on the network size and on the execution speed. However, basics of the original model cannot be compromised when effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed different techniques such as sharing resources to preserve the details of model as well as increasing the network size in addition to keeping the network execution speed close to real time while having high precision. Implementation of a two mini-columns network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristic of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. Additional to inherent properties of FPGA, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for study on neural control of cognitive robots and systems as well. PMID:25484854

  6. Ecotoxicological assessment of oil-based paint using three-dimensional multi-species bio-testing model: pre- and post-bioremediation analysis.

    PubMed

    Phulpoto, Anwar Hussain; Qazi, Muneer Ahmed; Haq, Ihsan Ul; Phul, Abdul Rahman; Ahmed, Safia; Kanhar, Nisar Ahmed

    2018-06-01

    The present study validates the oil-based paint bioremediation potential of Bacillus subtilis NAP1 through ecotoxicological assessment using a three-dimensional multi-species bio-testing model. The model included bioassays to determine the phytotoxic, cytotoxic, and antimicrobial effects of oil-based paint. Additionally, the antioxidant activity of pre- and post-bioremediation samples was also determined to confirm detoxification. The pre-bioremediation samples of oil-based paint displayed significant toxicity against all the tested life forms. Post-bioremediation, however, the cytotoxic effect against Artemia salina revealed substantial detoxification of the oil-based paint, with an LD50 of 121 μl ml-1 (without glucose) and > 400 μl ml-1 (with glucose). Similarly, the reduction in toxicity toward Raphanus raphanistrum seed germination (%FG = 98 to 100%) was also evidence of successful detoxification under the experimental conditions. Moreover, the toxicity against the test bacterial and fungal strains was completely removed after bioremediation. In addition, the post-bioremediation samples showed reduced antioxidant activities (% scavenging = 23.5 ± 0.35 and 28.9 ± 2.7, without and with glucose, respectively). Convincingly, the present multi-species bio-testing model, together with the antioxidant studies, could serve as a validation tool for bioremediation experiments, especially in middle- and low-income countries.

  7. Ground-Based Telescope Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
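
A single-variable model of the kind mentioned above is typically a power law, cost = a * D^b, fitted by least squares in log-log space; the telescope data points below are invented for illustration, not the paper's dataset:

```python
# Sketch: fitting a power-law cost model cost = a * D^b by ordinary
# least squares on the log-transformed data.
import math

# hypothetical (aperture diameter in m, cost in M$) pairs
diameters = [2.4, 3.5, 4.2, 8.1, 10.0]
costs = [60.0, 110.0, 140.0, 400.0, 550.0]

xs = [math.log(d) for d in diameters]
ys = [math.log(c) for c in costs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# least-squares slope and intercept in log-log space: log c = log a + b log D
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

def predict(d):
    """Predicted cost (M$) for aperture diameter d (m)."""
    return a * d ** b
```

The fitted exponent b is the cost-scaling law; whether recent technology has flattened the historical curve amounts to asking whether b (or a) has dropped for newer telescopes.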

  8. The Wind Forecast Improvement Project (WFIP). A Public/Private Partnership for Improving Short Term Wind Energy Forecasts and Quantifying the Benefits of Utility Operations -- the Northern Study Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, Cathy

    2014-04-30

    This report contains the results from research aimed at improving short-range (0-6 hour) hub-height wind forecasts in the NOAA weather forecast models through additional data assimilation and model physics improvements for use in wind energy forecasting. Additional meteorological observing platforms, including wind profilers, sodars, and surface stations, were deployed for this study by NOAA and DOE, and additional meteorological data at or near wind turbine hub height were provided by South Dakota State University and WindLogics/NextEra Energy Resources over a large geographical area in the U.S. Northern Plains for assimilation into NOAA research weather forecast models. The resulting improvements in wind energy forecasts based on the research weather forecast models (with the additional data assimilation and model physics improvements) were examined in many different ways and compared with wind energy forecasts based on the current operational weather forecast models to quantify the forecast improvements important to power grid system operators and wind plant owners/operators participating in energy markets. Two operational weather forecast models (OP_RUC, OP_RAP) and two research weather forecast models (ESRL_RAP, HRRR) were used as the base wind forecasts for generating several different wind power forecasts for the NextEra Energy wind plants in the study area. Power forecasts were generated from the wind forecasts in a variety of ways, from very simple to quite sophisticated, as they might be used by a wide range of both general users and commercial wind energy forecast vendors. The error characteristics of each of these types of forecasts were examined and quantified using bulk error statistics for both the local wind plant and the system aggregate forecasts. The wind power forecast accuracy was also evaluated separately for high-impact wind energy ramp events.
    The overall bulk error statistics, calculated over the first six hours of the forecasts at both the individual wind plant and the system-wide aggregate level over the one-year study period, showed that the research weather-model-based power forecasts (all types) had lower overall error rates than the current operational weather-model-based power forecasts. The bulk error statistics of the various model-based power forecasts were also calculated by season and by model runtime/forecast hour, as power system operations are more sensitive to wind energy forecast errors during certain times of year and certain times of day. The results showed significant differences in seasonal forecast errors between the various model-based power forecasts. The analysis of forecast errors by model runtime and forecast hour showed that the errors were largest during the times of day that have increased significance to power system operators (the overnight hours and the morning/evening boundary layer transition periods), but the research weather-model-based power forecasts showed improvement over the operational weather-model-based power forecasts at these times.

  9. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE PAGES

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.; ...

    2017-09-22

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard that describes regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction-controlled from the keyhole-melting-controlled regimes. This statistical framework is shown to be robust even for cases where the experimental training data might be suboptimal in quality, provided appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally successfully be used for building a surrogate model, which is beneficial since simulations are becoming more efficient and are more practical for studying the response of different materials than re-tooling an AM machine for a new material powder.
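
A minimal version of such a surrogate can be sketched with a one-dimensional Gaussian process (RBF kernel, fixed hyperparameters, stdlib-only linear solver) mapping laser power to melt-pool depth; the training points below are invented, not L-PBF data:

```python
# Sketch: GP regression posterior mean. With near-zero noise the GP
# interpolates the training points and smooths between them.
import math

def rbf(x1, x2, length=50.0):
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(a_mat, rhs):
    """Solve a_mat @ x = rhs by Gaussian elimination with partial pivoting."""
    n = len(a_mat)
    m = [row[:] + [r] for row, r in zip(a_mat, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

powers = [100.0, 150.0, 200.0, 250.0, 300.0]   # laser power, W (hypothetical)
depths = [30.0, 55.0, 90.0, 140.0, 210.0]      # melt-pool depth, um (hypothetical)
noise = 1e-6                                    # jitter for numerical stability

k_mat = [[rbf(pi, pj) + (noise if i == j else 0.0)
          for j, pj in enumerate(powers)] for i, pi in enumerate(powers)]
alpha = solve(k_mat, depths)

def predict(p):
    """Posterior-mean melt-pool depth at laser power p."""
    return sum(rbf(p, pi) * ai for pi, ai in zip(powers, alpha))
```

The paper's surrogate is the multi-dimensional analogue of this (power, scan speed, and beam size as inputs, with learned hyperparameters); the predicted depth is then thresholded to separate conduction from keyhole regimes on the power-speed diagram.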

  10. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard that describes regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction-controlled from the keyhole-melting-controlled regimes. This statistical framework is shown to be robust even for cases where the experimental training data might be suboptimal in quality, provided appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally successfully be used for building a surrogate model, which is beneficial since simulations are becoming more efficient and are more practical for studying the response of different materials than re-tooling an AM machine for a new material powder.

  11. A Meta-Analysis of Video-Modeling Based Interventions for Reduction of Challenging Behaviors for Students with EBD

    ERIC Educational Resources Information Center

    Losinski, Mickey; Wiseman, Nicole; White, Sherry A.; Balluch, Felicity

    2016-01-01

    The current study examined the use of video modeling (VM)-based interventions to reduce the challenging behaviors of students with emotional or behavioral disorders. Each study was evaluated using Council for Exceptional Children's (CEC's) quality indicators for evidence-based practices. In addition, study effects were calculated along the three…

  12. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. 
The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representations were achieved for low flow statistics in particular. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  13. Physiologically Based Pharmacokinetic Modeling Suggests Limited Drug–Drug Interaction Between Clopidogrel and Dasabuvir

    PubMed Central

    Fu, W; Badri, P; Bow, DAJ; Fischer, V

    2017-01-01

    Dasabuvir, a nonnucleoside NS5B polymerase inhibitor, is a sensitive substrate of cytochrome P450 (CYP) 2C8 with a potential for drug–drug interaction (DDI) with clopidogrel. A physiologically based pharmacokinetic (PBPK) model was developed for dasabuvir to evaluate the DDI potential with clopidogrel, the acyl‐β‐D glucuronide metabolite of which has been reported as a strong mechanism‐based inhibitor of CYP2C8 based on an interaction with repaglinide. In addition, the PBPK model for clopidogrel and its metabolite were updated with additional in vitro data. Sensitivity analyses using these PBPK models suggested that CYP2C8 inhibition by clopidogrel acyl‐β‐D glucuronide may not be as potent as previously suggested. The dasabuvir and updated clopidogrel PBPK models predict a moderate increase of 1.5–1.9‐fold for Cmax and 1.9–2.8‐fold for AUC of dasabuvir when coadministered with clopidogrel. While the PBPK results suggest there is a potential for DDI between dasabuvir and clopidogrel, the magnitude is not expected to be clinically relevant. PMID:28411400

  14. Evolution of solidification texture during additive manufacturing

    PubMed Central

    Wei, H. L.; Mazumder, J.; DebRoy, T.

    2015-01-01

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Therefore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during the additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components. PMID:26553246

  15. Evolution of solidification texture during additive manufacturing

    DOE PAGES

    Wei, H. L.; Mazumder, J.; DebRoy, T.

    2015-11-10

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Furthermore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components.

  16. TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases

    DTIC Science & Technology

    1990-09-01

    IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of... conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1)... various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version

  17. Models for Delivering School-Based Dental Care.

    ERIC Educational Resources Information Center

    Albert, David A.; McManus, Joseph M.; Mitchell, Dennis A.

    2005-01-01

    School-based health centers (SBHCs) often are located in high-need schools and communities. Dental service is frequently an addition to existing comprehensive services, functioning in a variety of models, configurations, and locations. SBHCs are indicated when parents have limited financial resources or inadequate health insurance, limiting…

  18. Specific heat capacity of molten salt-based alumina nanofluid.

    PubMed

    Lu, Ming-Chang; Huang, Chien-Hsun

    2013-06-21

    There is no consensus on the effect of nanoparticle (NP) addition on the specific heat capacity (SHC) of fluids. In addition, the predictions from the existing model have a large discrepancy from the measured SHCs in nanofluids. We show that the SHC of the molten salt-based alumina nanofluid decreases with decreasing particle size and increasing particle concentration. The NP size-dependent SHC results from an augmentation of the nanolayer effect as particle size decreases. A model incorporating the nanolayer effect, which supports the experimental results, was proposed.
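
    The "existing model" such measurements are usually compared against is the simple thermal-equilibrium (mass-weighted) mixture rule, which contains no particle-size dependence at all. A minimal sketch, using rough illustrative property values that are assumptions rather than this paper's measurements:

```python
def nanofluid_shc(phi, rho_np, c_np, rho_f, c_f):
    """Thermal-equilibrium mixture rule for nanofluid specific heat capacity.

    phi    -- nanoparticle volume fraction
    rho_np -- nanoparticle density (kg/m^3), c_np its SHC (J/kg K)
    rho_f  -- base-fluid density (kg/m^3), c_f its SHC (J/kg K)
    """
    return (phi * rho_np * c_np + (1.0 - phi) * rho_f * c_f) / (
        phi * rho_np + (1.0 - phi) * rho_f
    )

# Illustrative placeholders: ~1 vol% alumina (c ~ 880 J/kg K) dispersed in a
# molten salt (c ~ 1550 J/kg K); not values taken from the abstract above.
c_mix = nanofluid_shc(0.01, 3970.0, 880.0, 1800.0, 1550.0)
```

    Because this rule is independent of particle size, any size-dependent SHC, such as the nanolayer effect reported above, necessarily falls outside it.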

  19. Specific heat capacity of molten salt-based alumina nanofluid

    PubMed Central

    2013-01-01

    There is no consensus on the effect of nanoparticle (NP) addition on the specific heat capacity (SHC) of fluids. In addition, the predictions from the existing model have a large discrepancy from the measured SHCs in nanofluids. We show that the SHC of the molten salt-based alumina nanofluid decreases with decreasing particle size and increasing particle concentration. The NP size-dependent SHC results from an augmentation of the nanolayer effect as particle size decreases. A model incorporating the nanolayer effect, which supports the experimental results, was proposed. PMID:23800321

  20. Large eddy simulations of time-dependent and buoyancy-driven channel flows

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1993-01-01

    The primary goal of this work has been to assess the performance of the dynamic SGS model in the large eddy simulation (LES) of channel flows in a variety of situations, viz., in temporal development of channel flow turned by a transverse pressure gradient and especially in buoyancy-driven turbulent flows such as Rayleigh-Benard and internally heated channel convection. For buoyancy-driven flows, there are additional buoyant terms that are possible in the base models, and one objective has been to determine if the dynamic SGS model results are sensitive to such terms. The ultimate goal is to determine the minimal base model needed in the dynamic SGS model to provide accurate results in flows with more complicated physical features. In addition, a program of direct numerical simulation (DNS) of fully compressible channel convection has been undertaken to determine stratification and compressibility effects. These simulations are intended to provide a comparative base for performing the LES of compressible (or highly stratified, pseudo-compressible) convection at high Reynolds number in the future.

  1. Analysis of the Best-Fit Sky Model Produced Through Redundant Calibration of Interferometers

    NASA Astrophysics Data System (ADS)

    Storer, Dara; Pober, Jonathan

    2018-01-01

    21 cm cosmology provides unique insights into the formation of stars and galaxies in the early universe, and particularly the Epoch of Reionization. Detection of the 21 cm line is challenging because it is generally 4-5 orders of magnitude weaker than the emission from foreground sources, and therefore the instruments used for detection must be carefully designed and calibrated. 21 cm cosmology is primarily conducted using interferometers, which are difficult to calibrate because of their complex structure. Here I explore the relationship between sky-based calibration, which relies on an accurate and comprehensive sky model, and redundancy-based calibration, which makes use of redundancies in the orientation of the interferometer's dishes. In addition to producing calibration parameters, redundant calibration also produces a best-fit model of the sky. In this work I examine that sky model and explore the possibility of using that best-fit model as an additional input to improve sky-based calibration.

  2. A critical issue in model-based inference for studying trait-based community assembly and a solution.

    PubMed

    Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane

    2017-01-01

    Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set as 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. 
For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.

  3. Reevaluation of a walleye (Sander vitreus) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; Wang, Chunfang

    2013-01-01

    Walleye (Sander vitreus) is an important sport fish throughout much of North America, and walleye populations support valuable commercial fisheries in certain lakes as well. Using a corrected algorithm for balancing the energy budget, we reevaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks each day during a 126-day experiment. Feeding rates ranged from 1.4 to 1.7 % of walleye body weight per day. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with observed monthly consumption, we concluded that the bioenergetics model estimated food consumption by walleye without any significant bias. Similarly, based on a statistical comparison of bioenergetics model predictions of weight at the end of the monthly test period with observed weight, we concluded that the bioenergetics model predicted walleye growth without any detectable bias. In addition, the bioenergetics model predictions of cumulative consumption over the 126-day experiment differed from observed cumulative consumption by less than 10 %. Although additional laboratory and field testing will be needed to fully evaluate model performance, based on our laboratory results, the Wisconsin bioenergetics model for walleye appears to be providing unbiased predictions of food consumption.

  4. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
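
    The decimated-DWT-plus-linear-regression pipeline described in this abstract can be sketched with a hand-rolled orthonormal Haar transform and least squares on the highest-power coefficients. The wavelet family (Haar), the toy trial data, and the target score below are illustrative assumptions; the report does not specify these details at this level.

```python
import numpy as np

def haar_dwt(signal, levels):
    """Decimated orthonormal Haar DWT: detail bands plus final approximation."""
    bands = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        bands.append((even - odd) / np.sqrt(2.0))  # detail coefficients
        a = (even + odd) / np.sqrt(2.0)            # coarser approximation
    bands.append(a)
    return bands

# Toy "ERP" trials (8 trials x 16 samples) and a toy performance score.
rng = np.random.default_rng(1)
erps = rng.normal(size=(8, 16))
scores = erps[:, :4].mean(axis=1)

# Use the highest-mean-power DWT coefficients as regression features.
feats = np.array([np.concatenate(haar_dwt(trial, 3)) for trial in erps])
top = np.argsort((feats ** 2).mean(axis=0))[-6:]
beta, *_ = np.linalg.lstsq(feats[:, top], scores, rcond=None)
```

    With an orthonormal wavelet the transform preserves signal energy, which is what makes "high-power coefficients" a meaningful selection rule.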

  5. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.

  6. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template.

    PubMed

    Jewkes, Rachel; Burton, Hanna E; Espino, Daniel M

    2018-02-02

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  7. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    PubMed Central

    Jewkes, Rachel; Burton, Hanna E.; Espino, Daniel M.

    2018-01-01

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries. PMID:29393899

  8. 42 CFR § 512.307 - Subsequent calculations.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.307... the initial NPRA, using claims data and non-claims-based payment data available at that time, to account for final claims run-out, final changes in non-claims-based payment data, and any additional...

  9. Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling

    ERIC Educational Resources Information Center

    Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.

    2018-01-01

    The National Research Council framework for science education and the Next Generation Science Standards have developed a need for additional research and development of curricula that is both technologically model-based and includes engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…

  10. Colors of attraction: Modeling insect flight to light behavior.

    PubMed

    Donners, Maurice; van Grunsven, Roy H A; Groenendijk, Dick; van Langevelde, Frank; Bikker, Jan Willem; Longcore, Travis; Veenendaal, Elmar

    2018-06-26

    Light sources attract nocturnal flying insects, but some lamps attract more insects than others. The relation between the properties of a light source and the number of attracted insects is, however, poorly understood. We developed a model to quantify the attractiveness of light sources based on the spectral output. This model is fitted using data from field experiments that compare a large number of different light sources. We validated this model using two additional datasets, one for all insects and one excluding the numerous Diptera. Our model facilitates the development and application of light sources that attract fewer insects without the need for extensive field tests and it can be used to correct for spectral composition when formulating hypotheses on the ecological impact of artificial light. In addition, we present a tool allowing the conversion of the spectral output of light sources to their relative insect attraction based on this model. © 2018 Wiley Periodicals, Inc.
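
    A model of this general shape, lamp attractiveness as a sensitivity-weighted integral over the lamp's spectral output, can be sketched in a few lines. The Gaussian lamp spectrum and boxcar action spectrum below are stand-in assumptions, not the response curves actually fitted in the study.

```python
import numpy as np

def relative_attraction(wavelengths_nm, spectral_output, action_spectrum):
    """Trapezoid-rule integral of lamp output weighted by an insect
    spectral-sensitivity (action) curve."""
    f = np.asarray(spectral_output) * np.asarray(action_spectrum)
    w = np.asarray(wavelengths_nm)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))

wl = np.linspace(300.0, 700.0, 401)
lamp = np.exp(-0.5 * ((wl - 450.0) / 30.0) ** 2)   # toy blue-ish lamp spectrum
sensitivity = np.where(wl < 500.0, 1.0, 0.2)       # toy UV/blue-biased curve
score = relative_attraction(wl, lamp, sensitivity)
```

    Comparing such scores across lamps with equal total output is the kind of spectral correction the abstract's conversion tool provides.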

  11. All-Atom Polarizable Force Field for DNA Based on the Classical Drude Oscillator Model

    PubMed Central

    Savelyev, Alexey; MacKerell, Alexander D.

    2014-01-01

    Presented is a first generation atomistic force field for DNA in which electronic polarization is modeled based on the classical Drude oscillator formalism. The DNA model is based on parameters for small molecules representative of nucleic acids, including alkanes, ethers, dimethylphosphate, and the nucleic acid bases and empirical adjustment of key dihedral parameters associated with the phosphodiester backbone, glycosidic linkages and sugar moiety of DNA. Our optimization strategy is based on achieving a compromise between satisfying the properties of the underlying model compounds in the gas phase targeting QM data and reproducing a number of experimental properties of DNA duplexes in the condensed phase. The resulting Drude force field yields stable DNA duplexes on the 100 ns time scale and satisfactorily reproduces (1) the equilibrium between A and B forms of DNA and (2) transitions between the BI and BII sub-states of B form DNA. Consistency with the gas phase QM data for the model compounds is significantly better for the Drude model as compared to the CHARMM36 additive force field, which is suggested to be due to the improved response of the model to changes in the environment associated with the explicit inclusion of polarizability. Analysis of dipole moments associated with the nucleic acid bases shows the Drude model to have significantly larger values than those present in CHARMM36, with the dipoles of individual bases undergoing significant variations during the MD simulations. Additionally, the dipole moment of water was observed to be perturbed in the grooves of DNA. PMID:24752978

  12. Tigers on trails: occupancy modeling for cluster sampling.

    PubMed

    Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U

    2010-07-01

    Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. 
We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
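
    The data-generating process behind the new spatial model above, spatial replicates along a trail whose local presence state follows a first-order Markov chain, can be simulated in a few lines. The parameter names below (psi, theta0, theta1, p) follow common occupancy-model notation and are an assumption about, not a transcription of, the authors' exact formulation.

```python
import numpy as np

def simulate_trail(psi, theta0, theta1, p, n_segments, rng):
    """Simulate detections on one trail divided into spatial segments.

    psi    -- probability the site is occupied at all
    theta0 -- P(locally present on a segment | absent on previous segment)
    theta1 -- P(locally present on a segment | present on previous segment)
    p      -- detection probability given occupancy and local presence
    """
    occupied = rng.random() < psi
    present = rng.random() < theta0  # initial local state, a simplifying assumption
    detections = np.zeros(n_segments, dtype=int)
    for s in range(n_segments):
        if occupied and present:
            detections[s] = int(rng.random() < p)
        present = rng.random() < (theta1 if present else theta0)
    return detections

rng = np.random.default_rng(42)
history = simulate_trail(psi=0.8, theta0=0.3, theta1=0.7, p=0.6,
                         n_segments=10, rng=rng)
```

    Fitting a standard occupancy model to data like these ignores the theta0/theta1 dependence between neighboring segments, which is the bias mechanism the abstract describes.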

  13. Genomic selection in a commercial winter wheat population.

    PubMed

    He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong

    2016-03-01

    Genomic selection models can be trained using historical data and filtering genotypes based on phenotyping intensity and reliability criterion are able to increase the prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects results in enhancement on prediction ability of genomic selection, (2) to assess prediction ability when training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5 % increase in prediction ability when shifting from additive to additive plus epistatic effects models. In addition, only a marginal loss from 0.65 to 0.50 in accuracy was observed using the data collected from 1 year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential to assist commercial wheat breeding programs employing genomic selection approaches.
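
    The additive and additive-plus-epistasis models referred to above are commonly built on a genomic relationship matrix; a frequently used construction is VanRaden's, with additive-by-additive epistasis approximated by the Hadamard square of that matrix. The sketch below assumes 0/1/2-coded genotypes and random toy data, and is a generic illustration rather than the exact pipeline of this study.

```python
import numpy as np

def additive_grm(genotypes):
    """Additive genomic relationship matrix (VanRaden-style) from a
    (lines x markers) matrix of 0/1/2 allele counts."""
    M = np.asarray(genotypes, dtype=float)
    p = M.mean(axis=0) / 2.0   # estimated allele frequencies per marker
    Z = M - 2.0 * p            # center each marker at its expectation
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(7)
M = rng.integers(0, 3, size=(10, 200))  # 10 lines, 200 markers (toy data)
G = additive_grm(M)
G_aa = G * G  # Hadamard square: a common additive-x-additive epistasis kernel
```

    In a GBLUP-style mixed model, `G` and `G_aa` would serve as the covariance kernels of the additive and epistatic random effects, respectively.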

  14. Integrating Nonadditive Genomic Relationship Matrices into the Study of Genetic Architecture of Complex Traits.

    PubMed

    Nazarian, Alireza; Gezan, Salvador A

    2016-03-01

    The study of genetic architecture of complex traits has been dramatically influenced by implementing genome-wide analytical approaches during recent years. Of particular interest are genomic prediction strategies which make use of genomic information for predicting phenotypic responses instead of detecting trait-associated loci. In this work, we present the results of a simulation study to improve our understanding of the statistical properties of estimation of genetic variance components of complex traits, and of additive, dominance, and genetic effects through best linear unbiased prediction methodology. Simulated dense marker information was used to construct genomic additive and dominance matrices, and multiple alternative pedigree- and marker-based models were compared to determine if including a dominance term into the analysis may improve the genetic analysis of complex traits. Our results showed that a model containing a pedigree- or marker-based additive relationship matrix along with a pedigree-based dominance matrix provided the best partitioning of genetic variance into its components, especially when some degree of true dominance effects was expected to exist. Also, we noted that the use of a marker-based additive relationship matrix along with a pedigree-based dominance matrix had the best performance in terms of accuracy of correlations between true and estimated additive, dominance, and genetic effects. © The American Genetic Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
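    The marker-based additive relationship matrix used in analyses like this one is conventionally constructed following VanRaden's first method: center allele counts by twice the allele frequency and scale by the summed heterozygosity. A minimal sketch with simulated genotypes (all data illustrative):

```python
import numpy as np

def additive_relationship(M):
    """Marker-based additive (genomic) relationship matrix, VanRaden method 1.
    M: (n_individuals, n_markers) array of allele counts in {0, 1, 2}."""
    p = M.mean(axis=0) / 2.0            # estimated allele frequency per marker
    Z = M - 2.0 * p                     # center by twice the allele frequency
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(6, 1000)).astype(float)  # 6 individuals, 1000 markers
G = additive_relationship(M)
print(G.shape)                # (6, 6)
print(bool(np.allclose(G, G.T)))  # relationship matrices are symmetric
```

Because frequencies are estimated from the sample itself, the columns of Z sum to zero, so each column of G sums to zero as well.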

  15. Investigating Ground Swarm Robotics Using Agent Based Simulation

    DTIC Science & Technology

    2006-12-01

    Incorporation of virtual pheromones as a shared memory map is modeled as an additional capability that is found to enhance the robustness and reliability of the swarm.

  16. On the additive and dominant variance and covariance of individuals within the genomic selection scope.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Legarra, Andres

    2013-12-01

    Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
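    The dominance genomic relationship matrix D described here is built analogously to G, but from heterozygosity-based marker codes. The sketch below follows the commonly cited parameterization (genotype codes -2q², 2pq, -2p² for 0, 1, 2 copies of the reference allele); the coding details are my reading of the standard formulation, and the data are simulated:

```python
import numpy as np

def dominance_relationship(M):
    """Dominance genomic relationship matrix, D, in the breeding-value
    parameterization. M: (n, m) allele counts in {0, 1, 2}; per-genotype
    codes are -2q^2 (0 copies), 2pq (1 copy), -2p^2 (2 copies)."""
    p = M.mean(axis=0) / 2.0
    q = 1.0 - p
    W = np.where(M == 0, -2.0 * q ** 2,
        np.where(M == 1, 2.0 * p * q, -2.0 * p ** 2))
    denom = np.sum((2.0 * p * q) ** 2)
    return W @ W.T / denom

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(5, 2000)).astype(float)  # 5 individuals, 2000 markers
D = dominance_relationship(M)
print(D.shape)  # (5, 5)
```

In a mixed-model setting, D plays the same role for dominance deviations that G plays for breeding values.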

  17. Modeling of circulating fluidized beds for post-combustion carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, A.; Shadle, L.; Miller, D.

    2011-01-01

    A compartment-based model for a circulating fluidized bed reactor has been developed based on experimental observations of riser hydrodynamics. The model uses a cluster-based approach to describe the two-phase behavior of circulating fluidized beds. Fundamental mass balance equations have been derived to describe the movement of both gas and solids through the system. Additional work is being performed to develop the correlations required to describe the hydrodynamics of the system. Initial testing of the model with experimental data shows promising results and highlights the importance of including end effects within the model.

  18. Haptics-based dynamic implicit solid modeling.

    PubMed

    Hua, Jing; Qin, Hong

    2004-01-01

    This paper systematically presents a novel, interactive solid modeling framework, Haptics-based Dynamic Implicit Solid Modeling, which is founded upon volumetric implicit functions and powerful physics-based modeling. In particular, we augment our modeling framework with a haptic mechanism in order to take advantage of additional realism associated with a 3D haptic interface. Our dynamic implicit solids are semi-algebraic sets of volumetric implicit functions and are governed by the principles of dynamics, hence responding to sculpting forces in a natural and predictable manner. In order to directly manipulate existing volumetric data sets as well as point clouds, we develop a hierarchical fitting algorithm to reconstruct and represent discrete data sets using our continuous implicit functions, which permit users to further design and edit those existing 3D models in real-time using a large variety of haptic and geometric toolkits, and visualize their interactive deformation at arbitrary resolution. The additional geometric and physical constraints afford more sophisticated control of the dynamic implicit solids. The versatility of our dynamic implicit modeling enables the user to easily modify both the geometry and the topology of modeled objects, while the inherent physical properties can offer an intuitive haptic interface for direct manipulation with force feedback.

  19. Predicting Risk of Type 2 Diabetes Mellitus with Genetic Risk Models on the Basis of Established Genome-wide Association Markers: A Systematic Review

    PubMed Central

    Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin

    2013-01-01

    This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910
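    The AUC statistic reported throughout this review has a simple probabilistic reading: it is the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (ties counting one half), which a Mann-Whitney count computes directly. A minimal sketch; the risk scores below are illustrative, not from any included study:

```python
def auc(scores_cases, scores_controls):
    """AUC as the Mann-Whitney probability that a case outscores a control
    (ties count one half)."""
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

# A toy genetic risk score: counts of risk alleles (values illustrative).
cases    = [5, 7, 6, 8, 4]
controls = [3, 5, 4, 2, 6]
print(auc(cases, controls))  # → 0.82
```

A value near 0.60, as the review finds for genetic risk models, means a case outscores a control only 60% of the time, which is why such models discriminate poorly on their own.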

  20. Models in Science Education: Applications of Models in Learning and Teaching Science

    ERIC Educational Resources Information Center

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  1. Genomic Model with Correlation Between Additive and Dominance Effects.

    PubMed

    Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres

    2018-05-09

    Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has often been suggested that the magnitudes of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent allele as the reference orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software, and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominance deviations performed similarly for prediction. Copyright © 2018, Genetics.

  2. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.

  3. Reciprocal Peer Assessment as a Learning Tool for Secondary School Students in Modeling-Based Learning

    ERIC Educational Resources Information Center

    Tsivitanidou, Olia E.; Constantinou, Costas P.; Labudde, Peter; Rönnebeck, Silke; Ropohl, Mathias

    2018-01-01

    The aim of this study was to investigate how reciprocal peer assessment in modeling-based learning can serve as a learning tool for secondary school learners in a physics course. The participants were 22 upper secondary school students from a gymnasium in Switzerland. They were asked to model additive and subtractive color mixing in groups of two,…

  4. A new statistical approach to climate change detection and attribution

    NASA Astrophysics Data System (ADS)

    Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe

    2017-01-01

    We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression-based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is only based on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90 % confidence range), with a very limited contribution from natural forcings (-0.01 ± 0.02 K).

  5. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes, and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitted model in the combined cross-bred population. This study showed substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.

  6. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease in age-structured populations. The proposed model merges two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders, for high-efficacy treatments.

  7. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  8. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    Abstract We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  9. The Contribution of Emotional Intelligence to Decisional Styles among Italian High School Students

    ERIC Educational Resources Information Center

    Di Fabio, Annamaria; Kenny, Maureen E.

    2012-01-01

    This study examined the relationship between emotional intelligence (EI) and styles of decision making. Two hundred and six Italian high school students completed two measures of EI, the Bar-On EI Inventory, based on a mixed model of EI, and the Mayer Salovey Caruso EI Test, based on an ability-based model of EI, in addition to the General…

  10. Fiber Breakage Model for Carbon Composite Stress Rupture Phenomenon: Theoretical Development and Applications

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Phoenix, S. Leigh; Grimes-Ledesma, Lorie

    2010-01-01

    Stress rupture failure of Carbon Composite Overwrapped Pressure Vessels (COPVs) is of serious concern to Science Mission and Constellation programs since there are a number of COPVs on board space vehicles with stored gases under high pressure for long durations of time. It has become customary to establish the reliability of these vessels using the so called classic models. The classical models are based on Weibull statistics fitted to observed stress rupture data. These stochastic models cannot account for any additional damage due to the complex pressure-time histories characteristic of COPVs being supplied for NASA missions. In particular, it is suspected that the effects of proof test could significantly reduce the stress rupture lifetime of COPVs. The focus of this paper is to present an analytical appraisal of a model that incorporates damage due to proof test. The model examined in the current paper is based on physical mechanisms such as micromechanics based load sharing concepts coupled with creep rupture and Weibull statistics. For example, the classic model cannot accommodate for damage due to proof testing which every flight vessel undergoes. The paper compares current model to the classic model with a number of examples. In addition, several applications of the model to current ISS and Constellation program issues are also examined.
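    The "classic" models referenced here rest on Weibull life statistics, under which the probability of surviving beyond time t has the closed form R(t) = exp(-(t/η)^β) for shape β and characteristic life η. A minimal sketch with illustrative parameters (not actual COPV data, and without the proof-test damage term this paper adds):

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that a vessel survives beyond time t under a Weibull
    life model with shape beta and characteristic life eta."""
    return math.exp(-(t / eta) ** beta)

# Illustrative numbers only:
beta, eta = 1.2, 1.0e5   # eta in hours
for t in (1.0e3, 1.0e4, 1.0e5):
    print(t, round(weibull_reliability(t, beta, eta), 4))
```

At t = η the survival probability is exp(-1) ≈ 0.368 regardless of β; the paper's point is that this stochastic form carries no mechanism by which a proof-test excursion changes the subsequent life distribution.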

  11. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    PubMed

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.

  12. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology

    PubMed Central

    Deodhar, Suruchi; Bisset, Keith R.; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V.

    2014-01-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity. PMID:25530914

  13. Model-assisted estimation of forest resources with generalized additive models

    Treesearch

    Jean D. Opsomer; F. Jay Breidt; Gretchen G. Moisen; Goran Kauermann

    2007-01-01

    Multiphase surveys are often conducted in forest inventories, with the goal of estimating forested area and tree characteristics over large regions. This article describes how design-based estimation of such quantities, based on information gathered during ground visits of sampled plots, can be made more precise by incorporating auxiliary information available from...

  14. Force Field for Peptides and Proteins based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E.M.; Huang, Jing; Shim, Jihyun; Luo, Yun; Li, Hui; Roux, Benoît; MacKerell, Alexander D.

    2013-01-01

    Presented is a polarizable force field based on a classical Drude oscillator framework, currently implemented in the programs CHARMM and NAMD, for modeling and molecular dynamics (MD) simulation studies of peptides and proteins. Building upon parameters for model compounds representative of the functional groups in proteins, the development of the force field focused on the optimization of the parameters for the polypeptide backbone and the connectivity between the backbone and side chains. Optimization of the backbone electrostatic parameters targeted quantum mechanical conformational energies, interactions with water, molecular dipole moments and polarizabilities and experimental condensed phase data for short polypeptides such as (Ala)5. Additional optimization of the backbone φ, ψ conformational preferences included adjustments of the tabulated two-dimensional spline function through the CMAP term. Validation of the model included simulations of a collection of peptides and proteins. This 1st generation polarizable model is shown to maintain the folded state of the studied systems on the 100 ns timescale in explicit solvent MD simulations. The Drude model typically yields larger RMS differences as compared to the additive CHARMM36 force field (C36) and shows additional flexibility as compared to the additive model. Comparison with NMR chemical shift data shows a small degradation of the polarizable model with respect to the additive, though the level of agreement may be considered satisfactory, while for residues shown to have significantly underestimated S2 order parameters in the additive model, improvements are calculated with the polarizable model. Analysis of dipole moments associated with the peptide backbone and tryptophan side chains show the Drude model to have significantly larger values than those present in C36, with the dipole moments of the peptide backbone enhanced to a greater extent in sheets versus helices and the dipoles of individual moieties observed to undergo significant variations during the MD simulations. Although there are still some limitations, the presented model, termed Drude-2013, is anticipated to yield a molecular picture of peptide and protein structure and function that will be of increased physical validity and internal consistency in a computationally accessible fashion. PMID:24459460

  15. Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Li, Jinquan

    2018-01-01

    The viscosities of binary Sn-based lead-free solder alloys were calculated by combining a predictive model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The results show that the viscosities of Sn-based lead-free solders predicted by the model agree well with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of binary Sn-based Pb-free solders was obtained from the model.

  16. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.
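    Single-variable cost models of this kind are typically power laws, cost ≈ a · D^b, fitted by linear least squares in log-log space. A sketch on synthetic data (the coefficients and diameters below are invented for illustration, not the paper's fitted values):

```python
import math

def fit_power_law(diameters, costs):
    """Fit cost = a * D**b by least squares on log-transformed data."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic telescopes generated from cost = 2 * D**2.5 (illustrative):
D = [1.0, 2.0, 4.0, 8.0]
C = [2.0 * d ** 2.5 for d in D]
a, b = fit_power_law(D, C)
print(round(a, 3), round(b, 3))  # → 2.0 2.5
```

With noise-free synthetic data the fit recovers the generating exponent exactly; on real cost data the residual scatter around the log-log line is what motivates adding secondary drivers such as diffraction-limited wavelength.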

  17. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  18. Ionic micelles and aromatic additives: a closer look at the molecular packing parameter.

    PubMed

    Lutz-Bueno, Viviane; Isabettini, Stéphane; Walker, Franziska; Kuster, Simon; Liebi, Marianne; Fischer, Peter

    2017-08-16

    Wormlike micellar aggregates formed from the mixture of ionic surfactants with aromatic additives result in solutions with impressive viscoelastic properties. These properties are of high interest for numerous industrial applications and are often used as model systems for soft matter physics. However, robust and simple models for tailoring the viscoelastic response of the solution based on the molecular structure of the employed additive are required to fully exploit the potential of these systems. We address this shortcoming with a modified packing parameter based model, considering the additive-surfactant pair. The role of charge neutralization on anisotropic micellar growth was investigated with derivatives of sodium salicylate. The impact of the additives on the morphology of the micellar aggregates is explained from the molecular level to the macroscopic viscoelasticity. Changes in the micelle's volume, headgroup area and additive structure are explored to redefine the packing parameter. Uncharged additives penetrated deeper into the hydrophobic region of the micelle, whilst charged additives remained trapped in the polar region, as revealed by a combination of 1H-NMR, SAXS and rheological measurements. A deeper penetration of the additives densified the hydrophobic core of the micelle and induced anisotropic growth by increasing the effective volume of the additive-surfactant pair. This phenomenon largely influenced the viscosity of the solutions. Partially penetrating additives reduced the electrostatic repulsions between surfactant headgroups and neighboring micelles. The resulting increased network density governed the elasticity of the solutions. Considering a packing parameter composed of the additive-surfactant pair proved to be a facile means of engineering the viscoelastic response of surfactant solutions. The self-assembly of the wormlike micellar aggregates could be tailored to desired morphologies resulting in a specific and predictable rheological response.

  19. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  20. Higher Education: New Models, New Rules

    ERIC Educational Resources Information Center

    Soares, Louis; Eaton, Judith S.; Smith, Burck

    2013-01-01

    The Internet enables new models. In the commercial world, for example, we have eBay, Amazon.com, and Netflix. These new models operate with a different set of rules than do traditional models. New models are emerging in higher education as well--for example, competency-based programs. In addition, courses that are being provided from outside the…

  1. Usage Intention Framework Model: A Fuzzy Logic Interpretation of the Classical Utaut Model

    ERIC Educational Resources Information Center

    Sandaire, Johnny

    2009-01-01

    A fuzzy conjoint analysis (FCA: Turksen, 1992) model for enhancing management decision in the technology adoption domain was implemented as an extension to the UTAUT model (Venkatesh, Morris, Davis, & Davis, 2003). Additionally, a UTAUT-based Usage Intention Framework Model (UIFM) introduced a closed-loop feedback system. The empirical evidence…

  2. Graphic comparison of reserve-growth models for conventional oil and accumulation

    USGS Publications Warehouse

    Klett, T.R.

    2003-01-01

    The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older and newer data. The reserve-growth model used in the 1995 USGS National Assessment and the model currently used in the NOGA project provide forecast functions that yield similar estimates of potential additions to reserves. Both models are based on the Oil and Gas Integrated Field File from the Energy Information Administration (EIA), but different vintages of data (from 1977 through 1991 and 1977 through 1996, respectively). The model based on newer data can be used in place of the previous model, providing similar estimates of potential additions to reserves. Forecast functions for oil fields vary little from those for gas fields in these models; therefore, a single function may be used for both oil and gas fields, like that used in the USGS World Petroleum Assessment 2000. Forecast functions based on the field-level reserve growth model derived from the NRG Associates databases (from 1982 through 1998) differ from those derived from EIA databases (from 1977 through 1996). However, the difference may not be enough to preclude the use of the forecast functions derived from NRG data in place of the forecast functions derived from EIA data. Should the model derived from NRG data be used, separate forecast functions for oil fields and gas fields must be employed. The forecast function for oil fields from the model derived from NRG data varies significantly from that for gas fields, and a single function for both oil and gas fields may not be appropriate.
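The growth-factor construction described above can be sketched in a few lines: annual fractional changes give per-year growth factors, and compounding them yields the field-size multipliers that make up a forecast function. All numbers below are hypothetical, not NOGA values:

```python
from itertools import accumulate
from operator import mul

# Hypothetical annual fractional changes in estimated field size (years 1, 2, ...).
annual_fractional_change = [0.10, 0.07, 0.05, 0.03, 0.02]

# Per-year growth factors, and cumulative field-size multipliers over the span.
growth_factors = [1.0 + g for g in annual_fractional_change]
multipliers = list(accumulate(growth_factors, mul))

# A field with an initial 100-unit reserve estimate would, after this 5-year
# span, be forecast at 100 * multipliers[-1] units.
print(round(100 * multipliers[-1], 1))
```

The forecast function is then just this list of multipliers indexed by the years elapsed since a field's discovery or first reserve estimate.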

  3. Improving Conceptual Understanding and Representation Skills Through Excel-Based Modeling

    NASA Astrophysics Data System (ADS)

    Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.

    2018-02-01

    The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental design study to test the effectiveness of a model-based curriculum focused on the concepts of natural selection and population ecology that makes use of Excel modeling tools (Modeling Instruction in Biology with Excel, MBI-E). The curriculum revolves around the bio-engineering practice of controlling an invasive species. The study takes place in the Midwest within ten high schools teaching a regular-level introductory biology class. A post-test was designed that targeted a number of common misconceptions in both concept areas as well as representational usage. The results of the post-test demonstrate that the MBI-E students significantly outperformed the traditional classes in both natural selection and population ecology concepts, thus overcoming a number of misconceptions. In addition, the MBI-E students made greater use of multiple representations and demonstrated greater fascination for science.

  4. ANNIE - INTERACTIVE PROCESSING OF DATA BASES FOR HYDROLOGIC MODELS.

    USGS Publications Warehouse

    Lumb, Alan M.; Kittle, John L.

    1985-01-01

    ANNIE is a data storage and retrieval system that was developed to reduce the time and effort required to calibrate, verify, and apply watershed models that continuously simulate water quantity and quality. Watershed models have three categories of input: parameters to describe segments of a drainage area, linkage of the segments, and time-series data. Additional goals for ANNIE include the development of software that is easily implemented on minicomputers and some microcomputers and software that has no special requirements for interactive display terminals. Another goal is for the user interaction to be based on the experience of the user so that ANNIE is helpful to the inexperienced user and yet efficient and brief for the experienced user. Finally, the code should be designed so that additional hydrologic models can easily be added to ANNIE.

  5. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

    In this paper, the well known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem has been presented as an unbiased and alternative approach to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, enables the classifier to tackle ordinal regression problems further. The proposed method has been validated by an experimental study by comparing it with already existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
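The boosting machinery this paper extends can be sketched with the standard SAMME weight update (the multiclass exponential loss algorithm of Zhu et al.), not the paper's cost-sensitive ELM variant: the classifier weight is alpha = log((1 - err)/err) + log(K - 1), and misclassified observations are up-weighted by exp(alpha) before renormalization.

```python
import math

def samme_round(weights, y_true, y_pred, n_classes):
    """One standard SAMME boosting round: returns the classifier weight
    alpha and the updated, renormalized observation weights.
    Assumes 0 < err < 1 - 1/K so that alpha is positive."""
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    err /= sum(weights)
    alpha = math.log((1.0 - err) / err) + math.log(n_classes - 1)
    # Up-weight misclassified observations, then renormalize to sum to 1.
    new_w = [w * math.exp(alpha) if t != p else w
             for w, t, p in zip(weights, y_true, y_pred)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

# Toy 3-class example: a weak learner that misclassifies 2 of 6 points.
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 1, 2, 0, 2, 1]
w0 = [1 / 6] * 6
alpha, w1 = samme_round(w0, y_true, y_pred, n_classes=3)
```

The paper's contribution sits on top of updates like this one: the base classifier is an ELM solved in closed form by weighted least squares, and a cost model reweights patterns according to the distance between predicted and true ordinal classes.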

  6. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species specific and assemblage based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  7. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601

  8. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumed that field-scale reaction properties in a sediment including surface area, reactive site concentration, and reaction rate can be predicted from field-scale grain-size distribution by linearly adding reaction properties estimated in laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
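The additivity assumption in the first sentence reduces to a mass-fraction-weighted sum of per-fraction laboratory values. A minimal sketch with hypothetical grain-size fractions and rate constants, not the study's measured values:

```python
# Hypothetical grain-size fractions of a composite sediment and the
# U(VI) desorption rate constants (1/h) estimated for each in the lab.
mass_fraction = {"<2 mm": 0.55, "2-8 mm": 0.35, ">8 mm": 0.10}
rate_constant = {"<2 mm": 0.042, "2-8 mm": 0.015, ">8 mm": 0.004}

# Additivity model: the field-scale property is the linear, mass-weighted
# sum of the properties of the individual grain-size fractions.
field_rate = sum(mass_fraction[f] * rate_constant[f] for f in mass_fraction)
print(field_rate)
```

The study's point is that this linear composition works well for the desorption rate itself but not for the rate constants directly, which motivated their approximate additivity model.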

  9. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    PubMed Central

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
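The joint additive-plus-dominance coding the method relies on can be illustrated with the standard genotype encodings (allele dosage for the additive term, a heterozygote indicator for dominance); in a group-LASSO fit, each SNP's pair of covariates is then penalized as one group. A sketch of the encoding only, not the authors' LLARRMA-dawg code:

```python
def encode_snp(genotypes):
    """Encode 0/1/2 minor-allele counts as (additive, dominance) covariate
    pairs: additive = allele dosage, dominance = 1 for heterozygotes.
    In a group-LASSO fit, each SNP's pair forms one penalized group, so the
    SNP enters or leaves the model with both covariates together."""
    return [(g, 1 if g == 1 else 0) for g in genotypes]

pairs = encode_snp([0, 1, 2, 1, 0])
```

Under this coding, a purely additive signal loads only on the first covariate of the pair, while dominance shifts the heterozygote away from the midpoint of the two homozygotes and loads on the second.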

  10. Applying Additive Hazards Models for Analyzing Survival in Patients with Colorectal Cancer in Fars Province, Southern Iran

    PubMed

    Madadizadeh, Farzan; Ghanbarnejad, Amin; Ghavami, Vahid; Zare Bandamiri, Mohammad; Mohammadianpanah, Mohammad

    2017-04-01

    Introduction: Colorectal cancer (CRC) is a commonly fatal cancer that ranks third worldwide and third and fifth among Iranian women and men, respectively. There are several methods for analyzing time-to-event data. Additive hazards regression models take priority over the popular Cox proportional hazards model if the absolute hazard (risk) change rather than the hazard ratio is of primary concern, or if no proportionality assumption is made. Methods: This study used data gathered from medical records of 561 colorectal cancer patients who were admitted to Namazi Hospital, Shiraz, Iran, during 2005 to 2010 and followed until December 2015. The nonparametric Aalen additive hazards model, the semiparametric Lin and Ying additive hazards model and the Cox proportional hazards model were applied for data analysis. The proportionality assumption for the Cox model was evaluated with a test based on the Schoenfeld residuals, and goodness of fit of the additive models was assessed with Cox-Snell residual plots. Analyses were performed with SAS 9.2 and R 3.2 software. Results: The median follow-up time was 49 months. The five-year survival rate and the mean survival time after cancer diagnosis were 59.6% and 68.1±1.4 months, respectively. Multivariate analyses using the Lin and Ying additive model and the Cox proportional model indicated that age at diagnosis, site of tumor, stage, proportion of positive lymph nodes, lymphovascular invasion and type of treatment were factors affecting survival of the CRC patients. Conclusion: Additive models are suitable alternatives to the Cox proportional hazards model if there is interest in evaluating absolute hazard change, or if no proportionality assumption is made.

  11. An overview of TOUGH-based geomechanics models

    DOE PAGES

    Rutqvist, Jonny

    2016-09-22

    After the initial development of the first TOUGH-based geomechanics model 15 years ago, based on linking the TOUGH2 multiphase flow simulator to the FLAC3D geomechanics simulator, at least 15 additional TOUGH-based geomechanics models have appeared in the literature. This development has been fueled by a growing demand and interest for modeling coupled multiphase flow and geomechanical processes related to a number of geoengineering applications, such as geologic CO2 sequestration, enhanced geothermal systems, unconventional hydrocarbon production, and most recently, reservoir stimulation and injection-induced seismicity. This paper provides a short overview of these TOUGH-based geomechanics models, focusing on some of those most frequently applied to a diverse set of problems associated with geomechanics and its couplings to hydraulic, thermal and chemical processes.

  12. The Effectiveness of Learning Model of Basic Education with Character-Based at Universitas Muslim Indonesia

    ERIC Educational Resources Information Center

    Rosmiati, Rosmiati; Mahmud, Alimuddin; Talib, Syamsul B.

    2016-01-01

    The purpose of this study was to determine the effectiveness of the basic education learning model with character-based through learning in the Universitas Muslim Indonesia. In addition, the research specifically examines the character of discipline, curiosity and responsibility. The specific target is to produce a basic education learning model…

  13. Mathematical modeling of a radio-frequency path for IEEE 802.11ah based wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Tyshchenko, Igor; Cherepanov, Alexander; Dmitrii, Vakhnin; Popova, Mariia

    2017-09-01

    This article discusses the process of creating a mathematical model of a radio-frequency path for IEEE 802.11ah based wireless sensor networks using MATLAB Simulink CAD tools. In addition, it describes the perturbing effects that occur and the determination of the presence of a useful signal in the received mixture.

  14. Determinants of perceived sleep quality in normal sleepers.

    PubMed

    Goelema, M S; Regis, M; Haakma, R; van den Heuvel, E R; Markopoulos, P; Overeem, S

    2017-09-20

    This study aimed to establish the determinants of perceived sleep quality over a longer period of time, taking into account the separate contributions of actigraphy-based sleep measures and self-reported sleep indices. Fifty participants (52 ± 6.6 years; 27 females) completed two consecutive weeks of home monitoring, during which they kept a sleep-wake diary while their sleep was monitored using a wrist-worn actigraph. The diary included questions on perceived sleep quality, sleep-wake information, and additional factors such as well-being and stress. The data were analyzed using multilevel models to compare a model that included only actigraphy-based sleep measures (model Acti) to a model that included only self-reported sleep measures to explain perceived sleep quality (model Self). In addition, a model based on the self-reported sleep measures and extended with nonsleep-related factors was analyzed to find the most significant determinants of perceived sleep quality (model Extended). Self-reported sleep measures (model Self) explained 61% of the total variance, while actigraphy-based sleep measures (model Acti) only accounted for 41% of the perceived sleep quality. The main predictors in the self-reported model were number of awakenings during the night, sleep onset latency, and wake time after sleep onset. In the extended model, the number of awakenings during the night and total sleep time of the previous night were the strongest determinants of perceived sleep quality, with 64% of the variance explained. In our cohort, perceived sleep quality was mainly determined by self-reported sleep measures and less by actigraphy-based sleep indices. These data further stress the importance of taking multiple nights into account when trying to understand perceived sleep quality.

  15. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
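The 10-fold cross-validation used to assess predictive performance can be sketched as a simple index partition; this is a generic sketch, not the study's code (in practice the indices would also be shuffled before partitioning):

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous, near-equal folds.
    Each fold serves once as the held-out test set while the model is
    fit to the remaining k-1 folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

# 23 hypothetical inventory plots split into 10 folds.
folds = k_fold_indices(23, 10)
```

Per-fold prediction errors on the held-out plots are then averaged, which is what exposed the over-fitting the authors describe: models that fit better in-sample but predicted held-out plots worse.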

  16. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  17. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739

  18. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS - aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case-studies. In case study 1 aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was of similar level as before the offset occurred (2.11 a.u.). In case-study 2 a calibration combining different variance sources was conducted. The achieved performance was of sufficient level with an absolute error being better than 0.8% of the mean concentration, therefore being able to compensate negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Life prediction modeling based on cyclic damage accumulation

    NASA Technical Reports Server (NTRS)

    Nelson, Richard S.

    1988-01-01

    A high temperature, low cycle fatigue life prediction method was developed. This method, Cyclic Damage Accumulation (CDA), was developed for use in predicting the crack initiation lifetime of gas turbine engine materials, where initiation was defined as a 0.030 inch surface length crack. A principal engineering feature of the CDA method is the minimum data base required for implementation. Model constants can be evaluated through a few simple specimen tests such as monotonic loading and rapid cycle fatigue. The method was expanded to account for the effects on creep-fatigue life of complex loadings such as thermomechanical fatigue, hold periods, waveshapes, mean stresses, multiaxiality, cumulative damage, coatings, and environmental attack. A significant data base was generated on the behavior of the cast nickel-base superalloy B1900+Hf, including hundreds of specimen tests under such loading conditions. This information is being used to refine and extend the CDA life prediction model, which is now nearing completion. The model is also being verified using additional specimen tests on wrought INCO 718, and the final version of the model is expected to be adaptable to most any high-temperature alloy. The model is currently available in the form of equations and related constants. A proposed contract addition will make the model available in the near future in the form of a computer code to potential users.

  20. Classification techniques on computerized systems to predict and/or to detect Apnea: A systematic review.

    PubMed

    Pombo, Nuno; Garcia, Nuno; Bousson, Kouamana

    2017-03-01

    Sleep apnea syndrome (SAS), which can significantly decrease the quality of life, is associated with major health risk factors such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. Thus, it is relevant and timely to present a systematic review describing significant applications in the framework of computational intelligence-based SAS, including its performance, beneficial and challenging effects, and modeling for decision-making on multiple scenarios. This study aims to systematically review the literature on systems for the detection and/or prediction of apnea events using a classification model. Forty-five included studies revealed a combination of classification techniques for the diagnosis of apnea, such as threshold-based (14.75%) and machine learning (ML) models (85.25%). The ML models, clustered in a mind map, include neural networks (44.26%), regression (4.91%), instance-based (11.47%), Bayesian algorithms (1.63%), reinforcement learning (4.91%), dimensionality reduction (8.19%), ensemble learning (6.55%), and decision trees (3.27%). A classification model should be auto-adaptive and free of dependency on external human action. In addition, the accuracy of classification models is related to effective feature selection. New high-quality studies based on randomized controlled trials and validation of models using large and multiple samples of data are recommended. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  1. Econometrics of exhaustible resource supply: a theory and an application. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epple, D.; Hansen, L.P.

    1981-12-01

    An econometric model of US oil and natural gas discoveries is developed in this study. The econometric model is explicitly derived as the solution to the problem of maximizing the expected discounted after tax present value of revenues net of exploration, development, and production costs. The model contains equations representing producers' formation of price expectations and separate equations giving producers' optimal exploration decisions contingent on expected prices. A procedure is developed for imposing resource base constraints (e.g., ultimate recovery estimates based on geological analysis) when estimating the econometric model. The model is estimated using aggregate post-war data for the United States. Production from a given addition to proved reserves is assumed to follow a negative exponential path, and additions of proved reserves from a given discovery are assumed to follow a negative exponential path. Annual discoveries of oil and natural gas are estimated as latent variables. These latent variables are the endogenous variables in the econometric model of oil and natural gas discoveries. The model is estimated without resource base constraints. The model is also estimated imposing the mean oil and natural gas ultimate recovery estimates of the US Geological Survey. Simulations through the year 2020 are reported for various future price regimes.
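
    The negative exponential production path assumed above has a simple numerical form: production from a reserve addition R0 declining at rate a is q(t) = a·R0·exp(−a·t), whose integral over all time recovers R0. The decline rate below is an assumed illustrative value, not one estimated in the study.

```python
import numpy as np

# Production from a reserve addition R0 declining at rate a:
#   q(t) = a * R0 * exp(-a * t),  so total production integrates to R0.
a, R0 = 0.1, 100.0              # hypothetical decline rate (1/yr) and reserves
t = np.linspace(0.0, 200.0, 200_001)
q = a * R0 * np.exp(-a * t)

# Trapezoid-rule cumulative production approaches R0 as t grows large.
cumulative = ((q[:-1] + q[1:]) / 2 * np.diff(t)).sum()
```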

  2. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    NASA Technical Reports Server (NTRS)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. 
With the developed models, numerical simulations yielded predictions of diameter and height of droplets, along with width and height of extruded lines cured at varied temperatures. Experimental results carried out on a goniometric platform and a nozzle-based 3D printer showed agreement with the numerical simulations. Finally, this presentation will show how the models are adaptable to the planning of tool paths and designs in additive manufacturing.
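
    The record does not give the fitted model forms. A common autocatalytic form used for thermoset cure kinetics is the Kamal model, sketched below with an explicit Euler integration; the rate constants and exponents are hypothetical, not values fitted to Ecoflex 0050 or G/Flex.

```python
# Kamal autocatalytic cure model (a common choice for thermoset kinetics;
# constants here are assumed for illustration):
#   d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n
k1, k2, m, n = 1e-3, 5e-2, 1.0, 2.0   # hypothetical rate constants/exponents
alpha, dt = 0.0, 0.1                  # degree of cure, time step (s)
history = []
for step in range(60_000):            # 100 minutes of isothermal cure
    dalpha = (k1 + k2 * alpha**m) * (1.0 - alpha)**n
    alpha = min(1.0, alpha + dalpha * dt)
    history.append(alpha)
```

    A cure-dependent viscosity model (chemorheology) would then take the degree of cure `alpha` as its input, which is how the DSC and rheology measurements are combined in the work described above.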

  3. Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame

    NASA Astrophysics Data System (ADS)

    Lee, Myoungkyu; Oliver, Todd; Moser, Robert

    2017-11-01

    Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.

  4. Reconstruction and in silico analysis of an Actinoplanes sp. SE50/110 genome-scale metabolic model for acarbose production

    PubMed Central

    Wang, Yali; Xu, Nan; Ye, Chao; Liu, Liming; Shi, Zhongping; Wu, Jing

    2015-01-01

    Actinoplanes sp. SE50/110 produces the α-glucosidase inhibitor acarbose, which is used to treat type 2 diabetes mellitus. To obtain a comprehensive understanding of its cellular metabolism, a genome-scale metabolic model of strain SE50/110, iYLW1028, was reconstructed on the basis of the genome annotation, biochemical databases, and extensive literature mining. Model iYLW1028 comprises 1028 genes, 1128 metabolites, and 1219 reactions. One hundred and twenty-two and eighty-one genes were essential for cell growth on acarbose-synthesis and sucrose media, respectively, and the acarbose biosynthetic pathway in SE50/110 was expounded completely. Based on model predictions, the addition of arginine and histidine to the media increased acarbose production by 78 and 59%, respectively. Additionally, model predictions indicate that dissolved oxygen has a great effect on acarbose production. Furthermore, genes to be overexpressed for the overproduction of acarbose were identified, and the deletion of treY eliminated the formation of by-product component C. Model iYLW1028 is a useful platform for optimization and systems metabolic engineering of acarbose production in Actinoplanes sp. SE50/110. PMID:26161077

  5. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take up to days, weeks, and even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or additional data modifications, only compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation point, without having to filter data into a synthesized grid.
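
    The speed of the vertical-line-element approach comes from the closed-form vertical attraction of a line mass, a standard result of potential theory. The sketch below (densities and geometry invented for illustration) evaluates the closed form and cross-checks it against direct numerical integration of the integrand G·λ·z/(s²+z²)^(3/2):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_vertical_line(lam, s, z1, z2):
    """Vertical gravity (m/s^2) at the observation point from a vertical line
    element of linear density lam (kg/m) at horizontal offset s (m), spanning
    depths z1..z2 (m) below the observation point:
        g_z = G * lam * (1/sqrt(s^2+z1^2) - 1/sqrt(s^2+z2^2))
    """
    return G * lam * (1.0 / math.hypot(s, z1) - 1.0 / math.hypot(s, z2))

def gz_numeric(lam, s, z1, z2, n=100_000):
    """Midpoint-rule integration of G*lam*z/(s^2+z^2)^(3/2) for comparison."""
    dz = (z2 - z1) / n
    total = 0.0
    for i in range(n):
        z = z1 + (i + 0.5) * dz
        total += G * lam * z / (s * s + z * z) ** 1.5 * dz
    return total

closed = gz_numeric_check = gz_vertical_line(1000.0, 50.0, 10.0, 200.0)
numeric = gz_numeric(1000.0, 50.0, 10.0, 200.0)
```

    Because each element is a single closed-form evaluation per observation point, modifying one part of the terrain model only requires recomputing that element's contribution, which is the modularity claimed above.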

  6. Robust model predictive control for constrained continuous-time nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
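
    The tube idea itself is compact: apply the nominal (MPC-planned) input to a disturbance-free copy of the model, and add error feedback u = v + K(x − z) so the true state stays in a tube around the nominal trajectory. The sketch below illustrates this on a discrete double integrator with an assumed stabilizing gain K; it is a caricature of the mechanism, not the paper's continuous-time nonlinear design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete double integrator x+ = A x + B u + w, with bounded disturbance w.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-10.0, -5.0]])    # assumed stabilizing ancillary feedback gain

x = np.array([1.0, 0.0])         # actual state
z = np.array([1.0, 0.0])         # nominal (disturbance-free) state
errs = []
for _ in range(200):
    v = K @ z                    # nominal control (stand-in for the MPC input)
    u = v + K @ (x - z)          # tube controller: nominal + error feedback
    w = rng.uniform(-0.01, 0.01, size=2)     # bounded additive disturbance
    x = A @ x + (B @ u).ravel() + w
    z = A @ z + (B @ v).ravel()
    errs.append(float(np.linalg.norm(x - z)))
```

    The error x − z obeys e+ = (A + BK)e + w, so with (A + BK) stable and w bounded, the tube radius stays bounded while the nominal state converges to zero.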

  7. Dynamical response of multi-patch, flux-based models to the input of infected people: Epidemic response to initiated events

    NASA Astrophysics Data System (ADS)

    Rho, Young-Ah; Liebovitch, Larry S.; Schwartz, Ira B.

    2008-07-01

    The time course of an epidemic can be modeled using the differential equations that describe the spread of disease and by dividing people into “patches” of different sizes with the migration of people between these patches. We used these multi-patch, flux-based models to determine how the time course of infected and susceptible populations depends on the disease parameters, the geometry of the migrations between the patches, and the addition of infected people into a patch. We found that there are significantly longer-lived transients and additional “ancillary” epidemics when the reproductive rate R is closer to 1, as would be typical of SARS (Severe Acute Respiratory Syndrome) and bird flu, than when R is closer to 10, as would be typical of measles. In addition we show, both analytically and numerically, how the time delay between the injection of infected people into a patch and the corresponding initial epidemic that it produces depends on R.
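
    A minimal version of such a model is a two-patch SIR system with a migration flux coupling the patches, with infected people injected into patch 1 only. The parameter values below are illustrative, not the paper's; the point of the sketch is the delayed epidemic in the second patch.

```python
import numpy as np

# Two-patch SIR with symmetric migration flux between patches (Euler steps).
# beta, gamma, and the migration rate are illustrative values only.
beta, gamma, mig = 0.3, 0.1, 0.01    # R = beta/gamma = 3
S = np.array([0.999, 1.0])           # susceptible fractions, patches 1 and 2
I = np.array([0.001, 0.0])           # infected people injected into patch 1
dt, steps = 0.1, 5000
peak_times = [0.0, 0.0]
peak_vals = [0.0, 0.0]
for k in range(steps):
    dS = -beta * S * I + mig * (S[::-1] - S)       # infection + migration flux
    dI = beta * S * I - gamma * I + mig * (I[::-1] - I)
    S, I = S + dt * dS, I + dt * dI
    for p in range(2):
        if I[p] > peak_vals[p]:
            peak_vals[p], peak_times[p] = I[p], k * dt
```

    Patch 2's epidemic is seeded only by the migration flux from patch 1, so its peak lags patch 1's; shrinking R toward 1 stretches this delay, which is the dependence on R described above.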

  8. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
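
    The kappa statistic mentioned above is chance-corrected agreement, κ = (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance from the marginals. A minimal implementation for a 2×2 presence/absence confusion matrix (counts invented for illustration):

```python
# Cohen's kappa for a presence/absence confusion matrix [[TP, FN], [FP, TN]].

def cohens_kappa(confusion):
    (tp, fn), (fp, tn) = confusion
    n = tp + fn + fp + tn
    p_o = (tp + tn) / n                          # observed agreement
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)    # chance "presence" agreement
    p_no = ((fp + tn) / n) * ((fn + tn) / n)     # chance "absence" agreement
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa([[40, 10], [10, 40]])   # hypothetical counts -> 0.6
```

    The controversy referenced above stems from kappa's dependence on prevalence through p_e: the same map accuracy yields different kappa values for rare versus common species.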

  9. Java Web Simulation (JWS); a web based database of kinetic models.

    PubMed

    Snoep, J L; Olivier, B G

    2002-01-01

    Software to make a database of kinetic models accessible via the internet has been developed and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated, the addition of new models is straightforward and people are invited to submit their models to be included in the database.

  10. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    NASA Astrophysics Data System (ADS)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models were based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model, and the hyperbolic-sine function based model. Although the hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of a window function did not provide an improved fitting. Multiplying Yakopcic’s state variable model with Chang’s model, on the other hand, resulted in closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
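
    A generic hyperbolic-sine memristor model of the kind discussed can be sketched as a sinh I-V relation with an internal state variable interpolating between low- and high-conductance branches. All parameters below are hypothetical, not the fitted TiO2 values; the sketch only shows the model structure.

```python
import math

# Generic hyperbolic-sine memristor sketch in the spirit of Chang's model:
#   I(t) = [(1 - x) * a1 + x * a2] * sinh(b * V(t))
#   dx/dt = lam * sinh(c * V(t)),  with state x clipped to [0, 1]
a1, a2, b = 1e-6, 1e-4, 2.0    # hypothetical conductance and nonlinearity terms
lam, c = 50.0, 2.0             # hypothetical state-dynamics parameters
x, dt = 0.0, 1e-4
currents = []
for k in range(20_000):                      # two periods of a 1 Hz drive
    V = math.sin(2 * math.pi * k * dt)
    I = ((1 - x) * a1 + x * a2) * math.sinh(b * V)
    x = min(1.0, max(0.0, x + lam * math.sinh(c * V) * dt))
    currents.append(I)
```

    Because I is proportional to sinh(b·V), the current is exactly zero whenever V is zero, reproducing the pinched hysteresis loop that is the memristor's fingerprint.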

  11. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the author attempts to analyse and abstract temporal information in the phase of conceptual modelling according to the concrete requirement to history information. Then a temporal data model named BTEER is presented. BTEER not only retains all designing ideas and methods of EER which makes BTEER have good upward compatibility, but also supports the modelling of valid time and transaction time effectively at the same time. In addition, BTEER can be transformed to EER easily and automatically. It proves in practice, this method can model the temporal information well.

  12. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    PubMed

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; van Eijkeren, Jan Cornelis Henri

    2006-06-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from 10 min to 8 h. In the AEGL setting for methylene chloride, specific additional topics had to be addressed. This included a change of relevant toxicity endpoint within the 10-min to 8-h exposure time range from central nervous system depression caused by the parent compound to formation of carboxyhemoglobin (COHb) via biotransformation to carbon monoxide. Additionally, the biotransformation of methylene chloride includes both a saturable step as well as genetic polymorphism of the glutathione transferase involved. Physiologically based pharmacokinetic modeling was considered to be the appropriate tool to address all these topics in an adequate way. Two available PBPK models were combined and extended with additional algorithms for the estimation of the maximum COHb levels. The model was validated and verified with data obtained from volunteer studies. It was concluded that all the mentioned topics could be adequately accounted for by the PBPK model. The AEGL values as calculated with the model were substantiated by experimental data with volunteers and are concluded to be practically applicable.
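
    The saturable biotransformation step mentioned above is the key nonlinearity: elimination that is approximately first-order at low concentrations becomes capacity-limited (zero-order) at high ones. A one-compartment Michaelis-Menten sketch with purely hypothetical constants (not the methylene chloride PBPK parameters):

```python
# Michaelis-Menten (saturable) elimination in one compartment:
#   dC/dt = -Vmax * C / (Km + C)
Vmax, Km = 2.0, 1.0        # hypothetical capacity (mg/L/h) and affinity (mg/L)
dt = 0.001

def concentration_after(c0, hours):
    c = c0
    for _ in range(int(hours / dt)):   # explicit Euler integration
        c -= Vmax * c / (Km + c) * dt
        c = max(c, 0.0)
    return c

low = concentration_after(0.5, 1.0)    # well below Km: ~first-order decay
high = concentration_after(50.0, 1.0)  # far above Km: ~zero-order decay
```

    At high concentration roughly a fixed amount (≈Vmax per hour) is cleared, while at low concentration a fixed fraction is cleared; this is why internal dose does not scale linearly with exposure and why a PBPK model is needed for the AEGL derivation.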

  13. An integrative top-down and bottom-up qualitative model construction framework for exploration of biochemical systems.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    Computational modelling of biochemical systems based on top-down and bottom-up approaches has been well studied over the last decade. In this research, after illustrating how to generate atomic components by a set of given reactants and two user pre-defined component patterns, we propose an integrative top-down and bottom-up modelling approach for stepwise qualitative exploration of interactions among reactants in biochemical systems. Evolution strategy is applied to the top-down modelling approach to compose models, and simulated annealing is employed in the bottom-up modelling approach to explore potential interactions based on models constructed from the top-down modelling process. Both the top-down and bottom-up approaches support stepwise modular addition or subtraction for the model evolution. Experimental results indicate that our modelling approach can learn the relationships among biochemical reactants qualitatively. In addition, hidden reactants of the target biochemical system can be obtained by generating complex reactants in corresponding composed models. Moreover, qualitatively learned models with inferred reactants and alternative topologies can be used for further web-lab experimental investigations by biologists of interest, which may result in a better understanding of the system.
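
    The bottom-up search strategy named above, simulated annealing, accepts worsening moves with a probability that shrinks as the temperature cools. A minimal generic sketch (minimizing a toy 1-D cost rather than scoring candidate model topologies, which is what the paper's search actually explores):

```python
import math, random

random.seed(0)

def anneal(cost, x0, steps=5000, t0=1.0):
    """Minimal simulated annealing over a 1-D continuous search space."""
    x, best = x0, x0
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9        # linear cooling schedule
        cand = x + random.gauss(0, 0.5)           # random local move
        delta = cost(cand) - cost(x)
        # Accept improvements always; accept worsening moves with
        # probability exp(-delta/temp), which vanishes as temp -> 0.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

best = anneal(lambda v: (v - 3.0) ** 2, x0=-10.0)
```

    In the paper's setting the "move" would add or subtract a modular interaction and the "cost" would score a candidate model's fit to the observed qualitative behaviour.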

  14. Firm performance model in small and medium enterprises (SMEs) based on learning orientation and innovation

    NASA Astrophysics Data System (ADS)

    Lestari, E. R.; Ardianti, F. L.; Rachmawati, L.

    2018-03-01

    This study investigated the relationship between learning orientation, innovation, and firm performance. A conceptual model and hypotheses were empirically examined using structural equation modelling. The study involved a questionnaire-based survey of owners of small and medium enterprises (SMEs) operating in Batu City, Indonesia. The results showed that both learning orientation and innovation have a positive effect on firm performance. Additionally, learning orientation has a positive effect on innovation. This study has implications for SMEs aiming to increase their firm performance based on learning orientation and innovation capability.

  15. A physiologically based toxicokinetic model for methylmercury in female American kestrels

    USGS Publications Warehouse

    Nichols, J.W.; Bennett, R.S.; Rossmann, R.; French, J.B.; Sappington, K.G.

    2010-01-01

    A physiologically based toxicokinetic (PBTK) model was developed to describe the uptake, distribution, and elimination of methylmercury (CH3Hg) in female American kestrels. The model consists of six tissue compartments corresponding to the brain, liver, kidney, gut, red blood cells, and remaining carcass. Additional compartments describe the elimination of CH3Hg to eggs and growing feathers. Dietary uptake of CH3Hg was modeled as a diffusion-limited process, and the distribution of CH3Hg among compartments was assumed to be mediated by the flow of blood plasma. To the extent possible, model parameters were developed using information from American kestrels. Additional parameters were based on measured values for closely related species and allometric relationships for birds. The model was calibrated using data from dietary dosing studies with American kestrels. Good agreement between model simulations and measured CH3Hg concentrations in blood and tissues during the loading phase of these studies was obtained by fitting model parameters that control dietary uptake of CH3Hg and possible hepatic demethylation. Modeled results tended to underestimate the observed effect of egg production on circulating levels of CH3Hg. In general, however, simulations were consistent with observed patterns of CH3Hg uptake and elimination in birds, including the dominant role of feather molt. This model could be used to extrapolate CH3Hg kinetics from American kestrels to other bird species by appropriate reassignment of parameter values. Alternatively, when combined with a bioenergetics-based description, the model could be used to simulate CH3Hg kinetics in a long-term environmental exposure. © 2010 SETAC.

  16. Defensive Swarm: An Agent Based Modeling Analysis

    DTIC Science & Technology

    2017-12-01

    ...scalability are therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see...changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations testing the model under different circumstances

  17. A Reduced Form Model for Ozone Based on Two Decades of CMAQ Simulations for the Continental United States

    EPA Science Inventory

    A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the con...

  18. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  19. Genomic selection of purebred animals for crossbred performance in the presence of dominant gene action

    PubMed Central

    2013-01-01

    Background Genomic selection is an appealing method to select purebreds for crossbred performance. In the case of crossbred records, single nucleotide polymorphism (SNP) effects can be estimated using an additive model or a breed-specific allele model. In most studies, additive gene action is assumed. However, dominance is the likely genetic basis of heterosis. Advantages of incorporating dominance in genomic selection were investigated in a two-way crossbreeding program for a trait with different magnitudes of dominance. Training was carried out only once in the simulation. Results When the dominance variance and heterosis were large and overdominance was present, a dominance model including both additive and dominance SNP effects gave substantially greater cumulative response to selection than the additive model. Extra response was the result of an increase in heterosis but at a cost of reduced purebred performance. When the dominance variance and heterosis were realistic but with overdominance, the advantage of the dominance model decreased but was still significant. When overdominance was absent, the dominance model was slightly favored over the additive model, but the difference in response between the models increased as the number of quantitative trait loci increased. This reveals the importance of exploiting dominance even in the absence of overdominance. When there was no dominance, response to selection for the dominance model was as high as for the additive model, indicating robustness of the dominance model. The breed-specific allele model was inferior to the dominance model in all cases and to the additive model except when the dominance variance and heterosis were large and with overdominance. However, the advantage of the dominance model over the breed-specific allele model may decrease as differences in linkage disequilibrium between the breeds increase. 
Retraining is expected to reduce the advantage of the dominance model over the alternatives, because in general, the advantage becomes important only after five or six generations post-training. Conclusion Under dominance and without retraining, genomic selection based on the dominance model is superior to the additive model and the breed-specific allele model to maximize crossbred performance through purebred selection. PMID:23621868
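
    The additive vs. dominance coding contrast described above can be made concrete with a toy simulation (not the paper's simulation): genotypes coded as 0/1/2 allele counts carry the additive effects, while a heterozygote indicator carries the dominance effects, and a model including both terms fits the trait better whenever dominance variance is present.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy genomic prediction setup: 500 individuals, 100 SNP loci.
n, p = 500, 100
geno = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele counts
het = (geno == 1).astype(float)                        # heterozygote indicator
a = rng.normal(0, 1, p)                                # additive SNP effects
d = rng.normal(0, 1, p)                                # dominance SNP effects
y = geno @ a + het @ d + rng.normal(0, 1, n)           # trait with noise

def fit_r2(X, y):
    """Training R^2 of an ordinary least-squares fit (stand-in for BLUP)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_additive = fit_r2(geno, y)                          # additive model
r2_dominance = fit_r2(np.hstack([geno, het]), y)       # additive + dominance
```

    Ordinary least squares stands in here for the genomic BLUP machinery of the paper; the nesting of the additive model inside the dominance model is what guarantees the fit comparison goes the way the abstract describes when dominance effects exist.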

  20. Uncertainty Modeling for Structural Control Analysis and Synthesis

    NASA Technical Reports Server (NTRS)

    Campbell, Mark E.; Crawley, Edward F.

    1996-01-01

    The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.

  1. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
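
    The functional distinction reviewed above can be caricatured in a few lines: a cached (model-free) value changes only through repeated prediction-error updates, whereas a model-based value is recomputed from the outcome's current worth and so tracks reward devaluation immediately. The numbers are toy values, not from the paper.

```python
# Caricature of the model-free vs. model-based distinction (toy values).

# A single action leads deterministically to a food reward.
reward_value = {"food": 1.0}

# Model-free: cached value learned by repeated reward-prediction-error
# updates (Rescorla-Wagner / TD-style, learning rate alpha).
v_cached, alpha = 0.0, 0.1
for _ in range(100):
    v_cached += alpha * (reward_value["food"] - v_cached)

# Model-based: value computed on demand from the known outcome and its
# *current* value.
def v_model_based():
    return reward_value["food"]

before = (v_cached, v_model_based())
reward_value["food"] = 0.0      # devaluation (e.g., sensory-specific satiety)
after = (v_cached, v_model_based())
```

    After devaluation the model-based value drops to zero at once while the cached value is unchanged until further experience, which is the behavioural signature used to dissociate the two systems in OFC and VS studies.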

  2. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    This draft report of Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications. Topics covered include: the types of data required for use of PBPK models in risk assessment; evaluation of PBPK models for use in risk assessment; and the application of these models to address uncertainties resulting from extrapolations (e.g. interspecies extrapolation) often used in risk assessment. In addition, appendices are provided that include a compilation of chemical partition coefficients and rate constants, algorithms for estimating chemical-specific parameters, and a list of publications relating to PBPK modeling. This report is primarily meant to serve as a learning tool for EPA scientists and risk assessors who may be less familiar with the field. In addition, this report can be informative to PBPK modelers within and outside the Agency, as it provides an assessment of the types of data and models that the EPA requires for consideration of a model for use in risk assessment.

  3. Metal-Polycyclic Aromatic Hydrocarbon Mixture Toxicity in Hyalella azteca. 1. Response Surfaces and Isoboles To Measure Non-additive Mixture Toxicity and Ecological Risk.

    PubMed

    Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G

    2015-10-06

    Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
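
    The concentration-addition baseline against which non-additivity is judged can be expressed in toxic units (TU): each component's concentration is scaled by its own effect concentration, and the mixture is predicted to reach that effect level when the TUs sum to 1. The concentrations and EC50 values below are hypothetical, not measurements from this study.

```python
# Concentration-addition prediction via toxic units (TU).
# A mixture is predicted to sit at the reference effect level (e.g., LC50)
# when the summed TUs reach 1. All numbers below are hypothetical.

def toxic_units(concs, ec50s):
    return sum(c / e for c, e in zip(concs, ec50s))

# Hypothetical metal-PAH pair (e.g., a Cu-phenanthrene mixture):
tu = toxic_units(concs=[5.0, 40.0], ec50s=[20.0, 100.0])  # 0.25 + 0.40 = 0.65
at_effect_level = tu >= 1.0
```

    More-than-additive toxicity, as reported above for Cu-PHE and Cu-PHQ, means observed mortality exceeds what this TU calculation predicts, which is why guideline concentrations derived component-by-component can under-protect against mixtures.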

  4. Reranking candidate gene models with cross-species comparison for improved gene prediction

    PubMed Central

    Liu, Qian; Crammer, Koby; Pereira, Fernando CN; Roos, David S

    2008-01-01

    Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models. PMID:18854050

  5. The Maudsley Model of Family-Based Treatment for Anorexia Nervosa: A Qualitative Evaluation of Parent-to-Parent Consultation

    ERIC Educational Resources Information Center

    Rhodes, Paul; Brown, Jac; Madden, Sloane

    2009-01-01

    This article describes the qualitative analysis of a randomized control trial that explores the use of parent-to-parent consultations as an augmentation to the Maudsley model of family-based treatment for anorexia. Twenty families were randomized into two groups, 10 receiving standard treatment and 10 receiving an additional parent-to-parent…

  6. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
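
    The two benchmark models contrasted in this record have a simple closed form. Below is a minimal Python sketch (not the authors' code) of the concentration addition (CA) and independent action (IA) predictions for a mixture, assuming two-parameter log-logistic concentration-response curves; the EC50 and slope values in the usage are illustrative, not taken from the study.

    ```python
    def loglogistic(c, ec50, slope):
        """Two-parameter log-logistic concentration-response (fraction affected)."""
        if c <= 0:
            return 0.0
        return 1.0 / (1.0 + (ec50 / c) ** slope)

    def ecx(x, ec50, slope):
        """Inverse curve: concentration producing fractional effect x (0 < x < 1)."""
        return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

    def independent_action(concs, params):
        """IA: combined effect assuming dissimilar modes of action."""
        unaffected = 1.0
        for c, (ec50, slope) in zip(concs, params):
            unaffected *= 1.0 - loglogistic(c, ec50, slope)
        return 1.0 - unaffected

    def concentration_addition(concs, params, tol=1e-9):
        """CA: solve sum_i c_i / ECx_i = 1 for the mixture effect x by bisection."""
        def toxic_units(x):
            return sum(c / ecx(x, ec50, slope)
                       for c, (ec50, slope) in zip(concs, params))
        lo, hi = 1e-12, 1.0 - 1e-12  # toxic_units decreases monotonically in x
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if toxic_units(mid) > 1.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    For example, two chemicals dosed at half their respective EC50s give a CA-predicted effect of exactly 50%, while IA predicts less; comparing such predictions against observed mortality is the basis of the antagonism/synergism analysis described above.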

  7. Polarimetric subspace target detector for SAR data based on the Huynen dihedral model

    NASA Astrophysics Data System (ADS)

    Larson, Victor J.; Novak, Leslie M.

    1995-06-01

    Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.

  8. The model of encryption algorithm based on non-positional polynomial notations and constructed on an SP-network

    NASA Astrophysics Data System (ADS)

    Kapalova, N.; Haumen, A.

    2018-05-01

    This paper addresses the structures and properties of a cryptographic information protection algorithm model based on NPNs and constructed on an SP-network. The main task of the research is to increase the cryptographic strength of the algorithm. In the paper, the transformation resulting in the improvement of the cryptographic strength of the algorithm is described in detail. The proposed model is based on an SP-network; such networks were chosen for this model because of the transformation properties they provide. In the encryption process, transformations based on S-boxes and P-boxes are used. It is known that these transformations can withstand cryptanalysis. In addition, the proposed model uses transformations that satisfy the requirements of the "avalanche effect". As a result of this work, a computer program that implements an encryption algorithm model based on the SP-network has been developed.

  9. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and the regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions that can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, allowing a VTEC product of high spatial and spectral resolution. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach, based on a global and a regional model part, i.e. the consideration of a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in latitude direction and trigonometric B-splines in longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for both directions. The spectral resolution of both model parts is defined by the number of B-spline basis functions introduced for the longitude and latitude directions, related to appropriate coordinate systems. Furthermore, the TLVM has to be developed under the postulation that the global model part will be computed continuously in near real-time (NRT) and routinely predicted into the future by an algorithm based on deterministic and statistical forecast models. Thus, the additional regional densification model part, which will also be computed in NRT, but possibly only for a specified time duration, must be estimated independently from the global one. For that purpose a data separation procedure has to be developed in order to estimate the unknown series coefficients of both model parts independently. This procedure must also consider additional technique-dependent unknowns such as the Differential Code Biases (DCBs) within GNSS and intersystem biases. In this contribution we will present the concept to set up the TLVM, including the data combination and the Kalman filtering procedure; first numerical results will be presented.

  10. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    PubMed Central

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  11. [Modeling in value-based medicine].

    PubMed

    Neubauer, A S; Hirneiss, C; Kampik, A

    2010-03-01

    Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of a model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as for disease risk models used to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
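
    Of the model types listed, the Markov model is the easiest to make concrete. The following Python sketch is purely illustrative (the states, transition probabilities, costs, and utilities are invented, not taken from any cited study): it propagates a cohort through a three-state model and accumulates discounted costs and quality-adjusted life years (QALYs), the quantities that feed a cost-effectiveness analysis.

    ```python
    # Hypothetical three-state Markov cohort model (one cycle = one year).
    states = ["well", "sick", "dead"]
    trans = [
        [0.90, 0.08, 0.02],  # from "well"
        [0.00, 0.85, 0.15],  # from "sick"
        [0.00, 0.00, 1.00],  # "dead" is absorbing
    ]
    cost = {"well": 100.0, "sick": 2000.0, "dead": 0.0}       # cost per cycle
    utility = {"well": 0.95, "sick": 0.60, "dead": 0.0}        # QALY weight

    def run_cohort(trans, cycles=20, discount=0.03):
        """Propagate the cohort and return (discounted cost, discounted QALYs)."""
        dist = [1.0, 0.0, 0.0]  # everyone starts in "well"
        total_cost = total_qaly = 0.0
        for t in range(cycles):
            d = 1.0 / (1.0 + discount) ** t  # discount factor for cycle t
            total_cost += d * sum(p * cost[s] for p, s in zip(dist, states))
            total_qaly += d * sum(p * utility[s] for p, s in zip(dist, states))
            dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
        return total_cost, total_qaly
    ```

    Running the same cohort under two treatment strategies (two transition matrices or cost sets) and dividing the cost difference by the QALY difference yields the incremental cost-effectiveness ratio; varying the inputs one at a time produces the tornado plots mentioned above.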

  12. Adaptive Testing without IRT.

    ERIC Educational Resources Information Center

    Yan, Duanli; Lewis, Charles; Stocking, Martha

    It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…

  13. Quantifying uncertainty and sensitivity in sea ice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.

  14. Service-Learning in a Capstone Modeling Course

    ERIC Educational Resources Information Center

    Berkove, Ethan

    2013-01-01

    A capstone course is often synthetic, bringing together many components of a student's educational background. For this reason, a project-based course in mathematical modeling makes a great capstone, as modeling problems often require a broad collection of mathematical tools for their solution. The addition of a service-learning component can…

  15. MATLAB-Based Teaching Modules in Biochemical Engineering

    ERIC Educational Resources Information Center

    Lee, Kilho; Comolli, Noelle K.; Kelly, William J.; Huang, Zuyi

    2015-01-01

    Mathematical models play an important role in biochemical engineering. For example, the models developed in the field of systems biology have been used to identify drug targets to treat pathogens such as Pseudomonas aeruginosa in biofilms. In addition, competitive binding models for chromatography processes have been developed to predict expanded…

  16. Modeling the growth and branching of plants: A simple rod-based model

    NASA Astrophysics Data System (ADS)

    Faruk Senan, Nur Adila; O'Reilly, Oliver M.; Tresierras, Timothy N.

    A rod-based model for plant growth and branching is developed in this paper. Specifically, Euler's theory of the elastica is modified to accommodate growth and remodeling. In addition, branching is characterized using a configuration force and evolution equations are postulated for the flexural stiffness and intrinsic curvature. The theory is illustrated with examples of multiple static equilibria of a branched plant and the remodeling and tip growth of a plant stem under gravitational loading.

  17. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    To simulate the hydrological processes in semi-arid areas properly is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that the parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is the infiltration excess surface runoff, a non-linear component.

  18. A New Multi-Criteria Evaluation Model Based on the Combination of Non-Additive Fuzzy AHP, Choquet Integral and Sugeno λ-Measure

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.

    2017-09-01

    This paper proposes a new model for multi-criteria evaluation under uncertain conditions. In this model we consider the interaction between criteria, one of the most challenging issues, especially in the presence of uncertainty. In this case the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of the criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria and uses the Sugeno λ-measure to determine the importance of criteria and their interaction. Then, integrating the Choquet integral and FLPRAHP, all the interactions between criteria are taken into account with the least number of comparisons, and a final score for each alternative is determined. In this way a comprehensive set of interactions between criteria is modeled, which leads to more reliable results. An illustrative example presents the effectiveness and capability of the proposed model to evaluate different alternatives in a multi-criteria decision problem.

  19. Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV

    NASA Astrophysics Data System (ADS)

    Savin, Dmitry; Kosov, Mikhail

    2017-09-01

    We are developing the CHIPS-TPT physics library for exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum and quantum numbers in each reaction. Inclusive modeling reproduces only selected values while averaging over the others and imposes no such constraints. The exclusive approach therefore makes it possible to simulate additional quantities, such as secondary-particle correlations and gamma-line broadening, and to avoid artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the available data in this energy region are mostly presented in ENDF-6 format and are semi-inclusive. Imposing additional constraints on secondary particles complicates modeling, but it also allows inconsistencies in the input data to be detected and helps avoid errors that may remain unnoticed in inclusive modeling.

  20. Online Statistical Modeling (Regression Analysis) for Independent Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Statistical models have been developed in various directions to handle various types of data and complex relationships among them. A rich variety of advanced and recent statistical models is available in open source software (one of them being R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and shiny) so that the most recent and advanced statistical models are readily available, accessible and applicable on the web. We have previously built an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare across models in order to find the most appropriate one for the data.
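
    As an illustration of the computer-intensive methods this record mentions, the percentile bootstrap can be sketched in a few lines. The sketch below is in Python rather than R (the record's platform) purely for consistency with the other examples in this listing, and the data and statistic are arbitrary placeholders.

    ```python
    import random

    def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for stat(data).

        Resamples the data with replacement n_boot times and takes the
        alpha/2 and 1 - alpha/2 quantiles of the resampled statistics.
        """
        rng = random.Random(seed)  # fixed seed only for reproducibility
        n = len(data)
        reps = sorted(stat([rng.choice(data) for _ in range(n)])
                      for _ in range(n_boot))
        lo = reps[int((alpha / 2) * n_boot)]
        hi = reps[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    def mean(xs):
        return sum(xs) / len(xs)
    ```

    In an interactive web front end of the kind described, the same resampling loop runs server-side and only the interval (and perhaps a histogram of the replicates) is returned to the browser.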

  1. Modeling Self-Healing of Concrete Using Hybrid Genetic Algorithm–Artificial Neural Network

    PubMed Central

    Ramadan Suleiman, Ahmed; Nehdi, Moncef L.

    2017-01-01

    This paper presents an approach to predicting the intrinsic self-healing in concrete using a hybrid genetic algorithm–artificial neural network (GA–ANN). A genetic algorithm was implemented in the network as a stochastic optimizing tool for the initial optimal weights and biases. This approach can assist the network in achieving a global optimum and avoid the possibility of the network getting trapped at local optima. The proposed model was trained and validated using an especially built database using various experimental studies retrieved from the open literature. The model inputs include the cement content, water-to-cement ratio (w/c), type and dosage of supplementary cementitious materials, bio-healing materials, and both expansive and crystalline additives. Self-healing indicated by means of crack width is the model output. The results showed that the proposed GA–ANN model is capable of capturing the complex effects of various self-healing agents (e.g., biochemical material, silica-based additive, expansive and crystalline components) on the self-healing performance in cement-based materials. PMID:28772495
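
    The hybrid scheme described here, a genetic algorithm searching for good initial weights before conventional training, can be illustrated on a toy problem. The sketch below is entirely illustrative and is not the authors' implementation: a two-parameter linear model stands in for the ANN, gradient descent stands in for backpropagation, and the GA operators (truncation selection, arithmetic crossover, Gaussian mutation) are minimal.

    ```python
    import random

    def mse(w, b, data):
        """Mean squared error of the linear model y = w*x + b."""
        return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

    def gradient_descent(w, b, data, steps=200, lr=0.1):
        """Conventional training stage (stands in for backpropagation)."""
        for _ in range(steps):
            gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
            gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
            w, b = w - lr * gw, b - lr * gb
        return w, b

    def ga_initial_weights(data, pop_size=20, gens=15, seed=0):
        """GA stage: evolve (w, b) pairs toward low error before training."""
        rng = random.Random(seed)
        pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda p: mse(p[0], p[1], data))
            parents = pop[: pop_size // 2]          # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                (w1, b1), (w2, b2) = rng.sample(parents, 2)
                # arithmetic crossover plus Gaussian mutation
                children.append((0.5 * (w1 + w2) + rng.gauss(0, 0.2),
                                 0.5 * (b1 + b2) + rng.gauss(0, 0.2)))
            pop = parents + children
        return min(pop, key=lambda p: mse(p[0], p[1], data))
    ```

    Seeding `gradient_descent` with the pair returned by `ga_initial_weights(data)` rather than a random start reduces the chance of converging to a poor local optimum, which is the rationale the abstract gives; for a real ANN the chromosome would hold the full weight and bias vectors of the network.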

  2. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456

  3. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants

    PubMed Central

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-01-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier–Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. PMID:24664988

  4. Modeling Self-Healing of Concrete Using Hybrid Genetic Algorithm-Artificial Neural Network.

    PubMed

    Ramadan Suleiman, Ahmed; Nehdi, Moncef L

    2017-02-07

    This paper presents an approach to predicting the intrinsic self-healing in concrete using a hybrid genetic algorithm-artificial neural network (GA-ANN). A genetic algorithm was implemented in the network as a stochastic optimizing tool for the initial optimal weights and biases. This approach can assist the network in achieving a global optimum and avoid the possibility of the network getting trapped at local optima. The proposed model was trained and validated using an especially built database using various experimental studies retrieved from the open literature. The model inputs include the cement content, water-to-cement ratio (w/c), type and dosage of supplementary cementitious materials, bio-healing materials, and both expansive and crystalline additives. Self-healing indicated by means of crack width is the model output. The results showed that the proposed GA-ANN model is capable of capturing the complex effects of various self-healing agents (e.g., biochemical material, silica-based additive, expansive and crystalline components) on the self-healing performance in cement-based materials.

  5. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

    An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub- and supersonic wall jets is presented. The fundamental database to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, which extends parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.

  6. A novel small animal model to study the replication of simian foamy virus in vivo.

    PubMed

    Blochmann, Rico; Curths, Christoph; Coulibaly, Cheick; Cichutek, Klaus; Kurth, Reinhard; Norley, Stephen; Bannert, Norbert; Fiebig, Uwe

    2014-01-05

    Preclinical evaluation in a small animal model would help the development of gene therapies and vaccines based on foamy virus vectors. The establishment of persistent, non-pathogenic infection with the prototype foamy virus in mice and rabbits has been described previously. To extend this spectrum of available animal models, hamsters were inoculated with infectious cell supernatant or bioballistically with a foamy virus plasmid. In addition, a novel foamy virus from a rhesus macaque was isolated and characterised genetically. Hamsters and mice were infected with this new SFVmac isolate to evaluate whether hamsters are also susceptible to infection. Both hamsters and mice developed humoral responses to either virus subtype. Virus integration and replication in different animal tissues were analysed by PCR and co-cultivation. The results strongly indicate establishment of a persistent infection in hamsters. These studies provide a further small animal model for studying FV-based vectors in addition to the established models. © 2013 Elsevier Inc. All rights reserved.

  7. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an iconic-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.

  8. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    DOEpatents

    Richardson, John G [Idaho Falls, ID

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  9. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
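
The additive/multiplicative distinction above can be illustrated with a small synthetic sketch (all numbers invented): when the true error structure is multiplicative, additive-model residuals show non-constant variance across the intensity range, while residuals of a multiplicative model fitted in log space do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" daily precipitation (mm), highly skewed like real data
truth = rng.gamma(shape=0.5, scale=8.0, size=5000)

# Measurements with a multiplicative error structure: y = a * x^b * eps
a, b = 1.2, 0.9
eps = rng.lognormal(mean=0.0, sigma=0.3, size=truth.size)
measured = a * truth**b * eps

mask = truth > 0.1  # avoid taking logs of near-zero values

# Additive model y = x + e: residual spread grows with precipitation intensity
resid_add = measured[mask] - truth[mask]

# Multiplicative model fitted in log space: log y = log a + b log x + log eps
A = np.vstack([np.ones(mask.sum()), np.log(truth[mask])]).T
coef, *_ = np.linalg.lstsq(A, np.log(measured[mask]), rcond=None)
resid_mult = np.log(measured[mask]) - A @ coef

# Compare residual spread in the lower vs. upper half of the intensity range
med = np.median(truth[mask])
lo_a, hi_a = resid_add[truth[mask] < med].std(), resid_add[truth[mask] >= med].std()
lo_m, hi_m = resid_mult[truth[mask] < med].std(), resid_mult[truth[mask] >= med].std()
print(f"additive residual std: low={lo_a:.2f}, high={hi_a:.2f}")
print(f"multiplicative residual std: low={lo_m:.2f}, high={hi_m:.2f}")
```

The additive residual spread differs strongly between the two halves (the "non-constant variance" weakness named above), while the log-space residual spread is nearly equal, and the fitted slope recovers the exponent b.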

  10. Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    The paper describes the application concept of additive manufactured dies to support the robot-based incremental sheet metal forming process ('Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of the adhesion to the sheet surface and the general stability as well as the influence on the geometric accuracy compared to the well-known forming strategies.

  11. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to which extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics; (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty; and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representations were achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints, in combination with the transfer-function-based regularization approach of mHM, can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  12. Examining the Utility of Topic Models for Linguistic Analysis of Couple Therapy

    ERIC Educational Resources Information Center

    Doeden, Michelle A.

    2012-01-01

    This study examined the basic utility of topic models, a computational linguistics model for text-based data, to the investigation of the process of couple therapy. Linguistic analysis offers an additional lens through which to examine clinical data, and the topic model is presented as a novel methodology within couple and family psychology that…

  13. Analysis of Parametric Adaptive Signal Detection with Applications to Radars and Hyperspectral Imaging

    DTIC Science & Technology

    2010-02-01

    ...associated with the proposed parametric model. Several important issues are discussed, including model order selection, training screening, and time... parameters associated with the NS-AR model. In addition, we develop model order selection, training screening, and time-series based whitening and...

  14. Feeding modes in stream salmonid population models: Is drift feeding the whole story?

    Treesearch

    Bret Harvey; Steve Railsback

    2014-01-01

    Drift-feeding models are essential components of broader models that link stream habitat to salmonid populations and community dynamics. But is an additional feeding mode needed for understanding and predicting salmonid population responses to streamflow and other environmental factors? We addressed this question by applying two versions of the individual-based model...

  15. The Bridges SOI Model School Program at Palo Verde School, Palo Verde, Arizona.

    ERIC Educational Resources Information Center

    Stock, William A.; DiSalvo, Pamela M.

    The Bridges SOI Model School Program is an educational service based upon the SOI (Structure of Intellect) Model School curriculum. For the middle seven months of the academic year, all students in the program complete brief daily exercises that develop specific cognitive skills delineated in the SOI model. Additionally, intensive individual…

  16. The Spatially-Distributed Agroecosystem-Watershed (AgES-W) Hydrologic/Water Quality (H/WQ) model for assessment of conservation effects

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...

  17. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
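
A minimal sketch of the underlying idea (not Wu's exact estimator: iid rather than correlated errors, no restrictions, and an illustrative ridge parameter): differencing over the ordered nonparametric covariate removes the smooth component of the partial linear model, and ridge regression is then applied to the differenced data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
t = np.sort(rng.uniform(0, 1, n))        # ordered nonparametric covariate
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.5])
f = np.sin(2 * np.pi * t)                # smooth nonparametric component f(t)
y = X @ beta + f + rng.normal(scale=0.3, size=n)

# First-order differencing: f(t_{i+1}) - f(t_i) is negligible when the t_i
# are ordered and dense, so the smooth component drops out almost entirely.
Dy = np.diff(y)
DX = np.diff(X, axis=0)

# Ridge estimate on the differenced data: (DX'DX + kI)^{-1} DX'Dy
k = 1.0  # ridge parameter (illustrative choice, not data-driven)
beta_hat = np.linalg.solve(DX.T @ DX + k * np.eye(p), DX.T @ Dy)
print(beta_hat)
```

The recovered coefficients are close to the true [2.0, -1.0, 0.5] even though f(t) was never modeled; the paper's contribution is the extension of this scheme to restricted parameter spaces with correlated errors.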

  18. Accounting for Incomplete Species Detection in Fish Community Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta

    2013-01-01

    Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and that the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
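
A minimal single-species occupancy sketch (simulated data with constant occupancy psi and detection p, rather than the covariate models used in the study; SciPy is assumed for the optimizer) shows how the likelihood distinguishes sites that are occupied but never detected from truly empty sites:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sites, K = 300, 4           # survey sites and repeat sampling occasions
psi_true, p_true = 0.6, 0.4   # occupancy and per-occasion detection probability

occupied = rng.random(n_sites) < psi_true
y = rng.binomial(K, p_true, n_sites) * occupied   # detection counts per site

def nll(params):
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))  # logit -> (0, 1)
    det = y > 0
    # Sites with >= 1 detection are certainly occupied: Binomial(K, p) history
    ll = np.sum(np.log(psi) + y[det] * np.log(p) + (K - y[det]) * np.log(1.0 - p))
    # Sites never detected: occupied but missed on all K occasions, or truly empty
    ll += np.sum(~det) * np.log(psi * (1.0 - p) ** K + (1.0 - psi))
    return -ll

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"estimated psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```

The maximum-likelihood estimates recover the simulated psi and p; a naive estimate (fraction of sites with any detection) would understate occupancy, which is exactly the incomplete-detection bias the abstract's community models correct for.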

  19. Review series: Examples of chronic care model: the home-based chronic care model: redesigning home health for high quality care delivery.

    PubMed

    Suter, Paula; Hennessey, Beth; Florez, Donna; Newton Suter, W

    2011-01-01

    Individuals with chronic obstructive pulmonary disease (COPD) face significant challenges due to frequent distressing dyspnea and deficits related to activities of daily living. Individuals with COPD are often hospitalized frequently for disease exacerbations, negatively impacting quality of life and healthcare expenditure burden. The home-based chronic care model (HBCCM) was designed to address the needs of patients with chronic diseases. This model facilitates the re-design of chronic care delivery within the home health sector by ensuring patient-centered evidence-based care. The foundation of the HBCCM is Dr. Edward Wagner's chronic care model, augmented with four additional areas of focus: high-touch delivery, theory-based self-management, specialist oversight, and the use of technology. This article will describe this model in detail and outline how model use for patients with COPD can bring value to stakeholders across the health care continuum.

  20. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  1. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  2. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models is presented to demonstrate the explicatory power and depth of the model-based perspective. In particular, Toulmin's framework for the structural analysis of arguments is contrasted with the approach presented here. It is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence is analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.

  3. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents. The finite thickness elements are combined into loops which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.

  4. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  5. Properties of inductive reasoning.

    PubMed

    Heit, E

    2000-12-01

    This paper reviews the main psychological phenomena of inductive reasoning, covering 25 years of experimental and model-based research, in particular addressing four questions. First, what makes a case or event generalizable to other cases? Second, what makes a set of cases generalizable? Third, what makes a property or predicate projectable? Fourth, how do psychological models of induction address these results? The key results in inductive reasoning are outlined, and several recent models, including a new Bayesian account, are evaluated with respect to these results. In addition, future directions for experimental and model-based work are proposed.

  6. Mg I as a probe of the solar chromosphere - The atomic model

    NASA Technical Reports Server (NTRS)

    Mauas, Pablo J.; Avrett, Eugene H.; Loeser, Rudolf

    1988-01-01

    This paper presents a complete atomic model for Mg I line synthesis, where all the atomic parameters are based on recent experimental and theoretical data. It is shown how the computed profiles at 4571 A and 5173 A are influenced by the choice of these parameters and the number of levels included in the model atom. In addition, observed profiles of the 5173 A b2 line and theoretical profiles for comparison (based on a recent atmospheric model for the average quiet sun) are presented.

  7. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where the data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
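
The presence/absence comparison described above can be sketched on synthetic stand-in data (assuming scikit-learn is available; logistic regression stands in for the GLM, and the GAM is omitted since scikit-learn does not provide one). The ensemble simply averages the predicted probabilities, as in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for presence/absence records with environmental covariates
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GLM (logistic)": LogisticRegression(max_iter=1000),
    "Boosted trees": GradientBoostingClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
probs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    probs[name] = model.predict_proba(X_te)[:, 1]
    print(f"{name}: test AUC = {roc_auc_score(y_te, probs[name]):.3f}")

# Ensemble: average the predicted presence probabilities across model types
ensemble = np.mean(list(probs.values()), axis=0)
print(f"Ensemble: test AUC = {roc_auc_score(y_te, ensemble):.3f}")
```

Evaluating AUC on held-out data, as here, is what exposes the train-to-test drop-off the abstract reports for the abundance models.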

  8. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  9. Streamline three-dimensional thermal model of a lithium titanate pouch cell battery in extreme temperature conditions with module simulation

    NASA Astrophysics Data System (ADS)

    Jaguemont, Joris; Omar, Noshin; Martel, François; Van den Bossche, Peter; Van Mierlo, Joeri

    2017-11-01

    In this paper, the development of a three-dimensional (3D) thermal model of a lithium titanium oxide (LTO) pouch cell is presented, first to better understand its thermal behavior in electrified-vehicle applications and also to provide a solid modeling basis for future thermal management systems. Current 3D thermal models are based on electrochemical reactions, which require elaborate meshing and long computation times; a fast electro-thermal model that can capture voltage, current, and temperature distribution variations during the whole process has been lacking. The proposed thermal model is a reduced-effort temperature simulation approach coupling a 0D electrical model with a 3D thermal model, thereby excluding electrochemical processes. The thermal model is based on heat-transfer theory, and its temperature distribution prediction incorporates internal conduction and heat generation effects as well as convection. In addition, experimental tests were conducted to validate the model. Results show that both the heat dissipation rate and surface temperature uniformity data are in agreement with simulation results, which satisfies the application requirements for electrified vehicles. Additionally, an LTO battery pack is sized and modeled, and it displays non-uniformity across the cells under driving operation. Ultimately, the model will serve as a basis for the future development of a thermal strategy for LTO cells that operate over a large temperature range, which is a strong contribution to the existing body of scientific literature.
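
The electro-thermal coupling idea can be reduced further still to a single-node energy balance, sketched below with invented parameter values (not the paper's LTO cell data): irreversible joule heating from the electrical side drives the thermal state, balanced by convective cooling.

```python
# Minimal lumped (0D) cell thermal model, forward-Euler time stepping.
# All parameter values are illustrative placeholders, not measured LTO data.
m_cp = 900.0      # thermal mass m*cp [J/K]
R_int = 0.002     # internal resistance [ohm]
hA = 1.5          # convection coefficient * surface area [W/K]
I = 100.0         # constant discharge current [A]
T, T_amb = 25.0, 25.0   # cell and ambient temperature [degC]

dt = 1.0
for _ in range(3600):             # one hour of simulated time
    q_gen = I ** 2 * R_int        # irreversible joule heat [W]
    q_out = hA * (T - T_amb)      # convective heat loss [W]
    T += dt * (q_gen - q_out) / m_cp

print(f"cell temperature after 1 h: {T:.1f} degC")
```

The cell settles toward the steady state T_amb + I^2 * R_int / hA (here 25 + 20/1.5 = 38.3 degC); the paper's 3D model distributes this same balance spatially across the pouch.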

  10. Influence of different factors on the destruction of films based on polylactic acid and oxidized polyethylene

    NASA Astrophysics Data System (ADS)

    Podzorova, M. V.; Tertyshnaya, Yu. V.; Pantyukhov, P. V.; Shibryaeva, L. S.; Popov, A. A.; Nikolaeva, S.

    2016-11-01

    The influence of different environmental factors on the degradation of film samples based on polylactic acid and low-density polyethylene with the addition of oxidized polyethylene was studied in this work. Different methods were used to find the relationship between degradation and ultraviolet radiation, moisture, and oxygen. It was found that the addition of oxidized polyethylene, used as a model of recycled polyethylene, promotes the degradation of the blends.

  11. A simplified approach to quasi-linear viscoelastic modeling

    PubMed Central

    Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.

    2007-01-01

    The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” was used to calibrate a new “adaptive QLV model” and several models from literature, and the force data from the “ramp” was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254

  12. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for the WFSless AO system are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a particular control algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. Following a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects.
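
One widely used model-free WFSless method is stochastic parallel gradient descent (SPGD): perturb all control channels simultaneously and correlate the resulting metric change with the perturbation. In the toy sketch below a quadratic function stands in for a real image-sharpness metric, and all gains and amplitudes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the performance metric: a quadratic with optimum at u_opt.
# In a real WFSless AO loop the metric comes from the camera, not a formula.
n = 12                                  # number of corrector control channels
u_opt = rng.uniform(-1.0, 1.0, n)
def metric(u):
    return -np.sum((u - u_opt) ** 2)    # larger is better

u = np.zeros(n)
gain, amp = 0.5, 0.05                   # SPGD gain and perturbation amplitude
for _ in range(1000):
    delta = amp * rng.choice([-1.0, 1.0], n)     # bipolar random perturbation
    dJ = metric(u + delta) - metric(u - delta)   # two-sided metric difference
    u += gain * dJ * delta              # ascend along the estimated gradient

print("residual control error:", np.linalg.norm(u - u_opt))
```

Each iteration needs only metric evaluations, never a wavefront measurement, which is exactly the appeal of the model-free category described above.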

  13. Linking ecophysiological modelling with quantitative genetics to support marker-assisted crop design for improved yields of rice (Oryza sativa) under drought stress

    PubMed Central

    Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C.

    2014-01-01

    Background and Aims Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Methods Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. Key Results To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait ‘total crop nitrogen uptake’ (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10–36 % more yield than those based on markers for yield per se. Conclusions This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. 
The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. PMID:24984712

  14. Individualized Additional Instruction for Calculus

    ERIC Educational Resources Information Center

    Takata, Ken

    2010-01-01

    College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the…

  15. Modelling the standing timber volume of Baden-Württemberg-A large-scale approach using a fusion of Landsat, airborne LiDAR and National Forest Inventory data

    NASA Astrophysics Data System (ADS)

    Maack, Joachim; Lingenfelder, Marcus; Weinacker, Holger; Koch, Barbara

    2016-07-01

    Remote sensing-based timber volume estimation is key for modelling the regional potential, accessibility and price of lignocellulosic raw material for an emerging bioeconomy. We used a unique wall-to-wall airborne LiDAR dataset and Landsat 7 satellite images in combination with terrestrial inventory data derived from the National Forest Inventory (NFI), and applied generalized additive models (GAM) to estimate spatially explicit timber distribution and volume in forested areas. Since the NFI data showed an underlying structure regarding size and ownership, we additionally constructed a socio-economic predictor to enhance the accuracy of the analysis. Furthermore, we balanced the training dataset with a bootstrap method to achieve unbiased regression weights for interpolating timber volume. Finally, we compared and discussed the model performance of the original approach (r2 = 0.56, NRMSE = 9.65%), the approach with balanced training data (r2 = 0.69, NRMSE = 12.43%) and the final approach with balanced training data and the additional socio-economic predictor (r2 = 0.72, NRMSE = 12.17%). The results demonstrate the usefulness of remote sensing techniques for mapping timber volume for a future lignocellulose-based bioeconomy.
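The additive-model idea behind the GAMs used in this record can be illustrated with a minimal backfitting sketch. This is purely illustrative: the data are hypothetical, and a crude bin-average smoother stands in for the penalised splines a real GAM package (and this study) would use.

```python
def bin_smooth(x, r, n_bins=5):
    """Crude bin-average smoother: returns the mean residual in each x's bin."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    idx = [min(int((xi - lo) / width), n_bins - 1) for xi in x]
    means = []
    for b in range(n_bins):
        vals = [ri for ri, i in zip(r, idx) if i == b]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    return [means[i] for i in idx]

def fit_additive(X, y, n_iter=20):
    """Backfitting for an additive model y = alpha + f1(x1) + f2(x2) + ...

    Each f_j is estimated by smoothing the partial residuals against
    predictor j, cycling until the components stabilise.
    """
    n, p = len(y), len(X[0])
    alpha = sum(y) / n
    f = [[0.0] * n for _ in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            partial = [y[i] - alpha
                       - sum(f[k][i] for k in range(p) if k != j)
                       for i in range(n)]
            fj = bin_smooth([row[j] for row in X], partial)
            mean_fj = sum(fj) / n
            f[j] = [v - mean_fj for v in fj]   # centre for identifiability
    return [alpha + sum(f[j][i] for j in range(p)) for i in range(n)]

# Hypothetical demo: a response driven additively by the first predictor
X = [(i / 20, (i * 7 % 20) / 20) for i in range(20)]
y = [1.0 if row[0] > 0.5 else 0.0 for row in X]
fitted = fit_additive(X, y)
```

Because each smoothing pass is a least-squares projection, the residual sum of squares never increases across backfitting updates.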

  16. Effective Simulation Strategy of Multiscale Flows using a Lattice Boltzmann model with a Stretched Lattice

    NASA Astrophysics Data System (ADS)

    Yahia, Eman; Premnath, Kannan

    2017-11-01

    Effectively resolving multiscale flow physics (e.g. for boundary layer or mixing layer flows) generally requires the use of different grid resolutions in different coordinate directions. Here, we present a new formulation of a multiple relaxation time (MRT) lattice Boltzmann (LB) model for anisotropic meshes. It is based on a simpler and more stable non-orthogonal moment basis, while the use of MRT introduces additional flexibility, and the model maintains a stream-collide procedure; its second-order moment equilibria are augmented with additional velocity gradient terms, dependent on the grid aspect ratio, that fully restore the required isotropy of the transport coefficients of the normal and shear stresses. Furthermore, by introducing additional cubic velocity corrections, it maintains Galilean invariance. The consistency of this stretched-lattice LB scheme with the Navier-Stokes equations is shown via a Chapman-Enskog expansion. Numerical studies of a variety of benchmark flow problems demonstrate its ability to perform accurate and effective simulations at relatively high Reynolds numbers. The MRT-LB scheme is also shown to be more stable than prior LB models for rectangular grids, even for grid aspect ratios as small as 0.1 and for Reynolds numbers of 10000.

  17. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
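The Wiener-based gain rule shared by the compared algorithms can be sketched as follows. The per-bin values and the gain floor below are illustrative, not taken from the study; the three algorithms differ only in how the speech and noise powers S and N are estimated (minimum statistics vs. speech or auditory models).

```python
def wiener_gain(speech_psd, noise_psd, floor=0.1):
    """Wiener-style spectral gain per frequency bin: G = S / (S + N).

    speech_psd / noise_psd are per-bin power estimates; `floor` is a
    minimum gain commonly applied to limit musical-noise artefacts.
    """
    gains = []
    for s, n in zip(speech_psd, noise_psd):
        g = s / (s + n) if (s + n) > 0 else 1.0
        gains.append(max(g, floor))
    return gains

# Apply the gain to a noisy spectrum (per-bin magnitudes, hypothetical)
noisy_mag = [1.0, 0.8, 0.3, 0.05]
gains = wiener_gain([0.9, 0.5, 0.1, 0.0], [0.1, 0.3, 0.2, 0.05])
enhanced = [m * g for m, g in zip(noisy_mag, gains)]
```

Bins dominated by estimated speech power are passed nearly unchanged, while noise-dominated bins are attenuated down to the floor.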

  18. Ultimate strength performance of tankers associated with industry corrosion addition practices

    NASA Astrophysics Data System (ADS)

    Kim, Do Kyun; Kim, Han Byul; Zhang, Xiaoming; Li, Chen Guang; Paik, Jeom Kee

    2014-09-01

    In ship and offshore structure design, age-related problems such as corrosion damage, local denting, and fatigue damage are important factors to be considered in building a reliable structure, as they have a significant influence on the residual structural capacity. In shipping, corrosion addition methods are widely adopted in structural design to prevent structural capacity degradation. The present study focuses on the historical trend of corrosion addition rules for ship structural design and investigates their effects on ultimate strength performance, such as that of the hull girder and stiffened panels of double-hull oil tankers. Three types of rule-based corrosion addition models, namely historic corrosion rules (pre-CSR), Common Structural Rules (CSR), and harmonised Common Structural Rules (CSRH), are considered and compared with two other corrosion models, namely the UGS model, suggested by the Union of Greek Shipowners (UGS), and the Time-Dependent Corrosion Wastage Model (TDCWM). To identify the general trend in the effects of corrosion damage on ultimate longitudinal strength performance, the corrosion addition rules are applied to four representative sizes of double-hull oil tankers, namely Panamax, Aframax, Suezmax, and VLCC. The results are helpful in understanding the trend of corrosion additions for tanker structures.

  19. Sensor-Based Optimization Model for Air Quality Improvement in Home IoT

    PubMed Central

    Kim, Jonghyuk

    2018-01-01

    We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market. PMID:29570684

  20. Sensor-Based Optimization Model for Air Quality Improvement in Home IoT.

    PubMed

    Kim, Jonghyuk; Hwangbo, Hyunwoo

    2018-03-23

    We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market.

  1. Modeling of Micro Deval abrasion loss based on some rock properties

    NASA Astrophysics Data System (ADS)

    Capik, Mehmet; Yilmaz, Ali Osman

    2017-10-01

    Aggregate is one of the most widely used construction materials. The quality of an aggregate is determined using various testing methods; among these, the Micro Deval Abrasion Loss (MDAL) test is commonly used to determine the quality and abrasion resistance of aggregate. The main objective of this study is to develop models for the prediction of MDAL from rock properties; uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness, apparent porosity, void ratio, Cerchar abrasivity index and Bohme abrasion loss are examined. Additionally, the MDAL is modeled using simple regression analysis and multiple linear regression analysis based on the rock properties. The study shows that the MDAL decreases with increasing uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness and Cerchar abrasivity index. It is also concluded that the MDAL increases with increasing apparent porosity, void ratio and Bohme abrasion loss. The modeling results show that the models based on the Bohme abrasion test and L-type Schmidt rebound hardness give the better forecasting performance for the MDAL. Further models, including the uniaxial compressive strength, the apparent porosity and the Cerchar abrasivity index, are developed for rapid estimation of the MDAL of rocks. The developed models were verified by statistical tests, and it can be stated that the proposed models can be used for forecasting aggregate quality.
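The multiple linear regression used for such prediction models can be sketched with ordinary least squares via the normal equations. The rock-property observations below are hypothetical, as are the coefficient names; the study's actual regressions would be fitted the same way.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.

    X: rows of predictor values (an intercept column is added here).
    The small system is solved by Gauss-Jordan elimination, which is
    fine for the handful of rock-property predictors involved.
    """
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Augmented normal-equation matrix [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(k)]
    for c in range(k):                       # Gauss-Jordan elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c and A[c][c]:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Hypothetical (UCS in MPa, apparent porosity in %) -> MDAL (%) data
X = [(120, 1.2), (95, 2.5), (60, 6.0), (150, 0.8)]
y = [8.0, 11.0, 18.0, 6.5]
b0, b_ucs, b_por = fit_linear(X, y)
```

The fitted signs should mirror the trends reported above: a negative coefficient for strength-type predictors and a positive one for porosity.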

  2. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    Treesearch

    Gretchen G. Moisen; Elizabeth A. Freeman; Jock A. Blackard; Tracey S. Frescino; Niklaus E. Zimmermann; Thomas C. Edwards

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's© See5 and Cubist (for binary and continuous responses,...

  3. Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-12-23

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
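The threshold-model link between the hypothetical underlying scale and the observed ordinal score can be sketched as follows. The thresholds and liability values are illustrative, not estimated from the maize data; the probit category probabilities are the standard construction such a model uses.

```python
from math import erf, sqrt

def ordinal_from_liability(liability, thresholds):
    """Map a latent liability value to an ordered category 1..K.

    thresholds: sorted cut points t1 < ... < t_{K-1}; liability below
    t1 gives category 1, between t1 and t2 gives 2, and so on.
    """
    cat = 1
    for t in thresholds:
        if liability > t:
            cat += 1
    return cat

def probit_probs(eta, thresholds):
    """Category probabilities under a probit threshold model:
    P(y = k) = Phi(t_k - eta) - Phi(t_{k-1} - eta)."""
    cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    cuts = [cdf(t - eta) for t in thresholds]
    edges = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(edges, edges[1:])]

# Disease scores 1 (no disease) .. 5 (complete infection) from a
# standard-normal latent scale split by four hypothetical thresholds
thresholds = [-1.5, -0.5, 0.5, 1.5]
scores = [ordinal_from_liability(u, thresholds) for u in (-2.0, 0.0, 2.0)]
probs = probit_probs(0.0, thresholds)
```

A genomic prediction (the linear predictor eta) shifts the latent scale, which in turn shifts probability mass across the ordered disease categories.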

  4. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. 
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  5. Integration of Evidence Base into a Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Saile, Lyn; Lopez, Vilma; Bickham, Grandin; Kerstman, Eric; FreiredeCarvalho, Mary; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    INTRODUCTION: A probabilistic decision support model such as the Integrated Medical Model (IMM) utilizes an immense amount of input data that necessitates a systematic, integrated approach to data collection and management. As a result of this approach, IMM is able to forecast medical events, resource utilization and crew health during space flight. METHODS: Inflight data is the most desirable input for the Integrated Medical Model. Non-attributable inflight data is collected from the Lifetime Surveillance of Astronaut Health study as well as from engineers, flight surgeons, and the astronauts themselves. When inflight data is unavailable, cohort studies, other models and Bayesian analyses are used, supplemented on occasion by subject matter experts' input. To determine the quality of evidence for a medical condition, the data source is categorized and assigned a level of evidence from 1 to 5, with one being the highest. The collected data reside and are managed in a relational SQL database with a web-based interface for data entry and review. The database is also capable of interfacing with outside applications, which expands the capabilities of the database itself. Via the public interface, customers can access a formatted Clinical Findings Form (CLiFF) that outlines the model input and evidence base for each medical condition. Changes to the database are tracked using a documented Configuration Management process. DISCUSSION: This strategic approach provides a comprehensive data management plan for IMM. The IMM Database's structure and architecture have proven to support additional usages, as seen in the analysis of resource utilization across medical conditions. In addition, the IMM Database's web-based interface provides a user-friendly format for customers to browse and download the clinical information for medical conditions. It is this type of functionality that will provide Exploratory Medicine Capabilities the evidence base for their medical condition list. 
CONCLUSION: The IMM Database, in conjunction with the IMM, is helping the NASA aerospace program improve health care and reduce risk for astronaut crews. Both the database and the model will continue to expand to meet customer needs through a multidisciplinary, evidence-based approach to managing data. Future expansion could serve as a platform for a Space Medicine Wiki of medical conditions.

  6. Covariate adjustment of event histories estimated from Markov chains: the additive approach.

    PubMed

    Aalen, O O; Borgan, O; Fekjaer, H

    2001-12-01

    Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet.
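The empirical transition matrix described here can be sketched as a product over event times of elementary matrices (I + dA(t)), the Aalen-Johansen construction. The toy illness-death history below is hypothetical; a covariate-adjusted version would replace the observed transition fractions with additive-model intensity estimates.

```python
def empirical_transition_matrix(events, n_states):
    """Empirical transition matrix for a nonhomogeneous Markov chain.

    events: (time, from_state, to_state, n_at_risk) tuples, where
    n_at_risk counts subjects in from_state just before the event.
    The estimator multiplies, over event times, matrices I + dA(t)
    with the observed transition fraction off-diagonal and minus it
    on the diagonal (so each row of dA sums to zero).
    """
    P = [[1.0 if i == j else 0.0 for j in range(n_states)]
         for i in range(n_states)]                # identity at time 0
    for _, i, j, n_risk in sorted(events):
        inc = [[1.0 if a == b else 0.0 for b in range(n_states)]
               for a in range(n_states)]
        inc[i][j] += 1.0 / n_risk    # one observed i -> j transition
        inc[i][i] -= 1.0 / n_risk
        P = [[sum(P[a][k] * inc[k][b] for k in range(n_states))
              for b in range(n_states)] for a in range(n_states)]
    return P

# Toy illness-death history: states 0=healthy, 1=ill, 2=dead
events = [(1.0, 0, 1, 10), (2.0, 1, 2, 4), (3.0, 0, 2, 9)]
P = empirical_transition_matrix(events, 3)
```

Each row of P is a proper probability distribution over the terminal states, which is the property the martingale theory cited above makes precise.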

  7. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  8. Sidewalk undermining studies : phase III, field and model studies.

    DOT National Transportation Integrated Search

    1979-01-01

    The results of the early studies of the undermining problems are summarized in the initial portion of this report. Additionally, the design and use of a model sidewalk for testing procedures for preventing undermining are described. Based upon tests ...

  9. COMPILATION OF GROUND-WATER MODELS

    EPA Science Inventory

    Ground-water modeling is a computer-based methodology for mathematical analysis of the mechanisms and controls of ground-water systems for the evaluation of policies, actions, and designs that may affect such systems. In addition to satisfying scientific interest in the workings of...

  10. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to a numerical representation of the potential transients and an estimate of the influence of the diffusion coefficient and interelectrode distance on the transient attainment of steady state during overcharge. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
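A maximum permissible overcharge rate of the kind described follows from the steady-state finite linear diffusion limit, i_lim = nFDc/L. The numbers below are illustrative placeholders, not values from the paper.

```python
# Steady-state limiting overcharge current density of a redox shuttle
# under finite linear diffusion: i_lim = n * F * D * c / L.
F = 96485.0   # C/mol, Faraday constant
n = 1         # electrons transferred per shuttle molecule
D = 1.5e-6    # cm^2/s, diffusion coefficient of the additive (assumed)
c = 5.0e-5    # mol/cm^3 (0.05 M), shuttle concentration (assumed)
L = 0.0025    # cm, interelectrode distance (assumed 25 um separator)

i_lim = n * F * D * c / L          # A/cm^2
i_lim_ma_cm2 = i_lim * 1000.0      # same quantity in mA/cm^2
```

Any sustained overcharge current density above i_lim cannot be carried by the shuttle alone, which is how the model bounds the safe overcharge rate for a chosen concentration and cell geometry.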

  11. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1) and ARIMA(0,1,0). Comparing the results of all methods used in this research on the basis of Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. The Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
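One of the compared smoothing methods, Holt's linear trend method, together with the RMSE selection criterion, can be sketched as follows. The hotspot counts and smoothing weights are hypothetical.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (additive trend) exponential smoothing.

    Returns one-step-ahead fitted values and an h-step forecast;
    alpha and beta are illustrative smoothing weights.
    """
    level, trend = series[0], series[1] - series[0]
    fitted = [series[0]]
    for y in series[1:]:
        fitted.append(level + trend)
        level_new = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level_new - level) + (1 - beta) * trend
        level = level_new
    forecast = [level + (h + 1) * trend for h in range(horizon)]
    return fitted, forecast

def rmse(actual, predicted):
    """Root Mean Squared Error, the selection criterion used above."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted))
            / len(actual)) ** 0.5

hotspots = [12, 15, 14, 20, 22, 25, 24, 30]   # hypothetical monthly counts
fitted, forecast = holt_forecast(hotspots, horizon=3)
error = rmse(hotspots, fitted)
```

Each candidate method is fitted to the same series and the one with the smallest in-sample RMSE is chosen for forecasting, as the record describes for Loess decomposition.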

  12. Simulation of blood flow in deformable vessels using subject-specific geometry and spatially varying wall properties

    PubMed Central

    Xiong, Guanglei; Figueroa, C. Alberto; Xiao, Nan; Taylor, Charles A.

    2011-01-01

    SUMMARY Simulation of blood flow using image-based models and computational fluid dynamics has found widespread application to quantifying hemodynamic factors relevant to the initiation and progression of cardiovascular diseases and for planning interventions. Methods for creating subject-specific geometric models from medical imaging data have improved substantially in the last decade but for many problems, still require significant user interaction. In addition, while fluid–structure interaction methods are being employed to model blood flow and vessel wall dynamics, tissue properties are often assumed to be uniform. In this paper, we propose a novel workflow for simulating blood flow using subject-specific geometry and spatially varying wall properties. The geometric model construction is based on 3D segmentation and geometric processing. Variable wall properties are assigned to the model based on combining centerline-based and surface-based methods. We finally demonstrate these new methods using an idealized cylindrical model and two subject-specific vascular models with thoracic and cerebral aneurysms. PMID:21765984

  13. 3DNOW: Image-Based 3d Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  14. Inclusive practices for children and youths with communication disorders. Ad Hoc Committee on Inclusion for students with Communication Disorders.

    PubMed

    1996-01-01

    An array of inclusive service delivery models is recommended for the implementation of services to children and youths with communication disorders. Inclusive practices are intervention services that are based on the unique and specific needs of the individual and provided in a context that is least restrictive. There are a variety of models through which inclusive practices can be provided, including a direct (pull-out) program, classroom-based service delivery, community-based models, and consultative interventions. These models should be seen as flexible options that may change depending on student needs. The speech-language pathologist, in collaboration with parents, the student, teachers, support personnel, and administrators, is in the ideal position to decide the model or combination of models that best serves each individual student's communication needs. Implementation of inclusive practices requires consideration of multiple issues, including general education reform, cost effectiveness, and program efficacy. In addition, administrative and school system support, personnel qualifications, staff development, flexible scheduling, and the effects of inclusive practices on all learners need to be considered. At present, available research suggests guarded optimism for the effectiveness of inclusive practices. However, many critical questions have not yet been addressed and additional research is needed to assess the full impact of inclusive practices for students with communication disorders.

  15. Pedagogical Reasoning and Action: Affordances of Practice-Based Teacher Professional Development

    ERIC Educational Resources Information Center

    Pella, Shannon

    2015-01-01

    A common theme has been consistently woven through the literature on teacher professional development: that practice-based designs and collaboration are two components of effective teacher learning models. In addition to collaboration and practice-based designs, inquiry cycles have been long recognized as catalysts for teacher professional…

  16. Modified signed-digit trinary addition using synthetic wavelet filter

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, K. M.; Razzaque, M. A.

    2000-09-01

    The modified signed-digit (MSD) number system has been a topic of interest as it allows for parallel carry-free addition of two numbers in digital optical computing. In this paper, a harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for the optical implementation of an MSD trinary adder. The realization of carry-propagation-free addition of MSD trinary numerals is demonstrated using a synthetic HWJT correlator model. It is also shown that the proposed synthetic wavelet filter-based correlator delivers high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.
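The carry-free MSD arithmetic itself can be sketched in software using the standard three-step transfer formulation over the digit set {-1, 0, 1} in radix 2. This illustrates only the number-system property the optical correlator exploits, not the HWJT implementation.

```python
def to_int(digits):
    """Value of an MSD number (digits most-significant first, radix 2)."""
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

def msd_add(a, b):
    """Carry-free addition of two MSD numbers with digits in {-1, 0, 1}.

    Two transfer steps rewrite each digit-pair sum s as 2*t + w; the
    rules differ only in how s = +/-1 is split, which is exactly what
    prevents carry chains in the final digit-wise addition.
    """
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + list(a)
    b = [0] * (n - len(b)) + list(b)

    def split(s, transfer_on_one):
        if s == 2:  return 1, 0
        if s == -2: return -1, 0
        if s == 1:  return (1, -1) if transfer_on_one else (0, 1)
        if s == -1: return (-1, 1) if transfer_on_one else (0, -1)
        return 0, 0

    def step(x, y, transfer_on_one):
        t = [0] * (len(x) + 1)   # transfers, shifted one place left
        w = [0] * (len(x) + 1)   # weights, kept in place
        for i, (xi, yi) in enumerate(zip(x, y)):
            t[i], w[i + 1] = split(xi + yi, transfer_on_one)
        return t, w

    t, w = step(a, b, True)      # step 1: transfer on +/-1
    t, w = step(t, w, False)     # step 2: keep +/-1 in place
    return [ti + wi for ti, wi in zip(t, w)]   # step 3: carry-free
```

Because every output position depends only on a fixed, small neighbourhood of input digits, all positions can be computed in parallel, which is what makes the scheme attractive for optical logic.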

  17. Conservative Exposure Predictions for Rapid Risk Assessment of Phase-Separated Additives in Medical Device Polymers.

    PubMed

    Chandrasekar, Vaishnavi; Janes, Dustin W; Saylor, David M; Hood, Alan; Bajaj, Akhil; Duncan, Timothy V; Zheng, Jiwen; Isayeva, Irada S; Forrey, Christopher; Casey, Brendan J

    2018-01-01

    A novel approach for rapid risk assessment of targeted leachables in medical device polymers is proposed and validated. Risk evaluation involves understanding the potential of these additives to migrate out of the polymer, and comparing their exposure to a toxicological threshold value. In this study, we propose that a simple diffusive transport model can be used to provide conservative exposure estimates for phase separated color additives in device polymers. This model has been illustrated using a representative phthalocyanine color additive (manganese phthalocyanine, MnPC) and polymer (PEBAX 2533) system. Sorption experiments of MnPC into PEBAX were conducted in order to experimentally determine the diffusion coefficient, D = (1.6 ± 0.5) × 10⁻¹¹ cm²/s, and matrix solubility limit, Cs = 0.089 wt.%, and model predicted exposure values were validated by extraction experiments. Exposure values for the color additive were compared to a toxicological threshold for a sample risk assessment. Results from this study indicate that a diffusion model-based approach to predict exposure has considerable potential for use as a rapid, screening-level tool to assess the risk of color additives and other small molecule additives in medical device polymers.
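A conservative screening estimate of this kind can be sketched with the standard early-time Fickian release formula for a slab. The wall thickness and exposure time below are hypothetical; only the diffusion coefficient is taken from the record.

```python
from math import sqrt, pi

def release_fraction(D, L, t):
    """Early-time Fickian release fraction from one face of a slab of
    thickness L: M_t / M_inf = 2 * sqrt(D * t / (pi * L**2)), capped
    at 1.0. A common conservative short-time estimate (reasonable up
    to roughly 60 % release).
    """
    return min(1.0, 2.0 * sqrt(D * t / (pi * L ** 2)))

D = 1.6e-11     # cm^2/s, diffusion coefficient reported in the record
L = 0.05        # cm, hypothetical device wall thickness
day = 86400.0
frac_30d = release_fraction(D, L, 30 * day)   # fraction leached in 30 days
```

Multiplying the fraction by the total additive loading gives a conservative exposure, which is then compared against the toxicological threshold.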

  18. Comparing species distribution models constructed with different subsets of environmental predictors

    USGS Publications Warehouse

    Bucklin, David N.; Basille, Mathieu; Benscoter, Allison M.; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.; Speroterra, Carolina; Watling, James I.

    2014-01-01

    Our results indicate that additional predictors have relatively minor effects on the accuracy of climate-based species distribution models and minor to moderate effects on spatial predictions. We suggest that implementing species distribution models with only climate predictors may provide an effective and efficient approach for initial assessments of environmental suitability.

  19. PARADIGM: The Partnership for Advancing Interdisciplinary Global Modeling Annual Report - Year 2

    DTIC Science & Technology

    2004-02-01

    case (a) when bacteria are able to regenerate ammonium based upon the composition of the dissolved organic pool. The export is also slightly larger...for diazotrophs and detritus. The addition of diazotrophs and detritus in the model follow the method of Fennel et al. [2002]. Time series of model

  20. A Modeling Approach to the Development of Students' Informal Inferential Reasoning

    ERIC Educational Resources Information Center

    Doerr, Helen M.; Delmas, Robert; Makar, Katie

    2017-01-01

    Teaching from an informal statistical inference perspective can address the challenge of teaching statistics in a coherent way. We argue that activities that promote model-based reasoning address two additional challenges: providing a coherent sequence of topics and promoting the application of knowledge to novel situations. We take a models and…

  1. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  2. Applying Emax model and bivariate thin plate splines to assess drug interactions

    PubMed Central

    Kong, Maiying; Lee, J. Jack

    2014-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95% point-wise confidence interval as well as its 95% simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies. PMID:20036878
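
    A minimal numerical sketch of the Loewe additive effect under a simple Emax model (E = Emax·d/(ED50 + d), zero baseline, unit Hill slope): the additive effect E for a dose pair (d1, d2) solves d1/D1(E) + d2/D2(E) = 1, where Di(E) is the dose of drug i alone yielding E. The paper's procedure for drugs with unequal maximum effects is more general; parameter values here are assumptions.

```python
def emax_effect(d, emax, ed50):
    """Emax dose-effect curve with zero baseline and unit Hill slope."""
    return emax * d / (ed50 + d)

def inverse_dose(e, emax, ed50):
    """Dose of a single drug producing effect e (requires e < emax)."""
    return ed50 * e / (emax - e)

def loewe_additive_effect(d1, d2, p1, p2):
    """Bisection for the effect E solving d1/D1(E) + d2/D2(E) = 1
    (Loewe additivity). p1, p2 are (emax, ed50) tuples for the two drugs."""
    lo, hi = 0.0, min(p1[0], p2[0]) * (1 - 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        s = d1 / inverse_dose(mid, *p1) + d2 / inverse_dose(mid, *p2)
        if s > 1:        # the doses more than suffice for effect mid -> raise E
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    An observed combination effect above this Loewe prediction indicates synergy, below it antagonism; the paper's thin plate spline estimates that departure surface over the whole dose region.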

  3. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    PubMed

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.

  4. Process Modeling and Validation for Metal Big Area Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W.

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology based on metal arc welding. A continuously fed metal wire is melted by an electric arc that forms between the wire and the substrate, and is deposited in the form of a bead of molten metal along a predetermined path. Objects are manufactured one layer at a time, starting from the base plate. The final properties of the manufactured object depend on its geometry and the metal deposition path, in addition to the basic welding process parameters. Computational modeling can be used to accelerate the development of the mBAAM technology as well as serve as a design and optimization tool for the actual manufacturing process. We have developed a finite element method simulation framework for mBAAM using the new features of the software ABAQUS. The computational simulation of material deposition with heat transfer is performed first, followed by the structural analysis based on the temperature history for predicting the final deformation and stress state. In this formulation, we assume that the two physics phenomena are coupled in only one direction, i.e. the temperatures drive the deformation and internal stresses, but their feedback on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.

  5. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
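
    The AR-based GM error model can be sketched, in its first-order form, as an AR(1) process whose coefficient is set by the correlation time τ; the paper fits higher-order AR-based GM models to real sensor data, so this is only an illustrative stand-in.

```python
import math
import random

def simulate_gm1(tau, sigma, dt, n, seed=0):
    """First-order Gauss-Markov (AR(1)) sensor-error model:
        x[k+1] = exp(-dt/tau) * x[k] + w[k],
        w ~ N(0, sigma^2 * (1 - exp(-2*dt/tau)))
    The driving-noise variance is chosen so the process variance stays at
    sigma^2 in steady state."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, q)
        out.append(x)
    return out
```

    Temperature dependence, as reported above, means τ and σ fitted at one chamber temperature do not transfer to another, so the filter's process-noise model should follow the operating temperature.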

  6. NREL Software Aids Offshore Wind Turbine Designs (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-10-01

    NREL researchers are supporting offshore wind power development with computer models that allow detailed analyses of both fixed and floating offshore wind turbines. While existing computer-aided engineering (CAE) models can simulate the conditions and stresses that a land-based wind turbine experiences over its lifetime, offshore turbines require the additional considerations of variations in water depth, soil type, and wind and wave severity, which also necessitate the use of a variety of support-structure types. NREL's core wind CAE tool, FAST, models the additional effects of incident waves, sea currents, and the foundation dynamics of the support structures.

  7. Climate suitability and human influences combined explain the range expansion of an invasive horticultural plant

    Treesearch

    Carolyn M. Beans; Francis F. Kilkenny; Laura F. Galloway

    2012-01-01

    Ecological niche models are commonly used to identify regions at risk of species invasions. Relying on climate alone may limit a model's success when additional variables contribute to invasion. While a climate-based model may predict the future spread of an invasive plant, we hypothesized that a model that combined climate with human influences would most...

  8. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.

  9. The analysis of a generic air-to-air missile simulation model

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Chappell, Alan R.; Mcmanus, John W.

    1994-01-01

    A generic missile model was developed to evaluate the benefits of using a dynamic missile fly-out simulation system versus a static missile launch envelope system for air-to-air combat simulation. This paper examines the performance of a launch envelope model and a missile fly-out model. The launch envelope model bases its probability of killing the target aircraft on the target aircraft's position at the launch time of the weapon. The benefits gained from a launch envelope model are the simplicity of implementation and the minimal computational overhead required. A missile fly-out model takes into account the physical characteristics of the missile as it simulates the guidance, propulsion, and movement of the missile. The missile's probability of kill is based on the missile miss distance (or the minimum distance between the missile and the target aircraft). The problems associated with this method of modeling are a larger computational overhead, the additional complexity required to determine the missile miss distance, and the additional complexity of determining the reason(s) the missile missed the target. This paper evaluates the two methods and compares the results of running each method on a comprehensive set of test conditions.
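
    The fly-out model's miss-distance computation reduces to the minimum separation between the sampled missile and target tracks; a common (assumed, not from the paper) follow-on step maps miss distance to a kill probability with a lethality function.

```python
import math

def miss_distance(missile_path, target_path):
    """Minimum separation over time of two equal-length sampled 3-D tracks,
    given as lists of (x, y, z) positions at the same time steps."""
    return min(math.dist(m, t) for m, t in zip(missile_path, target_path))

def prob_kill(miss, lethal_radius):
    """Illustrative (assumed) Gaussian lethality mapping from miss distance
    to probability of kill; real models would use warhead/fuze data."""
    return math.exp(-(miss / lethal_radius) ** 2)
```

    This is where the extra complexity noted above enters: unlike the launch envelope lookup, the fly-out model must simulate both trajectories to the point of closest approach before a kill probability can be assigned.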

  10. Complication Reducing Effect of the Information Technology-Based Diabetes Management System on Subjects with Type 2 Diabetes

    PubMed Central

    Cho, Jae-Hyoung; Lee, Jin-Hee; Oh, Jeong-Ah; Kang, Mi-Ja; Choi, Yoon-Hee; Kwon, Hyuk-Sang; Chang, Sang-Ah; Cha, Bong-Yun; Son, Ho-Young; Yoon, Kun-Ho

    2008-01-01

    Objective We introduced a new information technology-based diabetes management system, called the Internet-based glucose monitoring system (IBGMS), and demonstrated its favorable short-term and long-term effects. However, there has been no report so far on the clinical effects of such a diabetes management system on the development of diabetic complications. This study simulated the complication-reducing effect of the IBGMS, given in addition to existing treatments, in patients with type 2 diabetes. Research Design and Methods The CORE Diabetes Model, a peer-reviewed, published, validated computer simulation model, was used to project long-term clinical outcomes in type 2 diabetes patients receiving the IBGMS in addition to their existing treatment. The model combined standard Markov submodels to simulate the incidence and progression of diabetes-related complications. Results The addition of the IBGMS was associated with reductions in diabetic complications, mainly microangiopathic complications, including diabetic retinopathy, diabetic neuropathy, diabetic nephropathy, and diabetic foot ulcer. The IBGMS also delayed the development of all diabetic complications by more than 1 year. Conclusions This study demonstrated that the simulated IBGMS, compared to existing treatment alone, was associated with a reduction in diabetic complications, providing valuable evidence for its practical application. PMID:19885180

  11. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  12. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

  13. Enhancing the Value of Population-Based Risk Scores for Institutional-Level Use.

    PubMed

    Raza, Sajjad; Sabik, Joseph F; Rajeswaran, Jeevanantham; Idrees, Jay J; Trezzi, Matteo; Riaz, Haris; Javadikasgari, Hoda; Nowicki, Edward R; Svensson, Lars G; Blackstone, Eugene H

    2016-07-01

    We hypothesized that factors associated with an institution's residual risk unaccounted for by population-based models may be identifiable and used to enhance the value of population-based risk scores for quality improvement. From January 2000 to January 2010, 4,971 patients underwent aortic valve replacement (AVR), either isolated (n = 2,660) or with concomitant coronary artery bypass grafting (AVR+CABG; n = 2,311). Operative mortality and major morbidity and mortality predicted by The Society of Thoracic Surgeons (STS) risk models were compared with observed values. After adjusting for patients' STS score, additional and refined risk factors were sought to explain residual risk. Differences between STS model coefficients (risk-factor strength) and those specific to our institution were calculated. Observed operative mortality was less than predicted for AVR (1.6% [42 of 2,660] vs 2.8%, p < 0.0001) and AVR+CABG (2.6% [59 of 2,311] vs 4.9%, p < 0.0001). Observed major morbidity and mortality was also lower than predicted for isolated AVR (14.6% [389 of 2,660] vs 17.5%, p < 0.0001) and AVR+CABG (20.0% [462 of 2,311] vs 25.8%, p < 0.0001). Shorter height, higher bilirubin, and lower albumin were identified as additional institution-specific risk factors, and body surface area, creatinine, glomerular filtration rate, blood urea nitrogen, and heart failure across all levels of functional class were identified as refined risk-factor variables associated with residual risk. In many instances, risk-factor strength differed substantially from that of STS models. Scores derived from population-based models can be enhanced for institutional level use by adjusting for institution-specific additional and refined risk factors. Identifying these and measuring differences in institution-specific versus population-based risk-factor strength can identify areas to target for quality improvement initiatives. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  14. Witness for Wellness: preliminary findings from a community-academic participatory research mental health initiative.

    PubMed

    Bluthenthal, Ricky N; Jones, Loretta; Fackler-Lowrie, Nicole; Ellison, Marcia; Booker, Theodore; Jones, Felica; McDaniel, Sharon; Moini, Moraya; Williams, Kamau R; Klap, Ruth; Koegel, Paul; Wells, Kenneth B

    2006-01-01

    Quality improvement programs promoting depression screening and appropriate treatment can significantly reduce racial and ethnic disparities in mental-health care and outcomes. However, promoting the adoption of quality-improvement strategies requires more than the simple knowledge of their potential benefits. To better understand depression issues in racial and ethnic minority communities and to discover, refine, and promote the adoption of evidence-based interventions in these communities, a collaborative academic-community participatory partnership was developed and introduced through a community-based depression conference. This partnership was based on the community-influenced model used by Healthy African-American Families, a community-based agency in south Los Angeles, and the Partners in Care model developed at the UCLA/RAND NIMH Health Services Research Center. The integrated model is described in this paper as well as the activities and preliminary results based on multimethod program evaluation techniques. We found that combining the two models was feasible. Significant improvements in depression identification, knowledge about treatment options, and availability of treatment providers were observed among conference participants. In addition, the conference reinforced in the participants the importance of community mobilization for addressing depression and mental health issues in the community. Although the project is relatively new and ongoing, already substantial gains in community activities in the area of depression have been observed. In addition, new applications of this integrated model are underway in the areas of diabetes and substance abuse. Continued monitoring of this project should help refine the model as well as assist in the identification of process and outcome measures for such efforts.

  15. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country wide inundation maps for different return periods a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff is then input to a two dimensional inundation model, which produces the flood maps. In order to get realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period from 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and shows the difficulty of simulating areas with sinks such as karstic areas or dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.
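
    The routing step can be sketched with the classical Muskingum recursion; in the Muskingum-Cunge variant used in the paper, K and X are derived from channel properties, whereas here they are passed in directly as illustrative parameters.

```python
def muskingum_route(inflow, K, X, dt):
    """Muskingum channel routing:
        O[t+1] = C0 * I[t+1] + C1 * I[t] + C2 * O[t]
    with coefficients derived from the storage constant K, the weighting
    factor X, and the time step dt (consistent time units assumed)."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom   # note: c0 + c1 + c2 = 1
    out = [inflow[0]]                          # assume an initial steady state O = I
    for t in range(len(inflow) - 1):
        out.append(c0 * inflow[t + 1] + c1 * inflow[t] + c2 * out[t])
    return out
```

    Because the coefficients sum to one, a steady inflow passes through unchanged, while a flood wave is attenuated and delayed; numerical stability requires dt to lie between 2KX and 2K(1 - X).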

  16. Structure based classification for bile salt export pump (BSEP) inhibitors using comparative structural modeling of human BSEP

    NASA Astrophysics Data System (ADS)

    Jain, Sankalp; Grandits, Melanie; Richter, Lars; Ecker, Gerhard F.

    2017-06-01

    The bile salt export pump (BSEP) actively transports conjugated monovalent bile acids from the hepatocytes into the bile. This facilitates the formation of micelles and promotes digestion and absorption of dietary fat. Inhibition of BSEP leads to decreased bile flow and accumulation of cytotoxic bile salts in the liver. A number of compounds have been identified to interact with BSEP, which results in drug-induced cholestasis or liver injury. Therefore, in silico approaches for flagging compounds as potential BSEP inhibitors would be of high value in the early stage of the drug discovery pipeline. Up to now, due to the lack of a high-resolution X-ray structure of BSEP, in silico based identification of BSEP inhibitors focused on ligand-based approaches. In this study, we provide a homology model for BSEP, developed using the corrected mouse P-glycoprotein structure (PDB ID: 4M1M). Subsequently, the model was used for docking-based classification of a set of 1212 compounds (405 BSEP inhibitors, 807 non-inhibitors). Using the scoring function ChemScore, a prediction accuracy of 81% on the training set and 73% on two external test sets could be obtained. In addition, the applicability domain of the models was assessed based on Euclidean distance. Further, analysis of the protein-ligand interaction fingerprints revealed certain functional group-amino acid residue interactions that could play a key role for ligand binding. Though ligand-based models, due to their high speed and accuracy, remain the method of choice for classification of BSEP inhibitors, structure-assisted docking models demonstrate reasonably good prediction accuracies while additionally providing information about putative protein-ligand interactions.

  17. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  18. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.

  19. Applying multibeam sonar and mathematical modeling for mapping seabed substrate and biota of offshore shallows

    NASA Astrophysics Data System (ADS)

    Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander

    2017-06-01

    Both basic science and marine spatial planning are in a need of high resolution spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms were tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.

  20. A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area

    USGS Publications Warehouse

    Clarke, K.C.; Hoppen, S.; Gaydos, L.

    1997-01-01

    In this paper we describe a cellular automaton (CA) simulation model developed to predict urban growth as part of a project for estimating the regional and broader impact of urbanization on the San Francisco Bay area's climate. The rules of the model are more complex than those of a typical CA and involve the use of multiple data sources, including topography, road networks, and existing settlement distributions, and their modification over time. In addition, the control parameters of the model are allowed to self-modify: that is, the CA adapts itself to the circumstances it generates, in particular, during periods of rapid growth or stagnation. In addition, the model was written to allow the accumulation of probabilistic estimates based on Monte Carlo methods. Calibration of the model has been accomplished by the use of historical maps to compare model predictions of urbanization, based solely upon the distribution in year 1900, with observed data for years 1940, 1954, 1962, 1974, and 1990. The complexity of this model has made calibration a particularly demanding step. Lessons learned about the methods, measures, and strategies developed to calibrate the model may be of use in other environmental modeling contexts. With the calibration complete, the model is being used to generate a set of future scenarios for the San Francisco Bay area along with their probabilities based on the Monte Carlo version of the model. Animated dynamic mapping of the simulations will be used to allow visualization of the impact of future urban growth.
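
    The core growth mechanism of such a CA can be sketched as follows; this toy version implements only neighbourhood-driven (edge) growth on a small grid and omits the terrain and road layers, the Monte Carlo accumulation, and the self-modification of control parameters described above.

```python
import random

def step(grid, spread_coeff, rng):
    """One CA generation: a non-urban cell urbanizes with probability
    proportional to its number of urban Moore neighbours (toroidal edges)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                continue                      # urbanized cells never revert
            urban_nb = sum(grid[(i + di) % n][(j + dj) % n]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
            if rng.random() < spread_coeff * urban_nb / 8.0:
                new[i][j] = 1
    return new

rng = random.Random(42)
grid = [[0] * 20 for _ in range(20)]
grid[10][10] = 1                              # seed settlement
for _ in range(15):
    grid = step(grid, spread_coeff=0.5, rng=rng)
urban_cells = sum(map(sum, grid))
```

    Self-modification, in the full model, would adjust spread_coeff itself in response to the recent growth rate, which is what lets the CA reproduce both boom periods and stagnation.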

  1. Accounting for dominance to improve genomic evaluations of dairy cows for fertility and milk production traits.

    PubMed

    Aliloo, Hassan; Pryce, Jennie E; González-Recio, Oscar; Cocks, Benjamin G; Hayes, Ben J

    2016-02-01

    Dominance effects may contribute to the genetic variation of complex traits in dairy cattle, especially traits closely related to fitness such as fertility. However, traditional genetic evaluations generally ignore dominance effects and consider additive genetic effects only. The availability of dense single nucleotide polymorphism (SNP) panels provides the opportunity to investigate the role of dominance in the quantitative variation of complex traits at both the SNP and animal levels. Including dominance effects in the genomic evaluation of animals could also help to increase the accuracy of prediction of future phenotypes. In this study, we estimated additive and dominance variance components for fertility and milk production traits of genotyped Holstein and Jersey cows in Australia. The predictive abilities of a model that accounts for additive effects only (additive) and a model that accounts for both additive and dominance effects (additive + dominance) were compared in fivefold cross-validation. Estimates of the proportion of dominance variation relative to phenotypic variation captured by SNPs were up to 3.8 and 7.1 % for production traits in Holstein and Jersey cows, respectively, whereas for fertility they were 1.2 % in Holstein and very close to zero in Jersey cows. We found that including dominance in the model was not consistently advantageous. Based on likelihood ratio tests, the additive + dominance model fitted the data better than the additive model for milk, fat and protein yields in both breeds. However, regarding the prediction of phenotypes assessed with fivefold cross-validation, including dominance effects improved accuracy only for fat yield in Holstein cows. Regression coefficients of phenotypes on genetic values and mean squared errors of prediction showed that the predictive ability of the additive + dominance model was superior to that of the additive model for some of the traits. In both breeds, dominance effects were significant (P < 0.01) for all milk production traits but not for fertility. Accuracy of prediction of phenotypes was slightly increased by including dominance effects in the genomic evaluation model. This can help to better identify high-performing individuals and inform culling decisions.
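    Models of this kind typically build separate additive and dominance genomic relationship matrices from SNP genotypes. The sketch below uses common VanRaden-style scalings on random 0/1/2 genotypes; the exact matrix constructions used in the study may differ, and the data here are purely illustrative.

```python
# Hedged sketch: additive (G) and dominance (D) genomic relationship matrices
# from a 0/1/2 genotype matrix, of the kind used when a dominance term is
# added to a genomic evaluation model.
import numpy as np

rng = np.random.default_rng(2)
n_animals, n_snps = 20, 200
M = rng.integers(0, 3, size=(n_animals, n_snps))   # genotypes coded 0/1/2
p = M.mean(axis=0) / 2                              # observed allele frequencies

# Additive GRM: centre genotypes by 2p, scale by the sum of 2p(1-p)
Z = M - 2 * p
G = Z @ Z.T / np.sum(2 * p * (1 - p))

# Dominance GRM: heterozygote indicator centred by 2p(1-p)
H = (M == 1).astype(float) - 2 * p * (1 - p)
D = H @ H.T / np.sum((2 * p * (1 - p)) ** 2)
```

    Both matrices then enter a mixed model with separate additive and dominance variance components, which is what the likelihood ratio tests in the abstract compare against the additive-only model.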

  2. Stochastic Game Analysis and Latency Awareness for Self-Adaptation

    DTIC Science & Technology

    2014-01-01

    ...this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the... Additional Key Words and Phrases: proactive adaptation, stochastic multiplayer games, latency... The contribution of this paper is twofold: (1) a novel analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to...

  3. Modeling contrast agent flow in cerebral aneurysms: comparison of CFD with medical imaging

    NASA Astrophysics Data System (ADS)

    Rayz, Vitaliy; Vali, Alireza; Sigovan, Monica; Lawton, Michael; Saloner, David; Boussel, Loic

    2016-11-01

    PURPOSE: The flow in cerebral aneurysms is routinely assessed with X-ray angiography, an imaging technique based on contrast agent injection. In addition to requiring patient catheterization and radiation exposure, X-ray angiography may inaccurately estimate the flow residence time, as the injection alters the native blood flow patterns. Numerical modeling of contrast transport based on MR imaging provides a non-invasive alternative for flow diagnostics. METHODS: The flow in 3 cerebral aneurysms was measured in vivo with 4D PC-MRI, which provides a time-resolved, 3D velocity field. The measured velocities were used to simulate contrast agent transport by solving the advection-diffusion equation. In addition, the flow in the same patient-specific geometries was simulated with CFD, and the velocities obtained from the Navier-Stokes solution were used to model the transport of a virtual contrast agent. RESULTS: Contrast filling and washout patterns obtained in simulations based on MRI-measured velocities were in agreement with those obtained using the Navier-Stokes solution. Some discrepancies were observed in comparison to the X-ray angiography data, as numerical modeling of the contrast transport is based on the native blood flow, unaffected by the contrast injection. NIH HL115267.
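    The governing equation of the virtual-contrast simulation can be illustrated in one dimension. The study solves the full 3D advection-diffusion problem on measured and CFD velocity fields; the toy version below uses a fixed velocity, invented parameter values, and a simple upwind/explicit scheme, and only shows the equation in action.

```python
# Illustrative sketch: transporting a "virtual contrast" bolus with a 1D
# advection-diffusion equation (first-order upwind advection + explicit
# central diffusion), with continuous injection at the inlet.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
u, D = 1.0, 1e-3                           # velocity and diffusivity (arbitrary units)
dt = 0.4 * min(dx / u, dx * dx / (2 * D))  # stable explicit time step

c = np.zeros(nx)
c[:20] = 1.0                               # contrast present near the inlet

for _ in range(200):
    adv = -u * (c - np.roll(c, 1)) / dx    # upwind difference (valid for u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + dif)
    c[0] = 1.0                             # keep injecting contrast at the inlet

front_position = dx * int(np.argmax(c < 0.5))  # rough location of the washin front
```

    Tracking how long each region takes to fill and wash out in such a simulation is what yields the residence-time estimates the abstract compares between methods.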

  4. Low-energy proton induced M X-ray production cross sections for 70Yb, 81Tl and 82Pb

    NASA Astrophysics Data System (ADS)

    Shehla; Mandal, A.; Kumar, Ajay; Roy Chowdhury, M.; Puri, Sanjiv; Tribedi, L. C.

    2018-07-01

    The cross sections for production of Mk (k = Mξ, Mαβ, Mγ, Mm1) X-rays of 70Yb, 81Tl and 82Pb induced by 50-250 keV protons have been measured in the present work. The experimental cross sections have been compared with earlier reported values and with those calculated using ionization cross sections based on the ECPSSR model (incident-ion energy (E) loss, Coulomb (C) deflection, perturbed (P) stationary (S) state (S), and relativistic (R) corrections), X-ray emission rates based on the Dirac-Fock model, and fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model. In addition, the measured proton-induced X-ray production cross sections have been compared with those calculated using DHS-model-based ionization cross sections and those based on the plane-wave Born approximation (PWBA). The measured M X-ray production cross sections are, in general, higher than the ECPSSR- and DHS-model-based values and lower than the PWBA-model-based cross sections.

  5. First- and Second-Line Bevacizumab in Addition to Chemotherapy for Metastatic Colorectal Cancer: A United States–Based Cost-Effectiveness Analysis

    PubMed Central

    Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.

    2015-01-01

    Purpose The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
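    The mechanics of a Markov cohort cost-effectiveness comparison like the one above can be sketched briefly. The states, transition probabilities, costs, utilities, and discount rate below are all invented for illustration; only the structure (cohort trace, discounted costs and QALYs, incremental cost-effectiveness ratio) mirrors the paper's method.

```python
# Toy Markov cohort model: stable / progressed / dead, monthly cycles,
# comparing a cheaper baseline against a costlier regimen that slows
# progression. All numbers are hypothetical.
import numpy as np

def run_cohort(p_progress, p_death, cost_cycle, utility, n_cycles=60):
    """Return total discounted cost and QALYs per patient."""
    disc = 1.03 ** (-1 / 12)                 # 3%/year discount, applied monthly
    state = np.array([1.0, 0.0, 0.0])        # everyone starts in 'stable'
    P = np.array([[1 - p_progress - p_death, p_progress, p_death],
                  [0.0, 1 - 2 * p_death, 2 * p_death],   # progression doubles mortality
                  [0.0, 0.0, 1.0]])                      # dead is absorbing
    cost = qaly = 0.0
    for t in range(n_cycles):
        alive = state[0] + state[1]
        cost += cost_cycle * alive * disc ** t
        qaly += utility * alive / 12 * disc ** t         # monthly QALY accrual
        state = state @ P
    return cost, qaly

c0, q0 = run_cohort(0.08, 0.02, 3000, 0.75)   # baseline chemotherapy
c1, q1 = run_cohort(0.06, 0.02, 8000, 0.75)   # slower progression, higher cost
icer = (c1 - c0) / (q1 - q0)                  # $ per QALY gained
```

    Univariable sensitivity analysis then amounts to re-running this comparison while varying one input (e.g. drug cost or the progression probability) over its plausible range.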

  6. A diagnostic interface for the ICOsahedral Non-hydrostatic (ICON) modelling framework based on the Modular Earth Submodel System (MESSy v2.50)

    NASA Astrophysics Data System (ADS)

    Kern, Bastian; Jöckel, Patrick

    2016-10-01

    Numerical climate and weather models have advanced to finer scales, accompanied by large amounts of output data, and model systems now hit the input/output (I/O) bottleneck of modern high-performance computing (HPC) systems. We aim to apply diagnostic methods online during the model simulation, instead of applying them as a post-processing step to written output data, to reduce the amount of I/O. To include diagnostic tools in the model system, we implemented a standardised, easy-to-use interface based on the Modular Earth Submodel System (MESSy) in the ICOsahedral Non-hydrostatic (ICON) modelling framework. The integration of the diagnostic interface into the model system is briefly described. Furthermore, we present a prototype implementation of an advanced online diagnostic tool for the aggregation of model data onto a user-defined regular coarse grid. This diagnostic tool will be used to reduce the amount of model output in future simulations. Performance tests of the interface and of two different diagnostic tools show that the interface itself introduces no overhead in the form of additional runtime to the model system. The diagnostic tools, however, have a significant impact on the model system's runtime. This overhead strongly depends on the characteristics and implementation of the diagnostic tool: a tool with heavy inter-process communication introduces large overhead, whereas the additional runtime of a tool without inter-process communication is low. We briefly describe our efforts to reduce the additional runtime caused by the diagnostic tools and present a brief analysis of memory consumption. Future work will focus on optimising the memory footprint and the I/O operations of the diagnostic interface.

  7. Quantitative Analyses about Market- and Prevalence-Based Needs for Adapted Physical Education Teachers in the Public Schools in the United States

    ERIC Educational Resources Information Center

    Zhang, Jiabei

    2011-01-01

    The purpose of this study was to analyze quantitative needs for more adapted physical education (APE) teachers based on both market- and prevalence-based models. The market-based need for more APE teachers was examined based on APE teacher positions funded, while the prevalence-based need for additional APE teachers was analyzed based on students…

  8. From in vitro to in vivo: Integration of the virtual cell based assay with physiologically based kinetic modelling.

    PubMed

    Paini, Alicia; Sala Benito, Jose Vicente; Bessems, Jos; Worth, Andrew P

    2017-12-01

    Physiologically based kinetic (PBK) models and the virtual cell based assay can be linked to form so-called physiologically based dynamic (PBD) models. This study illustrates the development and application of a PBK/D model for prediction of estragole-induced DNA adduct formation and hepatotoxicity in humans. To address the hepatotoxicity, HepaRG cells were used as a surrogate for liver cells, with cell viability as the in vitro toxicological endpoint. Information on DNA adduct formation was taken from the literature. Since estragole-induced cell damage is not caused directly by the parent compound but by a reactive metabolite, information on the metabolic pathway was incorporated into the model. In addition, a user-friendly tool was developed by implementing the PBK/D model in a KNIME workflow. This workflow can be used to perform in vitro to in vivo extrapolation and both forward and backward dosimetry in support of chemical risk assessment. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
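    The kinetic core of any PBK-style model is a set of compartment ODEs. As a minimal stand-in for the far richer multi-compartment model the study describes, the sketch below integrates a single-compartment oral-absorption model with forward Euler; the rate constants, volume, and dose are invented, and no metabolite chemistry is included.

```python
# Minimal sketch of a one-compartment kinetic model: first-order absorption
# from the gut into a central compartment with first-order elimination.
# All parameter values are hypothetical.
import numpy as np

ka, ke = 1.2, 0.3       # absorption and elimination rate constants (1/h)
V = 40.0                # volume of distribution (L)
dose = 100.0            # oral dose (mg)

dt, t_end = 0.01, 24.0
n = int(t_end / dt)
gut, central = dose, 0.0
conc = np.empty(n)
for i in range(n):
    absorbed = ka * gut * dt        # drug leaving the gut this step
    eliminated = ke * central * dt  # drug cleared from the central compartment
    gut -= absorbed
    central += absorbed - eliminated
    conc[i] = central / V           # plasma concentration (mg/L)

cmax = conc.max()
tmax = (np.argmax(conc) + 1) * dt   # time of peak concentration (h)
```

    A full PBK/D model extends this pattern to many physiological compartments and couples the resulting tissue concentrations to an in vitro dose-response, which is the linkage the abstract describes.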

  9. A technology path to tactical agent-based modeling

    NASA Astrophysics Data System (ADS)

    James, Alex; Hanratty, Timothy P.

    2017-05-01

    Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch in decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations in which individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.

  10. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz, using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of BER versus SNR, which enables the communication required for smart grid applications.
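    The BER-versus-SNR evaluation at the heart of the abstract can be illustrated with a Monte Carlo simulation. The sketch below uses plain BPSK over AWGN as a stand-in; the actual system additionally applies the ABCD-parameter channel response, cyclostationary noise, and OFDM modulation, none of which are modelled here.

```python
# Hedged illustration: Monte Carlo BER-vs-SNR points for BPSK over AWGN,
# the kind of performance curve the paper produces for its NB-PLC model.
import numpy as np

rng = np.random.default_rng(3)

def ber_bpsk(snr_db, n_bits=200_000):
    """Estimate bit error rate for BPSK at the given Eb/N0 (dB) over AWGN."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                       # BPSK mapping: 0 -> -1, 1 -> +1
    snr = 10 ** (snr_db / 10)
    noise = rng.normal(0, np.sqrt(1 / (2 * snr)), n_bits)
    detected = (symbols + noise > 0).astype(int) # hard-decision threshold at 0
    return float(np.mean(detected != bits))

bers = [ber_bpsk(s) for s in (0, 4, 8)]          # BER at 0, 4, 8 dB SNR
```

    Plotting such points on a log scale against SNR gives the waterfall curve used to judge whether the channel supports the data rates smart grid applications require.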

  11. Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain

    NASA Astrophysics Data System (ADS)

    Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa

    2017-10-01

    Despite its important role in human health and numerous biological processes, the diffuse component of erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but here they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added as a predictor to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows r² equal to 0.91 and a relative root-mean-square error (rRMSE) of 6.1 %. This entirely empirical model performs better than previous semi-empirical approaches and needs no additional information from other physically based models. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
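    Fitting an empirical diffuse-fraction model amounts to a nonlinear least-squares fit of a chosen functional form to measured data. The sketch below fits a simple sigmoid in a clearness index to synthetic observations; the actual RAU3 model has a different functional form with extra predictors (including total ozone column), so both the form and the data here are illustrative stand-ins.

```python
# Illustrative sketch of fitting an empirical diffuse-fraction model by
# nonlinear least squares, then evaluating it with rRMSE as in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def diffuse_fraction(kt, a, b):
    # Sigmoid: clearer skies (high kt) -> lower diffuse share
    return 1.0 / (1.0 + np.exp(a * (kt - b)))

rng = np.random.default_rng(4)
kt = rng.uniform(0.1, 0.8, 300)                 # synthetic clearness index
true = diffuse_fraction(kt, 12.0, 0.45)
obs = np.clip(true + rng.normal(0, 0.03, kt.size), 0, 1)

params, _ = curve_fit(diffuse_fraction, kt, obs, p0=(10.0, 0.5))
pred = diffuse_fraction(kt, *params)
rrmse = np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()
```

    Validation against an independent subset, as the study does, would simply apply the fitted `params` to held-out measurements before computing rRMSE.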

  12. Comparing two remote video survey methods for spatial predictions of the distribution and environmental niche suitability of demersal fishes.

    PubMed

    Galaiduk, Ronen; Radford, Ben T; Wilson, Shaun K; Harvey, Euan S

    2017-12-15

    Information on habitat associations from survey data, combined with spatial modelling, allows the development of more refined species distribution models, which may identify areas of high conservation or fisheries value and consequently improve conservation efforts. Generalised additive models were used to model the probability of occurrence of six focal species in surveys that utilised two remote underwater video sampling methods (baited and towed video). Models developed for the towed video method had consistently better predictive performance for all but one study species, although only three models had a good to fair fit and the rest were poor fits, highlighting the challenges of modelling habitat associations of marine species in highly homogeneous, low-relief environments. Models based on the baited video dataset regularly included large-scale measures of structural complexity, suggesting fish are attracted to a single focus point by bait. Conversely, models based on the towed video data often incorporated small-scale measures of habitat complexity and were more likely to reflect true species-habitat relationships. The cost of using towed video systems to survey low-relief seascapes was also relatively low, providing additional support for considering this method for marine spatial ecological modelling.

  13. State-of-the-Art Review on Physiologically Based Pharmacokinetic Modeling in Pediatric Drug Development.

    PubMed

    Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T

    2018-05-18

    Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. 
It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.

  14. Efficient Agent-Based Models for Non-Genomic Evolution

    NASA Technical Reports Server (NTRS)

    Gupta, Nachi; Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Modeling dynamical systems composed of aggregations of primitive proteins is critical to astrobiological research on early evolutionary structures and the origins of life. Unfortunately, traditional non-multi-agent methods either require oversimplified models or are slow to converge to adequate solutions. This paper shows how to address these deficiencies by modeling the protein aggregations through a utility-based multi-agent system. In this method each agent controls the properties of a set of proteins assigned to that agent. Some of these properties determine the dynamics of the system, such as the ability of some proteins to join or split other proteins, while additional properties determine the aggregation's fitness as a viable primitive cell. We show that, over a wide range of starting conditions, there are mechanisms that allow protein aggregations to achieve high values of overall fitness. In addition, through the use of agent-specific utilities that remain aligned with the overall global utility, we are able to reach these conclusions with 50 times fewer learning steps.
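    The join-with-utility idea can be sketched in miniature: clusters merge only when doing so raises a fitness function. The fitness function, target size, and greedy acceptance rule below are invented for illustration and are far simpler than the authors' learning-based, agent-specific-utility system.

```python
# Toy sketch of utility-driven aggregation: random pairs of clusters propose
# a merge, and the merge is accepted only if it raises a global fitness that
# rewards a target aggregate size. Purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
target = 8                                     # fitness peaks at clusters of size 8

def fitness(sizes):
    return -float(np.sum((np.array(sizes) - target) ** 2))

clusters = [1] * 32                            # start as 32 free monomers
for _ in range(500):
    if len(clusters) < 2:
        break
    i, j = rng.choice(len(clusters), 2, replace=False)
    merged = clusters[:]
    s = merged.pop(max(i, j))
    merged[min(i, j)] += s                     # propose joining clusters i and j
    if fitness(merged) > fitness(clusters):    # accept only fitness-improving joins
        clusters = merged

final_fitness = fitness(clusters)
```

    Replacing the single global fitness with per-agent utilities that stay aligned with it is the paper's key refinement for faster convergence.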

  15. A residual-based shock capturing scheme for the continuous/discontinuous spectral element solution of the 2D shallow water equations

    NASA Astrophysics Data System (ADS)

    Marras, Simone; Kopera, Michal A.; Constantinescu, Emil M.; Suckale, Jenny; Giraldo, Francis X.

    2018-04-01

    The high-order numerical solution of the non-linear shallow water equations is susceptible to Gibbs oscillations in the proximity of strong gradients. In this paper, we tackle this issue by presenting a shock capturing model based on the numerical residual of the solution. Via numerical tests, we demonstrate that the model removes the spurious oscillations in the proximity of strong wave fronts while preserving their strength. Furthermore, for coarse grids, it prevents energy from building up at small wave-numbers. When applied to the continuity equation to stabilize the water surface, the addition of the shock capturing scheme does not affect mass conservation. We found that our model improves the continuous and discontinuous Galerkin solutions alike in the proximity of sharp fronts propagating on wet surfaces. In the presence of wet/dry interfaces, however, the model needs to be enhanced with an inundation scheme, which we do not address in this paper.

  16. A compact model of the reverse gate-leakage current in GaN-based HEMTs

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoyu; Huang, Junkai; Fang, Jielin; Deng, Wanling

    2016-12-01

    The gate-leakage behavior in GaN-based high electron mobility transistors (HEMTs) is studied as a function of applied bias and temperature. A model to calculate this current is given, which shows that trap-assisted tunneling, trap-assisted Frenkel-Poole (FP) emission, and direct Fowler-Nordheim (FN) tunneling have their main contributions at different electric field regions. In addition, the proposed model clearly illustrates the effect of traps and their assistance to the gate leakage. We have demonstrated the validity of the model by comparisons between model simulation results and measured experimental data of HEMTs, and a good agreement is obtained.

  17. Thermodynamic perturbation theory for fused sphere hard chain fluids using nonadditive interactions

    NASA Astrophysics Data System (ADS)

    Abu-Sharkh, Basel F.; Sunaidi, Abdallah; Hamad, Esam Z.

    2004-03-01

    A model is developed for the equation of state of fused chains based on Wertheim thermodynamic perturbation theory and nonadditive size interactions. The model also assumes that the structure (represented by the radial distribution function) of the fused chain fluid is the same as that of the touching hard sphere chain fluid. The model is completely based on spherical additive and nonadditive size interactions. The model has the advantage of offering good agreement with simulation data while at the same time being independent of fitted parameters. The model is most accurate for short chains, small values of Δ (slightly fused spheres) and at intermediate (liquidlike) densities.

  18. Develop nondestructive rapid pavement quality assurance/quality control evaluation test methods and supporting technology : project summary.

    DOT National Transportation Integrated Search

    2017-01-01

    The findings from the proof of concept with mechanics-based models for flexible base suggest additional validation work should be performed, draft construction specification frameworks should be developed, and work extending the technology to stabili...

  20. Heat capacities and volumetric changes in the glass transition range: a constitutive approach based on the standard linear solid

    NASA Astrophysics Data System (ADS)

    Lion, Alexander; Mittermeier, Christoph; Johlitz, Michael

    2017-09-01

    A novel approach to represent the glass transition is proposed. It is based on a physically motivated extension of the linear viscoelastic Poynting-Thomson model. In addition to a temperature-dependent damping element and two linear springs, two thermal strain elements are introduced. In order to take the process dependence of the specific heat into account and to model its characteristic behaviour below and above the glass transition, the Helmholtz free energy contains an additional contribution which depends on the temperature history and on the current temperature. The model describes the process-dependent volumetric and caloric behaviour of glass-forming materials, and defines a functional relationship between pressure, volumetric strain, and temperature. If a model for the isochoric part of the material behaviour is already available, for example a model of finite viscoelasticity, the caloric and volumetric behaviour can be represented with the current approach. The proposed model allows computing the isobaric and isochoric heat capacities in closed form. The difference c_p - c_v is process-dependent and tends towards the classical expression in the glassy and equilibrium ranges. Simulations and theoretical studies demonstrate the physical significance of the model.

  1. Bremerhaven's groundwater under climate change - a FREEWAT case study (Das Bremerhavener Grundwasser im Klimawandel - Eine FREEWAT-Fallstudie)

    NASA Astrophysics Data System (ADS)

    Panteleit, Björn; Jensen, Sven; Seiter, Katherina; Siebert, Yvonne

    2018-01-01

    A 3D structural model was created for the state of Bremen based on an extensive borehole database. Parameters were assigned to the model by interpretation and interpolation of the borehole descriptions. This structural model was transferred into a flow model via the FREEWAT platform, an open-source plug-in of the free QGIS software, with connection to the MODFLOW code. This groundwater management tool is intended for long-term use. As a case study for the FREEWAT Project, possible effects of climate change on groundwater levels in the Bremerhaven area have been simulated. In addition to the calibration year 2010, scenarios with a sea-level rise and decreasing groundwater recharge were simulated for the years 2040, 2070 and 2100. In addition to seawater intrusion in the coastal area, declining groundwater levels are also a concern. Possibilities for future groundwater management already include active control of the water level of a lake and the harbor basin. With the help of a focused groundwater monitoring program based on the model results, the planned flow model can become an important forecasting tool for groundwater management within the framework of the planned continuous model management and for representing the effects of changing climatic conditions and mitigation measures.

  2. Temperature-Dependent Conformations of Model Viscosity Index Improvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramasamy, Uma Shantini; Cosimbescu, Lelia; Martini, Ashlie

    2015-05-01

    Lubricants are composed of base oils and additives, where additives are chemicals deliberately added to the oil to enhance properties and inhibit degradation of the base oils. Viscosity index (VI) improvers are an important class of additives that reduce the decline of fluid viscosity with temperature [1], enabling optimum lubricant performance over a wider range of operating temperatures. These additives are typically high molecular weight polymers, such as, but not limited to, polyisobutylenes, olefin copolymers, and polyalkylmethacrylates, that are added in concentrations of 2-5% (w/w). Appropriate polymers, when dissolved in base oil, expand from a coiled to an uncoiled state with increasing temperature [2]. The ability of VI additives to increase their molar volume and improve the temperature-viscosity dependence of lubricants suggests there is a strong relationship between molecular structure and additive functionality [3]. In this work, we aim to quantify the changes in polymer size with temperature for four polyisobutylene (PIB) based molecular structures at the nano-scale using molecular simulation tools. As expected, the results show that the polymers adopt more conformations at higher temperatures, and there is a clear indication that the expandability of a polymer is strongly influenced by molecular structure.

  3. Standard solar model

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Kim, Y.-C.; Pinsonneault, M. H.

    1992-01-01

    A set of solar models has been constructed, each based on a single modification to the physics of a reference solar model. In addition, a model combining several of the improvements has been calculated to provide a best solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The impact of these improvements on both the structure and the frequencies of the low-l p-modes of the model is discussed. It is found that the combined solar model, which is based on the best physics available (and does not contain any ad hoc assumptions), reproduces the observed oscillation spectrum (for low-l) within the errors associated with the uncertainties in the model physics (primarily opacities).

  4. Visual features as stepping stones toward semantics: Explaining object similarity in IT and perception with non-negative least squares

    PubMed Central

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Mur, Marieke

    2016-01-01

    Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. 
Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level, more purely semantic representation. PMID:26493748

  5. Visual features as stepping stones toward semantics: Explaining object similarity in IT and perception with non-negative least squares.

    PubMed

    Jozwik, Kamila M; Kriegeskorte, Nikolaus; Mur, Marieke

    2016-03-01

    Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as "human", "mammal", and "animal"). The feature-based model includes both object parts (such as "eye", "tail", and "handle") and other descriptive features (such as "circular", "green", and "stubbly"). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. 
Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level, more purely semantic representation. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
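The model-fitting step described in the record above, non-negative least squares, can be sketched in a few lines. This is an illustrative projected-gradient solver, not the authors' implementation; the tiny design matrix, step size, and iteration count below are arbitrary choices for demonstration.

```python
def nnls_pg(A, b, eta=0.01, iters=5000):
    """Non-negative least squares by projected gradient descent.

    Minimizes ||Ax - b||^2 subject to x >= 0, by taking a gradient step
    and projecting onto the non-negative orthant each iteration.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 * A^T r
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then clip negatives to zero (the projection)
        x = [max(0.0, x[j] - eta * g[j]) for j in range(n)]
    return x
```

For a small overdetermined system whose unconstrained least-squares solution has a negative coordinate, the non-negativity constraint pins that coordinate to zero, which is exactly the behavior that makes NNLS weights interpretable as (non-negative) model contributions.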

  6. Numerical simulation of residual stress in laser based additive manufacturing process

    NASA Astrophysics Data System (ADS)

    Kalyan Panda, Bibhu; Sahoo, Seshadev

    2018-03-01

Minimizing the residual stress build-up in metal-based additive manufacturing plays a pivotal role in selecting a particular material and technique for making an industrial part. In beam-based additive manufacturing, although a great deal of effort has been made to minimize the residual stresses, it is still elusive how to do so by simply optimizing the processing parameters, such as beam size, beam power, and scan speed. Among the different types of additive manufacturing processes, Direct Metal Laser Sintering (DMLS) uses a high-power laser to melt and sinter layers of metal powder. The rapid solidification and heat transfer in the powder bed produce a high cooling rate, which leads to the build-up of residual stresses that affect the mechanical properties of the built parts. In the present work, the authors develop a numerical thermo-mechanical model for the estimation of residual stress in AlSi10Mg build samples using the finite element method. The transient temperature distribution in the powder bed was assessed using a coupled thermal-structural model. Subsequently, the residual stresses were estimated for varying laser power. The simulation results show that the melt pool dimensions increase with increasing laser power, and that the magnitude of the residual stresses in the built part increases as well.
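As a rough first-order illustration of why the rapid cooling described above builds up residual stress, the fully constrained thermal-stress estimate sigma = E * alpha * dT can be computed directly. The material constants below are illustrative placeholders, not values from the paper, which solves the full coupled thermo-mechanical problem.

```python
def thermal_stress(youngs_modulus, expansion_coeff, delta_T):
    """First-order residual stress for a fully constrained cooling layer:
    sigma = E * alpha * dT (all SI units)."""
    return youngs_modulus * expansion_coeff * delta_T
```

With illustrative aluminum-alloy-like values (E = 70 GPa, alpha = 2e-5 /K) a 100 K constrained temperature drop already yields stresses on the order of 100 MPa, which is why cooling-rate control matters.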

  7. The use of a numerical method to justify the criteria for the maximum settlement of the tank foundation

    NASA Astrophysics Data System (ADS)

    Tarasenko, Alexander; Chepur, Petr; Gruchenkova, Alesya

    2017-11-01

    The article examines the problem of assessing the permissible values of uneven settlement for a vertical steel tank base and foundation. A numerical experiment was performed using a finite element model of the tank. The model took into account the geometric shape of the structure and its additional stiffening elements that affect the stress-strain state of the tank. An equation was obtained that allowed determining the maximum possible deformation of the bottom outer contour during uneven settlement. Depending on the length of the uneven settlement zone, the values of the permissible settlement of the tank base were determined. The article proposes new values of the maximum permissible tank settlement with additional stiffening elements.

  8. Modular Architecture for Integrated Model-Based Decision Support.

    PubMed

    Gaebel, Jan; Schreiber, Erik; Oeser, Alexander; Oeltze-Jafra, Steffen

    2018-01-01

    Model-based decision support systems promise to be a valuable addition to oncological treatments and the implementation of personalized therapies. For the integration and sharing of decision models, the involved systems must be able to communicate with each other. In this paper, we propose a modularized architecture of dedicated systems for the integration of probabilistic decision models into existing hospital environments. These systems interconnect via web services and provide model sharing and processing capabilities for clinical information systems. Along the lines of IHE integration profiles from other disciplines and the meaningful reuse of routinely recorded patient data, our approach aims for the seamless integration of decision models into hospital infrastructure and the physicians' daily work.

  9. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    NASA Astrophysics Data System (ADS)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role in simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features aligned with the current surface hypothesis. The associated external image force is integrated into the objective function of the active contour model, such that the overall segmentation approach benefits both from the advantages associated with snakes and from those associated with machine learning-based regression. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (the Rotterdam segmentation challenge).

  10. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is given based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
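The pairwise-interaction term of the Potts prior mentioned above can be illustrated with a direct energy count: a labeling is penalized for every pair of 4-neighbor pixels that disagree, so lower energy means a smoother segmentation. This minimal sketch only evaluates the prior energy; the paper's inference machinery (loopy belief propagation, hyperparameter estimation) is not reproduced here.

```python
def potts_energy(labels, beta=1.0):
    """Potts-prior energy of a 2-D label array: beta times the number of
    4-neighbor pairs with differing labels (lower = smoother labeling)."""
    h, w = len(labels), len(labels[0])
    disagreements = 0
    for i in range(h):
        for j in range(w):
            # count each vertical and horizontal neighbor pair once
            if i + 1 < h and labels[i][j] != labels[i + 1][j]:
                disagreements += 1
            if j + 1 < w and labels[i][j] != labels[i][j + 1]:
                disagreements += 1
    return beta * disagreements
```

A uniform label field has zero energy, while a field split into two regions pays only along the boundary, which is what biases the posterior toward spatially coherent segments.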

  11. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants.

    PubMed

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-11-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier-Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. © 2014 Wiley Periodicals, Inc.
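For intuition about the flow shear stress that drives cell stimulation in the model above, the textbook laminar (Poiseuille) wall-shear-stress formula gives a first-order local estimate. The paper solves the full transient Navier-Stokes equations on the scaffold geometry; treating a pore as a circular channel, as below, is purely an illustrative simplification.

```python
import math

def wall_shear_stress(flow_rate, radius, viscosity):
    """Wall shear stress of laminar Poiseuille flow in a circular channel:
    tau_w = 4 * mu * Q / (pi * R**3)  (SI units)."""
    return 4.0 * viscosity * flow_rate / (math.pi * radius ** 3)
```

The strong R**-3 dependence is the reason interconnect size has such a large effect: halving the channel radius at fixed flow rate raises the wall shear stress eightfold.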

  12. Tuned and non-Higgsable U(1)s in F-theory

    DOE PAGES

    Wang, Yi-Nan

    2017-03-01

We study the tuning of U(1) gauge fields in F-theory models on a base of general dimension. We construct a formula that computes the change in Weierstrass moduli when such a U(1) is tuned, based on the Morrison-Park form of a Weierstrass model with an additional rational section. Using this formula, we propose the form of “minimal tuning” on any base, which corresponds to the case where the decrease in the number of Weierstrass moduli is minimal. Applying this result, we discover some universal features of bases with non-Higgsable U(1)s. Mathematically, a generic elliptic fibration over such a base has additional rational sections. Physically, this condition implies the existence of a U(1) gauge group in the low-energy supergravity theory after compactification that cannot be Higgsed away. In particular, we show that the elliptic Calabi-Yau manifold over such a base has a small number of complex structure moduli. We also suggest that non-Higgsable U(1)s can never appear on any toric bases. Finally, we construct the first example of a threefold base with non-Higgsable U(1)s.

  13. A game theory-based trust measurement model for social networks.

    PubMed

    Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong

    2016-01-01

In social networks, trust is a complex social concept. Participants in online social networks want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent. Modeling trust needs to consider interaction history, recommendations, user behaviors, and so on. Therefore, modeling trust is an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects: service reliability, feedback effectiveness, and recommendation credibility, to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective. The free-riding problem can be resolved effectively by adding the proposed punishment mechanism.
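The three-aspect trust computation described above can be sketched as a weighted aggregation of scores in [0, 1]. The weights below are arbitrary placeholders for illustration only; the paper derives its actual aggregation and punishment mechanism game-theoretically rather than with fixed weights.

```python
def trust_degree(service_reliability, feedback_effectiveness,
                 recommendation_credibility, weights=(0.4, 0.3, 0.3)):
    """Combine the three trust aspects into a single degree in [0, 1].

    The weights are illustrative assumptions, not the paper's values.
    """
    aspects = (service_reliability, feedback_effectiveness,
               recommendation_credibility)
    assert all(0.0 <= a <= 1.0 for a in aspects), "aspect scores must be in [0, 1]"
    return sum(w * a for w, a in zip(weights, aspects))
```

Because each aspect is bounded and the weights sum to one, the combined degree stays in [0, 1], so a punishment mechanism can act on it directly (e.g., by capping the service-reliability input of free-riders).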

  14. Marker-Based Estimates Reveal Significant Non-additive Effects in Clonally Propagated Cassava (Manihot esculenta): Implications for the Prediction of Total Genetic Value and the Selection of Varieties.

    PubMed

    Wolfe, Marnin D; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc

    2016-08-31

In clonally propagated crops, non-additive genetic effects can be effectively exploited by the identification of superior genetic individuals as varieties. Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop that feeds hundreds of millions. We quantified the amount and nature of non-additive genetic variation for three key traits in a breeding population of cassava from sub-Saharan Africa using additive and non-additive genome-wide marker-based relationship matrices. We then assessed the accuracy of genomic prediction for total (additive plus non-additive) genetic value. We confirmed previous findings, based on diallel populations, that non-additive genetic variation is significant for key cassava traits. Specifically, we found that dominance is particularly important for root yield and that epistasis contributes strongly to variation in CMD resistance. Further, we showed that total genetic value predicted observed phenotypes more accurately than additive-only models for root yield, but not for dry matter content, which is mostly additive, or for CMD resistance, which has high narrow-sense heritability. We address the implications of these results for cassava breeding and put our work in the context of previous results in cassava and other plant and animal species. Copyright © 2016 Author et al.
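The additive genome-wide relationship matrix underlying analyses like the one above is commonly built with VanRaden's formula, G = Z Z' / (2 * sum_j p_j (1 - p_j)), where Z centers 0/1/2 genotype codes by twice the allele frequency. This is a generic sketch of that standard construction, not the authors' pipeline (which also fits non-additive matrices).

```python
def additive_grm(genotypes):
    """VanRaden additive genomic relationship matrix from a genotype
    matrix coded 0/1/2 (rows = individuals, columns = markers)."""
    n, m = len(genotypes), len(genotypes[0])
    # allele frequency per marker, estimated from the sample
    p = [sum(genotypes[i][j] for i in range(n)) / (2.0 * n) for j in range(m)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    # center genotype codes by twice the allele frequency
    Z = [[genotypes[i][j] - 2.0 * p[j] for j in range(m)] for i in range(n)]
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]
```

The resulting matrix is symmetric, with diagonal elements near 1 + F (inbreeding) and off-diagonals proportional to realized genomic relatedness; an individual exactly at the population mean genotype has a zero row.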

  15. Models of care for the management of hepatitis C virus among people who inject drugs: one size does not fit all.

    PubMed

    Bruggmann, Philip; Litwin, Alain H

    2013-08-01

One of the major obstacles to hepatitis C virus (HCV) care in people who inject drugs (PWID) is the lack of treatment settings that are suitably adapted for the needs of this vulnerable population. Nevertheless, HCV treatment has been delivered successfully to PWID through various multidisciplinary models such as community-based clinics, substance abuse treatment clinics, and specialized hospital-based clinics. Models may be integrated in primary care, all under one roof in either addiction care units or general practitioner-based models, or can occur in secondary or tertiary care settings. Additional innovative models include directly observed therapy and peer-based models. A high level of acceptance of the individual life circumstances of PWID, rather than rigid exclusion criteria, will determine the level of success of any model of HCV management. The impact of highly potent and well-tolerated interferon-free HCV treatment regimens will remain negligible as long as access to therapy cannot be expanded to the most affected risk groups.

  16. A twin study of specific bulimia nervosa symptoms.

    PubMed

    Mazzeo, S E; Mitchell, K S; Bulik, C M; Aggen, S H; Kendler, K S; Neale, M C

    2010-07-01

    Twin studies have suggested that additive genetic factors significantly contribute to liability to bulimia nervosa (BN). However, the diagnostic criteria for BN remain controversial. In this study, an item-factor model was used to examine the BN diagnostic criteria and the genetic and environmental contributions to BN in a population-based twin sample. The validity of the equal environment assumption (EEA) for BN was also tested. Participants were 1024 female twins (MZ n=614, DZ n=410) from the population-based Mid-Atlantic Twin Registry. BN was assessed using symptom-level (self-report) items consistent with DSM-IV and ICD-10 diagnostic criteria. Items assessing BN were included in an item-factor model. The EEA was measured by items assessing similarity of childhood and adolescent environment, which have demonstrated construct validity. Scores on the EEA factor were used to specify the degree to which twins shared environmental experiences in this model. The EEA was not violated for BN. Modeling results indicated that the majority of the variance in BN was due to additive genetic factors. There was substantial variability in additive genetic and environmental contributions to specific BN symptoms. Most notably, vomiting was very strongly influenced by additive genetic factors, while other symptoms were much less heritable, including the influence of weight on self-evaluation. These results highlight the importance of assessing eating disorders at the symptom level. Refinement of eating disorder phenotypes could ultimately lead to improvements in treatment and targeted prevention, by clarifying sources of variation for specific components of symptomatology.
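As background to the additive-genetic decomposition discussed above, the classic Falconer back-of-envelope ACE estimates can be computed directly from MZ and DZ twin correlations. The study itself fits a full item-factor model with an explicitly measured equal-environment assumption; this sketch only shows the textbook approximation.

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer-style ACE variance decomposition from twin correlations.

    A (additive genetic) = 2 * (rMZ - rDZ)
    C (shared environment) = 2 * rDZ - rMZ
    E (unique environment) = 1 - rMZ
    """
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2
```

For example, an MZ correlation of 0.6 with a DZ correlation of 0.3 implies roughly 60% additive genetic variance, no shared-environment variance, and 40% unique-environment variance, the kind of pattern reported above for highly heritable symptoms such as vomiting.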

  17. Prioritizing Conservation of Ungulate Calving Resources in Multiple-Use Landscapes

    PubMed Central

    Dzialak, Matthew R.; Harju, Seth M.; Osborn, Robert G.; Wondzell, John J.; Hayden-Wing, Larry D.; Winstead, Jeffrey B.; Webb, Stephen L.

    2011-01-01

    Background Conserving animal populations in places where human activity is increasing is an ongoing challenge in many parts of the world. We investigated how human activity interacted with maternal status and individual variation in behavior to affect reliability of spatially-explicit models intended to guide conservation of critical ungulate calving resources. We studied Rocky Mountain elk (Cervus elaphus) that occupy a region where 2900 natural gas wells have been drilled. Methodology/Principal Findings We present novel applications of generalized additive modeling to predict maternal status based on movement, and of random-effects resource selection models to provide population and individual-based inference on the effects of maternal status and human activity. We used a 2×2 factorial design (treatment vs. control) that included elk that were either parturient or non-parturient and in areas either with or without industrial development. Generalized additive models predicted maternal status (parturiency) correctly 93% of the time based on movement. Human activity played a larger role than maternal status in shaping resource use; elk showed strong spatiotemporal patterns of selection or avoidance and marked individual variation in developed areas, but no such pattern in undeveloped areas. This difference had direct consequences for landscape-level conservation planning. When relative probability of use was calculated across the study area, there was disparity throughout 72–88% of the landscape in terms of where conservation intervention should be prioritized depending on whether models were based on behavior in developed areas or undeveloped areas. Model validation showed that models based on behavior in developed areas had poor predictive accuracy, whereas the model based on behavior in undeveloped areas had high predictive accuracy. 
Conclusions/Significance By directly testing for differences between developed and undeveloped areas, and by modeling resource selection in a random-effects framework that provided individual-based inference, we conclude that: 1) amplified selection or avoidance behavior and individual variation, as responses to increasing human activity, complicate conservation planning in multiple-use landscapes, and 2) resource selection behavior in places where human activity is predictable or less dynamic may provide a more reliable basis from which to prioritize conservation action. PMID:21297866

  18. A Physiologically-Based Pharmacokinetic (PBPK) Model With Metabolic Interactions of Chloroform (CHCL3) and Trichloroethylene

    EPA Science Inventory

    Exposure to mixtures is frequent, but biologic pathways such as metabolic inhibition, are poorly understood. CHCl3 and TCE are model volatiles frequently co-occurring; combined exposure results in less than additive hepatotoxicity. Here, we explore the underlying metabolic inte...

  19. Mathematical and experimental investigations of modeling, simulation and experiment to promote the life-cycle of polymer modified asphalt.

    DOT National Transportation Integrated Search

    2014-07-01

    The formulation of constitutive equations for asphaltic pavement is based on rheological models which include the asphalt mixture, additives, and the bitumen. In terms of the asphalt, the rheology addresses the flow and permanent deformation in time,...

  20. Genomic BLUP including additive and dominant variation in purebreds and F1 crossbreds, with an application in pigs.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Elsen, Jean-Michel; Misztal, Ignacy; Herring, William; Legarra, Andrès

    2016-01-29

    Most developments in quantitative genetics theory focus on the study of intra-breed/line concepts. With the availability of massive genomic information, it becomes necessary to revisit the theory for crossbred populations. We propose methods to construct genomic covariances with additive and non-additive (dominance) inheritance in the case of pure lines and crossbred populations. We describe substitution effects and dominant deviations across two pure parental populations and the crossbred population. Gene effects are assumed to be independent of the origin of alleles and allelic frequencies can differ between parental populations. Based on these assumptions, the theoretical variance components (additive and dominant) are obtained as a function of marker effects and allelic frequencies. The additive genetic variance in the crossbred population includes the biological additive and dominant effects of a gene and a covariance term. Dominance variance in the crossbred population is proportional to the product of the heterozygosity coefficients of both parental populations. A genomic BLUP (best linear unbiased prediction) equivalent model is presented. We illustrate this approach by using pig data (two pure lines and their cross, including 8265 phenotyped and genotyped sows). For the total number of piglets born, the dominance variance in the crossbred population represented about 13 % of the total genetic variance. Dominance variation is only marginally important for litter size in the crossbred population. We present a coherent marker-based model that includes purebred and crossbred data and additive and dominant actions. Using this model, it is possible to estimate breeding values, dominant deviations and variance components in a dataset that comprises data on purebred and crossbred individuals. These methods can be exploited to plan assortative mating in pig, maize or other species, in order to generate superior crossbred individuals in terms of performance.
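The dominance relationship matrix used alongside the additive one in models like the above can be built from markers as D = W W' / sum_j (2 p_j q_j)^2. The per-genotype coding below follows one common orthogonal (Vitezica-style) parameterization and is stated here as an assumption; the paper's crossbred extension with population-specific allele frequencies is not reproduced.

```python
def dominance_grm(genotypes):
    """Marker-based dominance relationship matrix from 0/1/2 genotypes.

    Assumed coding per marker (allele frequency p, q = 1 - p):
      genotype 2 -> -2*q*q, genotype 1 -> 2*p*q, genotype 0 -> -2*p*p,
    which has zero mean and variance (2pq)^2 under Hardy-Weinberg.
    """
    n, m = len(genotypes), len(genotypes[0])
    p = [sum(genotypes[i][j] for i in range(n)) / (2.0 * n) for j in range(m)]

    def w(g, pj):
        qj = 1.0 - pj
        return {2: -2.0 * qj * qj, 1: 2.0 * pj * qj, 0: -2.0 * pj * pj}[g]

    W = [[w(genotypes[i][j], p[j]) for j in range(m)] for i in range(n)]
    denom = sum((2.0 * pj * (1.0 - pj)) ** 2 for pj in p)
    return [[sum(W[i][k] * W[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]
```

Note that individuals with mirror-image homozygous genotypes get identical dominance codes, so they appear fully correlated in D even though they are negatively correlated in the additive matrix; dominance captures heterozygosity patterns, not allele dosage.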

  1. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept, with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires two types of tests constituting a database: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
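The constant-stress part of the Kachanov damage rate concept referred to above, d(omega)/dt = A * (sigma / (1 - omega))**r, can be integrated numerically until rupture (omega reaching 1). The stress-rate term that the paper adds is omitted here, and the constants A, r, and dt are illustrative placeholders.

```python
def kachanov_rupture_time(stress, A=1e-4, r=3.0, dt=0.5):
    """Forward-Euler integration of the classical Kachanov damage rate
    d(omega)/dt = A * (stress / (1 - omega))**r, starting from omega = 0,
    until omega reaches 1 (rupture). Returns the rupture time."""
    omega, t = 0.0, 0.0
    while omega < 1.0:
        omega += dt * A * (stress / (1.0 - omega)) ** r
        t += dt
    return t
```

For constant stress the closed-form rupture time is t_R = 1 / ((r + 1) * A * sigma**r), which the Euler result approaches as dt shrinks; with the defaults above and sigma = 1 this gives t_R = 2500.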

  2. Crowd evacuation model based on bacterial foraging algorithm

    NASA Astrophysics Data System (ADS)

    Shibiao, Mu; Zhijun, Chen

    To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, given walking and queue times, a target optimization function is built. At the same time, a BFA is used to optimize the objective function. Finally, through real and simulation experiments, the relationship between the parameters of evacuation time, exit width, pedestrian density, and average evacuation speed is analyzed. The results show that the model can effectively describe a real evacuation.

  3. Quantitative Prediction of Drug–Drug Interactions Involving Inhibitory Metabolites in Drug Development: How Can Physiologically Based Pharmacokinetic Modeling Help?

    PubMed Central

    Chen, Y; Mao, J; Lin, J; Yu, H; Peters, S; Shebley, M

    2016-01-01

    This subteam under the Drug Metabolism Leadership Group (Innovation and Quality Consortium) investigated the quantitative role of circulating inhibitory metabolites in drug–drug interactions using physiologically based pharmacokinetic (PBPK) modeling. Three drugs with major circulating inhibitory metabolites (amiodarone, gemfibrozil, and sertraline) were systematically evaluated in addition to the literature review of recent examples. The application of PBPK modeling in drug interactions by inhibitory parent–metabolite pairs is described and guidance on strategic application is provided. PMID:27642087

  4. A Measurement and Power Line Communication System Design for Renewable Smart Grids

    NASA Astrophysics Data System (ADS)

    Kabalci, E.; Kabalci, Y.

    2013-10-01

Data communication over electric power lines can be managed easily and economically since grid connections are already spread all over the world. This paper investigates the applicability of Power Line Communication (PLC) in an energy generation system based on photovoltaic (PV) panels, with a modeling study in Matlab/Simulink. The Simulink model covers the designed PV panels, a boost converter with a Perturb and Observe (P&O) control algorithm, a full bridge inverter, and the binary phase shift keying (BPSK) modem that is used to transfer the measured data over the power lines. This study proposes a novel method to use the electrical power lines not only for carrying the line voltage but also for transmitting the measurements of renewable energy generation plants. The aim is to minimize additional monitoring costs from systems such as SCADA, Ethernet-based, or GSM-based solutions by using the proposed technique. Although this study is performed with solar power plants, the proposed model can be applied to other renewable generation systems.
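The BPSK modem at the heart of the measurement link can be sketched with a coherent correlator receiver: each bit maps to a carrier segment with phase 0 or pi, and the receiver correlates each segment against the reference carrier. This is a noiseless, discretely sampled illustration, not the Simulink model from the paper; the one-cycle-per-bit carrier and sample count are arbitrary choices.

```python
import math

def bpsk_modulate(bits, samples_per_bit=8):
    """Map each bit to one carrier cycle with phase 0 (bit 1) or pi (bit 0)."""
    signal = []
    for bit in bits:
        phase = 0.0 if bit else math.pi
        for k in range(samples_per_bit):
            signal.append(math.cos(2.0 * math.pi * k / samples_per_bit + phase))
    return signal

def bpsk_demodulate(signal, samples_per_bit=8):
    """Coherent detection: correlate each segment with the reference carrier;
    a positive correlation means phase 0 (bit 1), negative means pi (bit 0)."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        corr = sum(signal[i + k] * math.cos(2.0 * math.pi * k / samples_per_bit)
                   for k in range(samples_per_bit))
        bits.append(1 if corr > 0 else 0)
    return bits
```

In a real PLC channel the received signal is noisy and attenuated, but the sign of the correlation remains the decision statistic, which is what makes BPSK robust on power-line links.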

  5. The clinical and cost burden of coronary calcification in a Medicare cohort: An economic model to address under-reporting and misclassification.

    PubMed

    Garrison, Louis P; Lewin, Jack; Young, Christopher H; Généreux, Philippe; Crittendon, Janna; Mann, Marita R; Brindis, Ralph G

    2015-01-01

    Coronary artery calcification (CAC) is a well-established risk factor for the occurrence of adverse ischemic events. However, the economic impact of the presence of CAC is unknown. Through an economic model analysis, we sought to estimate the incremental impact of CAC on medical care costs and patient mortality for de novo percutaneous coronary intervention (PCI) patients in the 2012 cohort of the Medicare elderly (≥65) population. This aggregate burden-of-illness study is incidence-based, focusing on cost and survival outcomes for an annual Medicare cohort based on the recently introduced ICD9 code for CAC. The cost analysis uses a one-year horizon, and the survival analysis considers lost life years and their economic value. For calendar year 2012, an estimated 200,945 index (de novo) PCI procedures were performed in this cohort. An estimated 16,000 Medicare beneficiaries (7.9%) were projected to have had severe CAC, generating an additional cost in the first year following their PCI of $3500, on average, or $56 million in total. In terms of mortality, the model projects that an additional 397 deaths would be attributable to severe CAC in 2012, resulting in 3770 lost life years, representing an estimated loss of about $377 million, when valuing lost life years at $100,000 each. These model-based CAC estimates, considering both moderate and severe CAC patients, suggest an annual burden of illness approaching $1.3 billion in this PCI cohort. The potential clinical and cost consequences of CAC warrant additional clinical and economic attention not only on PCI strategies for particular patients but also on reporting and coding to achieve better evidence-based decision-making. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
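The headline figures in this burden-of-illness abstract follow from simple multiplications, which can be made explicit:

```python
def cac_burden(patients, extra_cost_per_patient, life_years_lost,
               value_per_life_year):
    """First-year direct cost and mortality valuation, as in the abstract:
    direct cost = patients * extra cost; mortality value = lost life
    years * value per life year."""
    direct_cost = patients * extra_cost_per_patient
    mortality_value = life_years_lost * value_per_life_year
    return direct_cost, mortality_value
```

With the abstract's inputs (16,000 patients at $3500 extra first-year cost, 3770 lost life years valued at $100,000 each) this reproduces the reported $56 million direct-cost and $377 million mortality totals.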

  6. Spatial analysis of plague in California: niche modeling predictions of the current distribution and potential response to climate change

    PubMed Central

    Holt, Ashley C; Salkeld, Daniel J; Fritz, Curtis L; Tucker, James R; Gong, Peng

    2009-01-01

    Background Plague, caused by the bacterium Yersinia pestis, is a public and wildlife health concern in California and the western United States. This study explores the spatial characteristics of positive plague samples in California and tests Maxent, a machine-learning method that can be used to develop niche-based models from presence-only data, for mapping the potential distribution of plague foci. Maxent models were constructed using geocoded seroprevalence data from surveillance of California ground squirrels (Spermophilus beecheyi) as case points and Worldclim bioclimatic data as predictor variables, and compared and validated using area under the receiver operating curve (AUC) statistics. Additionally, model results were compared to locations of positive and negative coyote (Canis latrans) samples, in order to determine the correlation between Maxent model predictions and areas of plague risk as determined via wild carnivore surveillance. Results Models of plague activity in California ground squirrels, based on recent climate conditions, accurately identified case locations (AUC of 0.913 to 0.948) and were significantly correlated with coyote samples. The final models were used to identify potential plague risk areas based on an ensemble of six future climate scenarios. These models suggest that by 2050, climate conditions may reduce plague risk in the southern parts of California and increase risk along the northern coast and Sierras. Conclusion Because different modeling approaches can yield substantially different results, care should be taken when interpreting future model predictions. Nonetheless, niche modeling can be a useful tool for exploring and mapping the potential response of plague activity to climate change. The final models in this study were used to identify potential plague risk areas based on an ensemble of six future climate scenarios, which can help public managers decide where to allocate surveillance resources. 
In addition, Maxent model results were significantly correlated with coyote samples, indicating that carnivore surveillance programs will continue to be important for tracking the response of plague to future climate conditions. PMID:19558717
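The AUC statistic used above to validate the Maxent models has a direct rank interpretation: it is the probability that a randomly chosen presence point scores higher than a randomly chosen background point, with ties counting half. This generic sketch computes that statistic by exhaustive comparison (fine for small score sets); it is not Maxent itself.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney) interpretation:
    the fraction of (presence, background) pairs ranked correctly."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC near 0.5 indicates a model no better than chance, while the 0.913 to 0.948 values reported above mean the models rank almost all presence locations above background locations.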

  7. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.

    PubMed

    McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey

    2012-04-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
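The model-free, cached-value learning described above is classically captured by a temporal-difference update driven by a dopamine-like prediction error: the discrepancy between predicted and received reward. This minimal TD(0) sketch is a textbook illustration of that circuit-level idea, not a claim about the paper's methods; the learning rate and discount are arbitrary defaults.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One model-free TD(0) update of a cached value table V:
    delta = r + gamma * V[s_next] - V[s]  (the prediction error),
    then V[s] is nudged toward the better estimate by alpha * delta."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta
```

The key contrast with a model-based system is that V stores only a scalar common-currency value per state, so a change in the sensory features of a reward, with its cached value unchanged, produces no prediction error and no learning.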

  8. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    NASA Astrophysics Data System (ADS)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model improves significantly on corrections based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels, which is especially apparent in fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variant of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning represented the general additive effects of diffuse radiation better than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model comparison on the red and near-infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
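
The C-correction logic that the paper adapts starts from the classic SCS+C form, in which the empirical parameter C is obtained from a linear regression of radiance against the cosine of the local illumination angle. A minimal numpy sketch of that baseline correction (the paper's SCnS geometry and weighting variants are not reproduced here):

```python
import numpy as np

def scs_c_correction(radiance, cos_i, slope, solar_zenith):
    """Classic SCS+C topographic correction (baseline, not the SCnS variant).
    radiance: at-sensor radiance per pixel
    cos_i: cosine of the local illumination angle per pixel
    slope: terrain slope per pixel, radians
    solar_zenith: solar zenith angle, radians (scalar)."""
    # C parameter from the linear regression radiance = a + b * cos_i
    b, a = np.polyfit(cos_i, radiance, 1)
    c = a / b
    # The SCS geometry preserves the geotropic (vertical) growth of trees:
    # pixels are normalized to the sun-slope reference cos(slope)*cos(zenith).
    return radiance * (np.cos(slope) * np.cos(solar_zenith) + c) / (cos_i + c)
```

Pixels with cos_i below the sun-slope reference (i.e. shaded slopes) are brightened, and the additive C term damps over-correction at very low illumination, which is the diffuse-irradiance effect the paper's radiance partitioning addresses more explicitly.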

  9. An empirical approach to modeling methylmercury concentrations in an Adirondack stream watershed

    USGS Publications Warehouse

    Burns, Douglas A.; Nystrom, Elizabeth A.; Wolock, David M.; Bradley, Paul M.; Riva-Murray, Karen

    2014-01-01

    Inverse empirical models can inform and improve more complex process-based models by quantifying the principal factors that control water quality variation. Here we developed a multiple regression model that explains 81% of the variation in filtered methylmercury (FMeHg) concentrations in Fishing Brook, a fourth-order stream in the Adirondack Mountains, New York, a known “hot spot” of Hg bioaccumulation. This model builds on previous observations that wetland-dominated riparian areas are the principal source of MeHg to this stream and is based on 43 samples collected during a 33-month period in 2007–2009. Explanatory variables include those that represent the effects of water temperature, streamflow, and modeled riparian water table depth on seasonal and annual patterns of FMeHg concentrations. An additional variable represents the effect of an upstream pond in decreasing FMeHg concentrations. Model results suggest that temperature-driven effects on net Hg methylation rates are the principal control on annual FMeHg concentration patterns. Additionally, streamflow dilutes FMeHg concentrations during the cold dormant season. The model further indicates that the depth and persistence of the riparian water table as simulated by TOPMODEL are dominant controls on FMeHg concentration patterns during the warm growing season, an effect especially evident when concentrations during the dry summer of 2007 were less than half of those in the wetter summers of 2008 and 2009. This modeling approach may help identify the principal factors that control variation in surface water FMeHg concentrations in other settings, which can guide the appropriate application of process-based models.

  10. Model-Based Spectrum Management. Part 1: Modeling and Computation Manual, Version 2.0

    DTIC Science & Technology

    2013-12-01

    Occurrence of Occlusion by the Earth’s Surface C- 4 Figure C-6. Scenario for Evaluating the Significance of Angle Discrepancy in Using Planar...their transmit power at those locations. Many developers of DSA systems seek more aggressive sharing that favors behaviors allowing compatible reuse...provide behavioral guidance that allows finer coexistence mechanisms, e.g., mechanisms based on sensing and timing in addition to location as means to

  11. Impact of fitting dominance and additive effects on accuracy of genomic prediction of breeding values in layers.

    PubMed

    Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M

    2016-10-01

    Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. 
In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
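
The accuracy definition used in this study, the correlation between corrected phenotypes and estimated breeding values of the validation animals divided by the square root of the heritability estimate, is straightforward to compute. A sketch with made-up numbers:

```python
import numpy as np

def prediction_accuracy(phenotypes, gebv, heritability):
    """Accuracy of genomic prediction as defined in the layer study:
    correlation between phenotypes and (G)EBV of validation animals,
    divided by the square root of the heritability estimate."""
    r = np.corrcoef(phenotypes, gebv)[0, 1]
    return r / np.sqrt(heritability)

# Hypothetical validation data: perfectly correlated EBV, h2 = 0.25,
# so the accuracy is r / sqrt(h2) = 1 / 0.5 = 2 (capped at 1 in practice
# by sampling noise in real data).
acc = prediction_accuracy([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0], 0.25)
```

Dividing by the square root of heritability rescales the phenotype-EBV correlation to an estimate of the correlation between true and estimated breeding values.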

  12. Formal Analysis of Self-Efficacy in Job Interviewee’s Mental State Model

    NASA Astrophysics Data System (ADS)

    Ajoge, N. S.; Aziz, A. A.; Yusof, S. A. Mohd

    2017-08-01

    This paper presents a formal analysis approach for a self-efficacy model of an interviewee’s mental state during a job interview session. Self-efficacy is a construct that has been hypothesised to combine with motivation and interviewee anxiety to define the state influence of interviewees. The conceptual model was built based on psychological theories and models related to self-efficacy. A number of well-known relations between events and the course of self-efficacy are summarized from the literature, and it is shown that the proposed model exhibits those patterns. In addition, this formal model has been mathematically analysed to find out which stable situations exist. Finally, it is pointed out how this model can be used in a software agent or robot-based platform. Such a platform can provide an interview coaching approach in which support is provided to the user based on their individual mental state during interview sessions.


  13. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.

  14. Improving students’ mathematical critical thinking through rigorous teaching and learning model with informal argument

    NASA Astrophysics Data System (ADS)

    Hamid, H.

    2018-01-01

    The purpose of this study is to analyze the improvement of students’ mathematical critical thinking (CT) ability in a Real Analysis course using the Rigorous Teaching and Learning (RTL) model with informal argument. In addition, this research also examined students’ CT in relation to their initial mathematical ability (IMA). This study was conducted at a private university in academic year 2015/2016. The study employed the quasi-experimental method with a pretest-posttest control group design. The participants were 83 students: 43 in the experimental group and 40 in the control group. The findings showed that students in the experimental group outperformed students in the control group in mathematical CT ability at each IMA level (high, medium, low) in learning Real Analysis. Among students with medium IMA, the improvement in mathematical CT ability of those exposed to the RTL model with informal argument was greater than that of those exposed to conventional instruction (CI). There was no interaction effect between learning model (RTL or CI) and IMA level on the improvement of mathematical CT ability. Finally, at all IMA levels, students exposed to the RTL model with informal argument showed a significantly greater improvement in all indicators of mathematical CT ability than students exposed to CI.

  15. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
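
As one concrete illustration of a genomic additive relationship matrix of the kind discussed here, the widely used VanRaden construction centres the SNP genotype matrix by allele frequencies and scales by the expected heterozygosity. This is a generic sketch, not the paper's CE or QM formulation:

```python
import numpy as np

def additive_grm(genotypes):
    """Additive genomic relationship matrix, VanRaden-style sketch.
    genotypes: (n_individuals, n_markers) array of 0/1/2 allele counts."""
    m = np.asarray(genotypes, dtype=float)
    p = m.mean(axis=0) / 2.0              # per-marker allele frequencies
    w = m - 2.0 * p                       # centre by expected allele count
    denom = 2.0 * np.sum(p * (1.0 - p))   # scale to relationship units
    return w @ w.T / denom
```

The resulting matrix plays the role of the pedigree-based numerator relationship matrix in GBLUP/GREML; a dominance relationship matrix is built analogously from dominance-coded genotypes.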

  16. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.

  17. The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval

    DTIC Science & Technology

    2006-07-01

    reflect those of the spon- sor. viii ABSTRACT Unigram Language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework . We

  18. A Long-Term Performance Enhancement Method for FOG-Based Measurement While Drilling

    PubMed Central

    Zhang, Chunxi; Lin, Tie

    2016-01-01

    In the oil industry, measurement-while-drilling (MWD) systems are usually used to provide the real-time position and orientation of the bottom hole assembly (BHA) during drilling. However, present MWD systems based on magnetic surveying technology can barely ensure good performance because of magnetic interference phenomena. In this paper, a MWD surveying system based on a fiber optic gyroscope (FOG) was developed to replace the magnetic surveying system. To accommodate the size constraints of downhole drilling, a new design method is adopted. In order to realize long-term, high-precision position and orientation surveying, an integrated surveying algorithm is proposed based on an inertial navigation system (INS) and drilling features. In addition, the FOG-based MWD error model is built and the drilling features are analyzed. The state-space system model and the observation update model of the Kalman filter are built. To validate the availability and utility of the algorithm, a semi-physical simulation was conducted under laboratory conditions. Comparison with traditional algorithms shows that errors are suppressed and that the measurement precision of the proposed algorithm is better than that of the traditional ones. In addition, the proposed method requires far less time than the zero velocity update (ZUPT) method. PMID:27483270
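
For intuition, the zero velocity update (ZUPT) mentioned above can be illustrated with a one-dimensional toy filter: velocity is integrated from biased acceleration and corrected by a zero-velocity pseudo-measurement whenever the tool is known to be stationary. This is a deliberately simplified sketch, not the paper's full FOG/INS error-state filter:

```python
def zupt_kalman(accel, dt, q=1e-4, r=1e-4):
    """Minimal 1-D zero-velocity update (ZUPT) illustration.
    accel: iterable of (acceleration, is_stationary) pairs
    dt: sample interval; q, r: process and measurement noise variances.
    A full MWD filter would jointly estimate attitude, position and
    sensor biases; here only a scalar velocity error is corrected."""
    v, p_cov = 0.0, 1.0
    out = []
    for a, stationary in accel:
        v += a * dt                  # predict: integrate acceleration
        p_cov += q                   # covariance grows with process noise
        if stationary:               # ZUPT pseudo-measurement: velocity is 0
            k = p_cov / (p_cov + r)  # Kalman gain
            v -= k * v               # innovation is (0 - v)
            p_cov *= (1 - k)
        out.append(v)
    return out

# Hypothetical biased accelerometer (0.1 m/s^2 bias) with a stationary
# interval at the end: the ZUPT collapses the accumulated velocity error.
vel = zupt_kalman([(0.1, False)] * 5 + [(0.1, True)], dt=1.0)
```

Between stationary intervals the velocity error grows without bound, which is why ZUPT-style corrections (or, as in the paper, drilling-feature aided updates) are essential for long-term accuracy.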

  19. A Long-Term Performance Enhancement Method for FOG-Based Measurement While Drilling.

    PubMed

    Zhang, Chunxi; Lin, Tie

    2016-07-28

    In the oil industry, measurement-while-drilling (MWD) systems are usually used to provide the real-time position and orientation of the bottom hole assembly (BHA) during drilling. However, present MWD systems based on magnetic surveying technology can barely ensure good performance because of magnetic interference phenomena. In this paper, a MWD surveying system based on a fiber optic gyroscope (FOG) was developed to replace the magnetic surveying system. To accommodate the size constraints of downhole drilling, a new design method is adopted. In order to realize long-term, high-precision position and orientation surveying, an integrated surveying algorithm is proposed based on an inertial navigation system (INS) and drilling features. In addition, the FOG-based MWD error model is built and the drilling features are analyzed. The state-space system model and the observation update model of the Kalman filter are built. To validate the availability and utility of the algorithm, a semi-physical simulation was conducted under laboratory conditions. Comparison with traditional algorithms shows that errors are suppressed and that the measurement precision of the proposed algorithm is better than that of the traditional ones. In addition, the proposed method requires far less time than the zero velocity update (ZUPT) method.

  20. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
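
A genetic algorithm search over candidate covariate sets, of the kind the review discusses, can be sketched as follows: each individual is a bit mask of included covariates, fitness is an AIC-like criterion from an ordinary least-squares fit, and selection, crossover, and mutation evolve the population. The data and criterion here are illustrative, not from any PK/PD model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only covariates 0 and 2 truly affect the response.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def fitness(mask):
    """AIC-like criterion (lower is better) for an OLS fit with the
    covariates selected by the 0/1 mask, plus an intercept."""
    cols = np.flatnonzero(mask)
    design = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * (len(cols) + 1)

def ga_select(generations=30, pop_size=20, mut_rate=0.1):
    pop = rng.integers(0, 2, size=(pop_size, p))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]  # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, p)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(p) < mut_rate] ^= 1    # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

best = ga_select()  # should include the truly informative covariates
```

Unlike forward addition/backward elimination, the population explores many covariate combinations in parallel, which is the global-search property the review emphasizes.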

  1. Forecasting of primary energy consumption data in the United States: A comparison between ARIMA and Holt-Winters models

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Ahmar, A. S.

    2017-09-01

    This research compares the ARIMA model and the Holt-Winters model, based on MAE, RSS, MSE, and RMS criteria, for predicting the Primary Energy Consumption Total series in the US. The data range from January 1973 to December 2016 and were processed using R software. The analysis found that the additive Holt-Winters model (MSE: 258350.1) is the most appropriate model for predicting this series. It outperformed the multiplicative Holt-Winters model (MSE: 262260.4) and the seasonal ARIMA model (MSE: 723502.2).
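
The additive Holt-Winters model selected in this study updates a level, a trend, and a seasonal component at each step. A self-contained sketch of the standard recursions (initialisation simplified; this is not the R implementation used by the authors):

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=12):
    """Additive Holt-Winters smoothing and forecasting.
    y: observed series (len >= 2*m), m: season length.
    Level/trend/seasonal states are initialised from the first cycles."""
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t - m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * season[t - m])
    n = len(y)
    # h-step-ahead forecast: level + h*trend + most recent matching seasonal
    return [level + h * trend + season[n - m + (h - 1) % m]
            for h in range(1, horizon + 1)]
```

The multiplicative variant compared in the study replaces the additive seasonal terms with ratios, which suits series whose seasonal swings grow with the level.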

  2. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences: namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
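
The eligibility trace that the proposed model re-weights can be illustrated with tabular TD(lambda) value learning, where the trace determines how far back along the visited states a prediction error propagates. A generic sketch, not the authors' fitted model:

```python
import numpy as np

def td_lambda_values(episodes, n_states, alpha=0.1, gamma=0.95, lam=0.9):
    """Tabular TD(lambda) state-value learning.
    episodes: list of episodes, each a list of (state, reward, next_state).
    The eligibility trace e spreads each prediction error delta over
    recently visited states, weighted by (gamma * lam) per step back."""
    v = np.zeros(n_states)
    for episode in episodes:
        e = np.zeros(n_states)                     # eligibility traces
        for s, r, s_next in episode:
            delta = r + gamma * v[s_next] - v[s]   # TD prediction error
            e *= gamma * lam                       # decay old traces
            e[s] += 1.0                            # mark the visited state
            v += alpha * delta * e                 # credit all eligible states
    return v
```

Setting lam=0 recovers one-step model-free TD learning; the eligibility adjustment idea above amounts to letting the learned environmental model modulate this lambda-like weighting.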

  3. Linking ecophysiological modelling with quantitative genetics to support marker-assisted crop design for improved yields of rice (Oryza sativa) under drought stress.

    PubMed

    Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C

    2014-09-01

    Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait 'total crop nitrogen uptake' (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10-36 % more yield than those based on markers for yield per se. This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. 
The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Nighttime Aerosol Optical Depth Measurements Using a Ground-based Lunar Photometer

    NASA Technical Reports Server (NTRS)

    Berkoff, Tim; Omar, Ali; Haggard, Charles; Pippin, Margaret; Tasaddaq, Aasam; Stone, Tom; Rodriguez, Jon; Slutsker, Ilya; Eck, Tom; Holben, Brent

    2015-01-01

    In recent years it was proposed to combine AERONET network photometer capabilities with a high-precision lunar model used for satellite calibration to retrieve columnar nighttime AODs. The USGS lunar model can continuously provide pre-atmosphere, high-precision lunar irradiance determinations for multiple wavelengths at ground sensor locations. When combined with measured irradiances from a ground-based AERONET photometer, atmospheric column transmissions can be determined, yielding nighttime column aerosol AOD and Angstrom coefficients. Additional demonstrations have utilized this approach to further develop calibration methods and to obtain data in polar regions where extended periods of darkness occur. This new capability enables more complete studies of the diurnal behavior of aerosols, and provides feedback for models and satellite retrievals on the nighttime behavior of aerosols. It is anticipated that the nighttime capability of these sensors will be useful for comparisons with satellite lidars such as CALIOP and CATS, in addition to ground-based lidars in MPLNET, at night, when the signal-to-noise ratio is higher than during the day and more precise AOD comparisons can be made.
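
The transmission-based retrieval described above reduces to a Beer-Lambert inversion: dividing the measured lunar irradiance by the model's pre-atmosphere irradiance gives the column transmission, from which total optical depth and then AOD follow. A simplified sketch assuming a plane-parallel 1/cos airmass and a known molecular (Rayleigh) optical depth, ignoring gas absorption:

```python
import numpy as np

def lunar_aod(measured, model_irradiance, zenith_deg, rayleigh_od):
    """Nighttime AOD retrieval sketch via Beer-Lambert inversion.
    measured: ground photometer lunar irradiance
    model_irradiance: pre-atmosphere irradiance from a lunar model
      (e.g. the USGS model mentioned in the abstract)
    zenith_deg: lunar zenith angle in degrees (plane-parallel airmass)
    rayleigh_od: molecular optical depth to subtract at this wavelength."""
    airmass = 1.0 / np.cos(np.radians(zenith_deg))
    total_od = -np.log(measured / model_irradiance) / airmass
    return total_od - rayleigh_od
```

Operational retrievals use more accurate airmass expressions at large zenith angles and also subtract trace-gas optical depths, but the structure of the calculation is the same.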

  5. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

    PubMed

    Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

    2018-05-12

    Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
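
Label propagation of the general kind used by LPLNS iterates predictions over a normalised similarity graph while staying anchored to the known associations. A generic sketch (LPLNS additionally derives the similarity matrix from linear neighborhood reconstruction, which is omitted here):

```python
import numpy as np

def label_propagation(W, Y, alpha=0.5, iters=100):
    """Generic label propagation over a similarity graph.
    W: (n, n) nonnegative symmetric similarity matrix
    Y: (n, m) initial label/association matrix (known links = 1)
    alpha: trade-off between propagated and initial labels."""
    # Symmetric normalisation of the similarity graph.
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = d_inv_sqrt @ W @ d_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # propagate, anchored to Y
    return F
```

Entries of F for unlabeled nodes rank candidate associations: nodes closer in the graph to known positives receive higher scores, which is the basis for prioritising novel miRNA-disease pairs.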

  6. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
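
The Kasai autocorrelation estimator discussed above recovers the Doppler frequency from the phase of the lag-one autocorrelation of the complex signal. A minimal sketch:

```python
import numpy as np

def kasai_frequency(z, dt):
    """Kasai (lag-one autocorrelation) Doppler frequency estimator.
    z: complex-valued signal samples, dt: sampling interval.
    The estimate is unambiguous only within +/- 1/(2*dt)."""
    r1 = np.sum(z[1:] * np.conj(z[:-1]))   # lag-one autocorrelation
    return np.angle(r1) / (2.0 * np.pi * dt)
```

For a noise-free complex exponential the estimator is exact; the paper's point is that under additive Gaussian noise its variance behaves poorly as dt shrinks, which the likelihood-based (MLE) estimator avoids.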

  7. Testing a level of response to alcohol-based model of heavy drinking and alcohol problems in 1,905 17-year-olds.

    PubMed

    Schuckit, Marc A; Smith, Tom L; Heron, Jon; Hickman, Matthew; Macleod, John; Lewis, Glyn; Davis, John M; Hibbeln, Joseph R; Brown, Sandra; Zuccolo, Luisa; Miller, Laura L; Davey-Smith, George

    2011-10-01

    The low level of response (LR) to alcohol is one of several genetically influenced characteristics that increase the risk for heavy drinking and alcohol problems. Efforts to understand how LR operates through additional life influences have been carried out primarily in modest-sized U.S.-based samples with limited statistical power, raising questions about generalizability and about the importance of components with smaller effects. This study evaluates a full LR-based model of risk in a large sample of adolescents from the United Kingdom. Cross-sectional structural equation models were used for the approximate first half of the age 17 subjects assessed by the Avon Longitudinal Study of Parents and Children, generating data on 1,905 adolescents (mean age 17.8 years, 44.2% boys). LR was measured with the Self-Rating of the Effects of Alcohol Questionnaire, outcomes were based on drinking quantities and problems, and standardized questionnaires were used to evaluate peer substance use, alcohol expectancies, and using alcohol to cope with stress. In this young and large U.K. sample, a low LR related to more adverse alcohol outcomes both directly and through partial mediation by all 3 additional key variables (peer substance use, expectancies, and coping). The models were similar in boys and girls. These results confirm key elements of the hypothesized LR-based model in a large U.K. sample, supporting some generalizability beyond U.S. groups. They also indicate that with enough statistical power, multiple elements contribute to how LR relates to alcohol outcomes and reinforce the applicability of the model to both genders. Copyright © 2011 by the Research Society on Alcoholism.

  8. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
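
    The core combination step described here (a bottom-up saliency map modulated by a top-down bias toward the target's features) can be illustrated outside the GFTART framework. The sketch below is a minimal NumPy illustration, not the proposed model: the saliency and bias measures are simplified placeholders, and only color (not form) is biased.

```python
import numpy as np

def bottom_up_saliency(image):
    """Crude stand-in for feature-relativity saliency: deviation of
    each pixel's intensity from the global mean, normalized to [0, 1]."""
    gray = image.mean(axis=2)
    contrast = np.abs(gray - gray.mean())
    return contrast / (contrast.max() + 1e-9)

def top_down_color_bias(image, target_rgb, sigma=0.2):
    """Gaussian similarity of each pixel's color to the target color."""
    diff = np.linalg.norm(image - np.asarray(target_rgb), axis=2)
    return np.exp(-(diff / sigma) ** 2)

def attention_map(image, target_rgb):
    # Multiplicative combination: bottom-up candidates are boosted
    # only where they also match the target's color representation.
    return bottom_up_saliency(image) * top_down_color_bias(image, target_rgb)

# Toy scene: dark background with a red and a green patch.
img = np.zeros((32, 32, 3))
img[5:10, 5:10] = [1.0, 0.0, 0.0]    # red patch
img[20:25, 20:25] = [0.0, 1.0, 0.0]  # green patch

amap = attention_map(img, target_rgb=[1.0, 0.0, 0.0])
peak = np.unravel_index(amap.argmax(), amap.shape)
# the peak falls inside the red patch, even though both patches are salient
```

    Both patches are equally salient bottom-up; the top-down color bias is what singles out the red one, mirroring the role of the bias signal in the full model.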

  9. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of using EPR-based material models are that it provides a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within a unified environment of an EPR model); it does not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified. As the model is trained directly from experimental data therefore, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive model is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, and therefore, the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
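
    EPR couples a genetic search over candidate polynomial structures with least-squares estimation of the coefficients. The sketch below shows only the least-squares half on a fixed polynomial basis, fitted to synthetic stress-strain data; the data-generating curve, noise level, and function names are all hypothetical, not taken from the paper.

```python
import numpy as np

# Synthetic "triaxial test" data: stress as a saturating function of
# strain (an invented ground truth used only to generate samples).
rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.1, 50)
stress = 200.0 * strain / (0.02 + strain) + rng.normal(0, 0.5, strain.size)

def fit_polynomial_model(x, y, degree):
    # Design matrix with columns [1, x, x^2, ..., x^degree];
    # EPR would search over which terms to include, here it is fixed.
    X = np.vander(x, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict(coeffs, x):
    return np.vander(x, len(coeffs), increasing=True) @ coeffs

coeffs = fit_polynomial_model(strain, stress, degree=3)
rmse = np.sqrt(np.mean((predict(coeffs, strain) - stress) ** 2))
```

    Once trained, such a model maps directly from experimental data to a closed-form expression, which is the property that makes EPR-based constitutive models easy to embed in FE analysis.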

  10. Development of a resource allocation formula for substance misuse treatment services.

    PubMed

    Jones, Andrew; Hayhurst, Karen P; Whittaker, Will; Mason, Thomas; Sutton, Matt

    2017-11-23

    Funding for substance misuse services comprises one-third of Public Health spend in England. The current allocation formula contains adjustments for actual activity, performance and need, proxied by the Standardized Mortality Ratio for under-75s (SMR < 75). Additional measures, such as deprivation, may better identify differential service need. We developed an age-standardized and an age-stratified model (over-18s, under-18s), with the outcome of expected/actual cost at postal sector/Local Authority level. A third, person-based model incorporated predictors of costs at the individual level. Each model incorporated both needs and supply variables, with the relative effects of their inclusion assessed. Mean estimated annual cost (2013/14) per English Local Authority area was £5 032 802 (sd: 3 951 158). Costs for drug misuse treatment represented the majority (83%) of costs. Models achieved adjusted R-squared values of 0.522 (age-standardized), 0.533 (age-stratified over-18s), 0.232 (age-stratified under-18s) and 0.470 (person-based). Improvements can be made to the existing resource allocation formulae to better reflect population need. The person-based model permits inclusion of a range of needs variables, in addition to strong predictors of cost based on the receipt of treatment in the previous year. Adoption of this revised person-based formula for substance misuse would shift resources towards more deprived areas. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  11. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
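
    The two classifier designs compared in the study can be sketched with scikit-learn. In this illustrative sketch, Gaussian clusters stand in for the Gabor-texture and color-histogram features, and the material names are examples from the dataset description; none of the numbers come from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
materials = ["brick", "grass", "wood"]
# Toy stand-ins for texture/color features: one Gaussian cluster per material.
centers = rng.normal(0, 3, size=(3, 4))
X = np.vstack([c + rng.normal(0, 0.5, size=(40, 4)) for c in centers])
y = np.repeat(np.arange(3), 40)

# Design 1: a single multi-class SVM (exactly one label per image).
multi = SVC(kernel="rbf").fit(X, y)

# Design 2: one binary SVM per material (multiple labels possible,
# which is how the second model gathers more information per image).
binaries = {m: SVC(kernel="rbf").fit(X, y == i)
            for i, m in enumerate(materials)}

sample = centers[0] + rng.normal(0, 0.5, size=4)
single_label = materials[multi.predict([sample])[0]]
multi_labels = [m for m, clf in binaries.items() if clf.predict([sample])[0]]
```

    The per-material binary classifiers can each fire independently, so an image containing several materials receives several labels, whereas the multi-class SVM is forced to choose one.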

  12. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    PubMed

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compound by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yield large model error due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on assuming that only absorbance noise exists; while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategy, two optimization algorithms (limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and Levenberg-Marquardt (LM) algorithm) are combined with TLSP and then two different TLSP versions (termed as TLSP-LBFGS and TLSP-LM) are formed. The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
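
    The distinction between least squares (noise in absorbance only) and total least squares (noise in both absorbance and concentration) is easiest to see in the linear case, where TLS has a closed-form SVD solution. This is a simpler setting than the paper's polynomial TLSP solved with L-BFGS or LM; all data below are synthetic and the noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
true_slope = 1.8
conc_true = rng.uniform(0.0, 1.0, n)                       # true concentrations
conc = conc_true + rng.normal(0, 0.05, n)                  # measured, noisy "x"
absorb = true_slope * conc_true + rng.normal(0, 0.05, n)   # noisy "y"

# Center both variables so each fit estimates the slope only.
xc = conc - conc.mean()
yc = absorb - absorb.mean()

# Ordinary least squares: assumes noise in absorbance (y) only, so
# noise in concentration (x) attenuates the estimated slope.
ols_slope = (xc @ yc) / (xc @ xc)

# Total least squares: noise in both variables. The fitted line is the
# direction of least variance of the stacked data, i.e. the right
# singular vector belonging to the smallest singular value.
A = np.column_stack([xc, yc])
_, _, Vt = np.linalg.svd(A, full_matrices=False)
a, b = Vt[-1]                  # normal vector of the fitted line
tls_slope = -a / b             # corrects the attenuation bias upward
```

    For positively correlated data, the TLS slope always lies at or above the OLS slope, which is why errors-in-variables settings like the concentration-noise case favor TLS.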

  13. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    PubMed Central

    2012-01-01

    Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. 
Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
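
    The flux balance analysis that the bundled MATLAB scripts perform is, at its core, a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The toy two-metabolite network below is an invented illustration, not Yeast 5, sketched in Python rather than MATLAB.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-metabolite, three-reaction network (not Yeast 5):
#   R1: -> A   (uptake, capped at 10)
#   R2: A -> B
#   R3: B ->   (objective, e.g. a biomass drain)
# Rows of S are metabolites A and B; columns are R1-R3.
S = np.array([
    [1.0, -1.0,  0.0],   # A
    [0.0,  1.0, -1.0],   # B
])
bounds = [(0, 10), (0, 1000), (0, 1000)]

# FBA maximizes the objective flux subject to steady state S v = 0;
# linprog minimizes, so the objective coefficient on R3 is -1.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
              bounds=bounds, method="highs")
fluxes = res.x
# fluxes ≈ [10, 10, 10]: the uptake cap limits the achievable growth
```

    Simulating anaerobic versus aerobic growth, as the Yeast 5 scripts do, amounts to changing the bounds on the relevant exchange reactions and re-solving the same linear program.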

  14. Description of historical crop calendar data bases developed to support foreign commodity production forecasting project experiments

    NASA Technical Reports Server (NTRS)

    West, W. L., III (Principal Investigator)

    1981-01-01

    The content, format, and storage of data bases developed for the Foreign Commodity Production Forecasting project and used to produce normal crop calendars are described. In addition, the data bases may be used for agricultural meteorology, modeling of stage sequences and planting dates, and as indicators of possible drought and famine.

  15. Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services

    USDA-ARS?s Scientific Manuscript database

    The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results additional software has been de...

  16. Features of Wisdom: Prototypical Attributes of Wise People.

    ERIC Educational Resources Information Center

    Maciel, Anna G.; And Others

    Features of everyday conceptions of a "wise person" were examined, based on a model of wisdom-related knowledge (Baltes & Smith, 1990). The goal was to examine whether the psychological theory underlying this model is consistent with lay conceptions of wisdom, and whether everyday conceptions contain additional features not contained…

  17. Phase space effects on fast ion distribution function modeling in tokamaks

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.

    2016-05-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  18. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE Data Explorer

    White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-06-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  19. Additional strange hadrons from QCD thermodynamics and strangeness freezeout in heavy ion collisions.

    PubMed

    Bazavov, A; Ding, H-T; Hegde, P; Kaczmarek, O; Karsch, F; Laermann, E; Maezawa, Y; Mukherjee, Swagato; Ohno, H; Petreczky, P; Schmidt, C; Sharma, S; Soeldner, W; Wagner, M

    2014-08-15

    We compare lattice QCD results for appropriate combinations of net strangeness fluctuations and their correlations with net baryon number fluctuations with predictions from two hadron resonance gas (HRG) models having different strange hadron content. The conventionally used HRG model based on experimentally established strange hadrons fails to describe the lattice QCD results in the hadronic phase close to the QCD crossover. Supplementing the conventional HRG with additional, experimentally uncharted strange hadrons predicted by quark model calculations and observed in lattice QCD spectrum calculations leads to good descriptions of strange hadron thermodynamics below the QCD crossover. We show that the thermodynamic presence of these additional states gets imprinted in the yields of the ground-state strange hadrons leading to a systematic 5-8 MeV decrease of the chemical freeze-out temperatures of ground-state strange baryons.

  20. Additively Manufactured IN718 Components with Wirelessly Powered and Interrogated Embedded Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attridge, Paul; Bajekal, Sanjay; Klecka, Michael

    A methodology is described for embedding commercial-off-the-shelf sensors together with wireless communication and power circuit elements in direct laser metal sintered, additively manufactured components. Physics-based models of the additive manufacturing processes and sensor/wireless-level performance models guided the design and embedment processes. A combination of cold spray deposition and laser engineered net shaping was used to fashion the transmitter/receiving elements and embed the sensors, thereby providing environmental protection and component robustness/survivability for harsh conditions. By design, this complement of analog and digital sensors was wirelessly powered and interrogated using a health and utilization monitoring system, enabling real-time, in situ prognostics and diagnostics.

  1. 77 FR 47854 - Proposed Information Collection Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-10

    ... Organizations, and Urban Indian Organizations. Competitive grants to non-profit organizations to provide home... consultation with evidence-based home visiting model developers and selected grantees and further refined based... addition to 56 jurisdictions and non-profit organizations, it is estimated that up to 25 Tribal MIECHV...

  2. Noise performance limits of advanced x-ray imagers employing poly-Si-based active pixel architectures

    NASA Astrophysics Data System (ADS)

    Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao

    2011-03-01

    A decade after the clinical introduction of active matrix, flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise. From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual, representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise), or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large-area AP imagers, with signal levels representative of those generated at fluoroscopic exposures, is described, and initial results are reported.

  3. Finite element simulation and experimental verification of ultrasonic non-destructive inspection of defects in additively manufactured materials

    NASA Astrophysics Data System (ADS)

    Taheri, H.; Koester, L.; Bigelow, T.; Bond, L. J.

    2018-04-01

    Industrial applications of additively manufactured components are increasing quickly. Adequate quality control of the parts is necessary in ensuring safety when using these materials. Base material properties, surface conditions, as well as location and size of defects are some of the main targets for nondestructive evaluation of additively manufactured parts, and the problem of adequate characterization is compounded given the challenges of complex part geometry. Numerical modeling can allow the interplay of the various factors to be studied, which can lead to improved measurement design. This paper presents a finite element simulation, verified by experimental results, of ultrasonic waves scattering from flat-bottom holes (FBHs) in additively manufactured materials. A focused-beam immersion ultrasound transducer was used in both the simulations and the experiments on the additively manufactured samples. The samples were 17-4 PH stainless steel, made by laser sintering in a powder bed.

  4. Solvent-free nanofluid with three structure models based on the composition of a MWCNT/SiO2 core and its adsorption capacity of CO2

    NASA Astrophysics Data System (ADS)

    Yang, R. L.; Zheng, Y. P.; Wang, T. Y.; Li, P. P.; Wang, Y. D.; Yao, D. D.; Chen, L. X.

    2018-01-01

    A series of core/shell nanoparticle organic/inorganic hybrid materials (NOHMs) with different weight ratios of two components, consisting of multi-walled carbon nanotubes (MWCNTs) and silicon dioxide (SiO2) as the core were synthesized. The NOHMs display a liquid-like state in the absence of solvent at room temperature. Five NOHMs were categorized into three kinds of structure states based on different weight ratio of two components in the core, named the power strip model, the critical model and the collapse model. The capture capacities of these NOHMs for CO2 were investigated at 298 K and CO2 pressures ranging from 0 to 5 MPa. Compared with NOHMs having a neat MWCNT core, it was revealed that NOHMs with the power strip model show better adsorption capacity toward CO2 due to its lower viscosity and more reactive groups that can react with CO2. In addition, the capture capacities of NOHMs with the critical model were relatively worse than the neat MWCNT-based NOHM. The result is attributed to the aggregation of SiO2 in these samples, which may cause the consumption and hindrance of reactive groups. However, the capture capacity of NOHMs with the collapse model was the worst of all the NOHMs, owing to its lowest content of reactive groups and hollow structure in MWCNTs. In addition, they presented non-interference of MWCNTs and SiO2 without aggregation state.

  5. Health and economic impact of PHiD-CV in Canada and the UK: a Markov modelling exercise.

    PubMed

    Knerer, Gerhart; Ismaila, Afisi; Pearce, David

    2012-01-01

    The spectrum of diseases caused by Streptococcus pneumoniae and non-typeable Haemophilus influenzae (NTHi) represents a large burden on healthcare systems around the world. Meningitis, bacteraemia, community-acquired pneumonia (CAP), and acute otitis media (AOM) are vaccine-preventable infectious diseases that can have severe consequences. The health economic model presented here is intended to estimate the clinical and economic impact of vaccinating birth cohorts in Canada and the UK with the 10-valent, pneumococcal non-typeable Haemophilus influenzae protein D conjugate vaccine (PHiD-CV) compared with the newly licensed 13-valent pneumococcal conjugate vaccine (PCV-13). The model described herein is a Markov cohort model built to simulate the epidemiological burden of pneumococcal- and NTHi-related diseases within birth cohorts in the UK and Canada. Base-case assumptions include estimates of vaccine efficacy and NTHi infection rates that are based on published literature. The model predicts that the two vaccines will provide a broadly similar impact on all-cause invasive disease and CAP under base-case assumptions. However, PHiD-CV is expected to provide a substantially greater reduction in AOM compared with PCV-13, offering additional savings of Canadian $9.0 million and £4.9 million in discounted direct medical costs in Canada and the UK, respectively. The main limitations of the study are the difficulties in modelling indirect vaccine effects (herd effect and serotype replacement), the absence of PHiD-CV- and PCV-13-specific efficacy data and a lack of comprehensive NTHi surveillance data. Additional limitations relate to the fact that the transmission dynamics of pneumococcal serotypes have not been modelled, nor has antibiotic resistance been accounted for in this paper. 
This cost-effectiveness analysis suggests that, in Canada and the UK, PHiD-CV's potential to protect against NTHi infections could provide a greater impact on overall disease burden than the additional serotypes contained in PCV-13.
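
    A Markov cohort model of the kind described advances a cohort's state-occupancy vector through a transition matrix each cycle, accumulating discounted costs along the way. The sketch below uses invented states, probabilities, costs, and horizon purely for illustration; none of these values come from the study.

```python
import numpy as np

# Hypothetical 3-state annual-cycle cohort model.
states = ["Healthy", "AOM", "Dead"]
P = np.array([
    [0.89, 0.10, 0.01],   # from Healthy
    [0.94, 0.05, 0.01],   # from AOM (most episodes resolve)
    [0.00, 0.00, 1.00],   # Dead is absorbing
])
cost_per_cycle = np.array([0.0, 120.0, 0.0])  # direct cost of an AOM year
discount = 0.035

cohort = np.array([1.0, 0.0, 0.0])   # whole birth cohort starts Healthy
total_cost = 0.0
for year in range(10):
    total_cost += (cohort @ cost_per_cycle) / (1 + discount) ** year
    cohort = cohort @ P              # advance one Markov cycle
# `cohort` now holds state occupancy after 10 cycles; a vaccination
# scenario would rerun this with a reduced Healthy -> AOM probability
# and compare the discounted totals.
```

    Comparing two vaccines in this framework amounts to running the same cohort through two transition matrices (one per efficacy profile) and differencing the discounted cost and outcome streams.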

  6. Revisit of the Saito-Dresselhaus-Dresselhaus C2 ingestion model: on the mechanism of atomic-carbon-participated fullerene growth.

    PubMed

    Wang, Wei-Wei; Dang, Jing-Shuang; Zhao, Xiang; Nagase, Shigeru

    2017-11-09

    We introduce a mechanistic study based on a controversial fullerene bottom-up growth model proposed by R. Saito, G. Dresselhaus, and M. S. Dresselhaus. The so-called SDD C2 addition model has been dismissed as chemically inadmissible but here we prove that it is feasible via successive atomic-carbon-participated addition and migration reactions. Kinetic calculations on the formation of isolated pentagon rule (IPR)-obeying C70 and Y3N@C80 are carried out by employing the SDD model for the first time. A stepwise mechanism is proposed with a considerably low barrier of ca. 2 eV, which is about 3 eV lower than a conventional isomerization-containing fullerene growth pathway.

  7. Extending data worth methods to select multiple observations targeting specific hydrological predictions of interest

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, Troels N.; Ferré, Ty P. A.

    2016-04-01

    Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data/information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) for reducing the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
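
    Linear data-worth analysis of this kind typically propagates a prior parameter covariance through observation and prediction sensitivities, then ranks candidate observation combinations by the forecast variance that remains after collecting them. A minimal sketch under invented sensitivities and variances (not the authors' implementation):

```python
import numpy as np
from itertools import combinations

# Hypothetical linearized setup: 3 uncertain parameters with prior
# covariance C, 3 candidate observations with Jacobian rows J_obs,
# and one forecast of interest with sensitivity j_pred. All numbers
# are invented for illustration.
C = np.diag([1.0, 4.0, 2.0])
J_obs = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.5],
                  [0.3, 0.8, 0.0]])
j_pred = np.array([0.2, 1.0, 0.1])
r = 0.1                                  # observation error variance

def posterior_cov(rows):
    """Linear-Bayes parameter covariance after collecting the
    observations whose Jacobian rows are listed in `rows`."""
    J = J_obs[list(rows)]
    R = r * np.eye(len(rows))
    K = C @ J.T @ np.linalg.inv(J @ C @ J.T + R)
    return C - K @ J @ C

def forecast_var(cov):
    return j_pred @ cov @ j_pred

prior_var = forecast_var(C)
# Rank every pair of candidate observations by the forecast variance
# remaining after collecting it; keep the best combination.
best = min(combinations(range(3), 2),
           key=lambda pair: forecast_var(posterior_cov(pair)))
```

    Evaluating all combinations simultaneously, rather than one observation at a time, is what lets the analysis account for redundancy between observations that constrain the same parameters.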

  8. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved).
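
    The trade-off the study reports (a handful of readable rules versus a 20-predictor logistic regression) can be sketched with scikit-learn. A depth-2 decision tree stands in for the rule set here; RuleFit itself derives rules differently, and the dataset below is synthetic rather than the Penninx et al. data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a clinical dataset: 20 predictors, only a
# few of them informative (mirroring the 20-predictor setting).
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Actuarial baseline: logistic regression on all 20 predictors.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Rule-based alternative: a depth-2 tree evaluates at most 2 cues per
# case and reads as a small set of if-then rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)

rules = export_text(tree)            # human-readable decision rules
accuracy_gap = logit.score(X_te, y_te) - tree.score(X_te, y_te)
```

    Printing `rules` yields a fast-and-frugal-style tree a clinician can apply by hand, while `accuracy_gap` quantifies how much (if anything) the full 20-predictor model buys over it.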

  9. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  10. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when all measurements are not available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  11. Implementing Set Based Design into Department of Defense Acquisition

    DTIC Science & Technology

    2016-12-01

    challenges for the DOD. This report identifies the original SBD principles and characteristics based on Toyota Motor Corporation's Set Based Concurrent Engineering Model. Additionally, the team reviewed DOD case studies that implemented SBD. The SBD principles, along with the common themes from the case studies, address these perennial challenges for the DOD.

  12. Performance Modeling of an Airborne Raman Water Vapor Lidar

    NASA Technical Reports Server (NTRS)

    Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.

    2000-01-01

    A sophisticated Raman lidar numerical model has been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, it was used to simulate the water vapor measurement capability of an airborne Raman lidar under both day- and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess resolution comparable to an existing airborne differential absorption water vapor lidar, while the nighttime measurements have higher resolution. In addition, a Raman lidar is capable of measurements not possible using a differential absorption system.

  13. Approximations to camera sensor noise

    NASA Astrophysics Data System (ADS)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is commonly used to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Although additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models for approximating the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN, and we suggest further modifications to the Poisson model that may improve it.
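
    A small simulation (illustrative only; the parameters are not from the paper) shows where the two approximations agree and where they part ways: SD-AWGN can match the Poisson mean and variance exactly, but not its skewness, and the mismatch grows as the photon count drops:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 5.0                 # mean photo-electron count (low-light regime)
n = 200_000

poisson = rng.poisson(signal, n).astype(float)
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal), n)

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

# Both approximations reproduce the mean and variance of the signal...
print(poisson.mean(), sd_awgn.mean())     # both ~ 5.0
print(poisson.var(), sd_awgn.var())       # both ~ 5.0

# ...but only the Poisson sample carries the skewness (~ 1/sqrt(signal))
# that a symmetric Gaussian approximation cannot represent.
print(skew(poisson), skew(sd_awgn))       # ~ 0.45 vs ~ 0.0
```

    At high signal levels the Poisson skewness vanishes and the two models become hard to distinguish, which is consistent with the SD-AWGN approximation being questioned mainly in the low-count regime.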

  14. Atomistic modeling and simulation of the role of Be and Bi in Al diffusion in U-Mo fuel

    NASA Astrophysics Data System (ADS)

    Hofman, G. L.; Bozzolo, G.; Mosca, H. O.; Yacout, A. M.

    2011-07-01

    Within the RERTR program, previous experimental and modeling studies identified Si as the alloying addition to the Al cladding responsible for inhibiting Al interdiffusion in the UMo fuel. However, difficulties with reprocessing have rendered this choice inappropriate, leading to the need to study alternative elements. In this work, we discuss the results of an atomistic modeling effort which allows for the systematic study of several possible alloying additions. Based on the behavior observed in the phase diagrams, beryllium or bismuth additions suggest themselves as possible options to replace Si. The results of temperature-dependent simulations using the Bozzolo-Ferrante-Smith (BFS) method for the energetics for varying concentrations of either element are shown, indicating that Be could have a substantial effect in stopping Al interdiffusion, while Bi does not. Details of the calculations and the dependence of the role of each alloying addition as a function of temperature and concentration (of beryllium or bismuth in Al) are shown.

  15. Some considerations concerning the theory of combined toxicity: a case study of subchronic experimental intoxication with cadmium and lead.

    PubMed

    Varaksin, Anatoly N; Katsnelson, Boris A; Panov, Vladimir G; Privalova, Larisa I; Kireyeva, Ekaterina P; Valamina, Irene E; Beresneva, Olga Yu

    2014-02-01

    Rats were exposed intraperitoneally (3 times a week, up to 20 injections) to cadmium and lead salts in doses equivalent to 0.05 LD50, either separately or combined in the same or halved doses. Toxic effects were assessed by more than 40 functional, biochemical and morphometric indices. We analysed the results with the aim of determining the type of combined toxicity, using either common-sense considerations based on descriptive statistics or two mathematical models based (a) on ANOVA and (b) on the Mathematical Theory of Experimental Design, which correspond, respectively, to the widely recognised paradigms of effect additivity and dose additivity. These approaches led us unanimously to the following conclusions: (1) the above paradigms are virtually interchangeable and should be regarded as different methods of modelling combined toxicity rather than as reflecting fundamentally differing processes; (2) within both models there exist not merely the three traditionally recognised types of combined toxicity (additivity, subadditivity and superadditivity) but at least 10 variants of it, depending on exactly which effect is considered and on its level, as well as on the dose levels and their ratio. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Mycotoxins co-contamination: Methodological aspects and biological relevance of combined toxicity studies.

    PubMed

    Alassane-Kpembi, Imourana; Schatzmayr, Gerd; Taranu, Ionelia; Marin, Daniela; Puel, Olivier; Oswald, Isabelle Paule

    2017-11-02

    Mycotoxins are secondary fungal metabolites produced mainly by Aspergillus, Penicillium, and Fusarium. As evidenced by large-scale surveys, humans and animals are simultaneously exposed to several mycotoxins. Simultaneous exposure could result in synergistic, additive or antagonistic effects. However, most toxicity studies addressed the effects of mycotoxins separately. We present the experimental designs and we discuss the conclusions drawn from in vitro experiments exploring toxicological interactions of mycotoxins. We report more than 80 publications related to mycotoxin interactions. The studies explored combinations involving the regulated groups of mycotoxins, especially aflatoxins, ochratoxins, fumonisins, zearalenone and trichothecenes, but also the "emerging" mycotoxins beauvericin and enniatins. Over 50 publications are based on the arithmetic model of additivity. Few studies used the factorial designs or the theoretical biology-based models of additivity. The latter approaches are gaining increased attention. These analyses allow determination of the type of interaction and, optionally, its magnitude. The type of interaction reported for mycotoxin combinations depended on several factors, in particular cell models and the tested dose ranges. However, synergy among Fusarium toxins was highlighted in several studies. This review indicates that well-addressed in vitro studies remain valuable tools for the screening of interactive potential in mycotoxin mixtures.

  17. Event-based design tool for construction site erosion and sediment controls

    NASA Astrophysics Data System (ADS)

    Trenouth, William R.; Gharabaghi, Bahram

    2015-09-01

    This paper provides additional discussion surrounding the novel event-based soil loss models developed by Trenouth and Gharabaghi (2015) for the design of erosion and sediment controls (ESCs) for various phases of construction - from pre-development to post-development conditions. The datasets for the study were obtained from three Ontario sites - Greensborough, Cookstown, and Alcona - in addition to datasets mined from the literature for three additional sites - Treynor, Iowa, Coshocton, Ohio and Cordoba, Spain. Model performances were evaluated for each of the study sites, and quantified using commonly-reported statistics. This work is nested within a broader conceptual framework, which includes the estimation of ambient receiving water quality, the prediction of event mean runoff quality for a given design storm, and the calculation of the required level of protection using adequate ESCs to meet receiving water quality guidelines. These models allow design engineers and regulatory agencies to assess the potential risk of ecological damage to receiving waters due to inadequate soil erosion and sediment control practices using dynamic scenario forecasting when considering rapidly changing land use conditions during various phases of construction, typically for a 2- or 5-year design storm return period.

  18. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
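
    A minimal, single-process sketch of the tuple space operations underlying such an architecture is shown below (hypothetical code; in a real LINDA system rd and in block until a matching tuple appears, and the space would sit on the database layer described above):

```python
class TupleSpace:
    """Toy LINDA-style tuple space: agents coordinate by writing tuples
    (out), reading them (rd), and removing them (in_), matching against
    templates in which None acts as a wildcard."""

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, tup, template):
        return len(tup) == len(template) and all(
            t is None or t == v for v, t in zip(tup, template))

    def rd(self, template):
        """Non-destructive read of the first matching tuple (or None)."""
        for tup in self._tuples:
            if self._match(tup, template):
                return tup
        return None

    def in_(self, template):          # 'in' is a Python keyword
        """Destructive read: remove and return the first match."""
        tup = self.rd(template)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out(("sensor", "bus-7", 121.5))
ts.out(("sensor", "bus-9", 118.2))
print(ts.rd(("sensor", "bus-7", None)))   # ("sensor", "bus-7", 121.5)
print(ts.in_(("sensor", None, None)))     # removes the first matching tuple
```

    Because agents only ever see the shared space, not each other, the same knowledge agents can run locally or be distributed across nodes, which is the decoupling the architecture exploits.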

  19. Additive Manufacturing of IN100 Superalloy Through Scanning Laser Epitaxy for Turbine Engine Hot-Section Component Repair: Process Development, Modeling, Microstructural Characterization, and Process Control

    NASA Astrophysics Data System (ADS)

    Acharya, Ranadip; Das, Suman

    2015-09-01

    This article describes additive manufacturing (AM) of IN100, a high gamma-prime nickel-based superalloy, through scanning laser epitaxy (SLE), aimed at the creation of thick deposits onto like-chemistry substrates for enabling repair of turbine engine hot-section components. SLE is a metal powder bed-based laser AM technology developed for nickel-base superalloys with equiaxed, directionally solidified, and single-crystal microstructural morphologies. Here, we combine process modeling, statistical design-of-experiments (DoE), and microstructural characterization to demonstrate fully metallurgically bonded, crack-free and dense deposits exceeding 1000 μm of SLE-processed IN100 powder onto IN100 cast substrates produced in a single pass. A combined thermal-fluid flow-solidification model of the SLE process complements DoE-based process development. A customized quantitative metallography technique analyzes digital cross-sectional micrographs and extracts various microstructural parameters, enabling process model validation and process parameter optimization. Microindentation measurements show an increase in the hardness by 10 pct in the deposit region compared to the cast substrate due to microstructural refinement. The results illustrate one of the very few successes reported for the crack-free deposition of IN100, a notoriously "non-weldable" hot-section alloy, thus establishing the potential of SLE as an AM method suitable for hot-section component repair and for future new-make components in high gamma-prime containing crack-prone nickel-based superalloys.

  20. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  1. [Development method of healthcare information system integration based on business collaboration model].

    PubMed

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in integration project usually communicate by free-format document, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirement and automatically transforming it to executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirement and transform it to integration configuration. In addition, an integration case in radiology scenario was used to verify the method.

  2. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  3. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  4. Formal verification of software-based medical devices considering medical guidelines.

    PubMed

    Daw, Zamira; Cleaveland, Rance; Vetter, Marcus

    2014-01-01

    Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements, without considering the interaction that these elements have with other devices or with the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model checker. For this purpose, a formalization approach for the UML models using timed automata (TA) is presented. A set of requirements is verified by the proposed approach for a navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition, an open-source Eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.

  5. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers

    PubMed Central

    Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.

    2018-01-01

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. PMID:29549092

  6. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers.

    PubMed

    Jiang, Yong; Schmidt, Renate H; Reif, Jochen C

    2018-05-04

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. Copyright © 2018 Jiang et al.

  7. A review of the calculation procedure for critical acid loads for terrestrial ecosystems.

    PubMed

    van der Salm, C; de Vries, W

    2001-04-23

    Target loads for acid deposition in the Netherlands, as formulated in the Dutch environmental policy plan, are based on critical load calculations at the end of the 1980s. Since then knowledge on the effect of acid deposition on terrestrial ecosystems has substantially increased. In the early 1990s a simple mass balance model was developed to calculate critical loads. This model was evaluated and the methods were adapted to represent the current knowledge. The main changes in the model are the use of actual empirical relationships between Al and H concentrations in the soil solution, the addition of a constant base saturation as a second criterion for soil quality and the use of tree species-dependent critical Al/base cation (BC) ratios for Dutch circumstances. The changes in the model parameterisation and in the Al/BC criteria led to considerably (50%) higher critical loads for root damage. The addition of a second criterion in the critical load calculations for soil quality caused a decrease in the critical loads for soils with a medium to high base saturation such as loess and clay soils. The adaptation hardly affected the median critical load for soil quality in the Netherlands, since only 15% of the Dutch forests occur on these soils. On a regional scale, however, critical loads were (much) lower in areas where those soils are located.

  8. Turbulent Chemical Interaction Models in NCC: Comparison

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Liu, Nan-Suey

    2006-01-01

    The performance of a scalar PDF hydrogen-air combustion model in predicting a complex reacting flow is evaluated. In addition the results are compared to those obtained by running the same case with the so-called laminar chemistry model and also a new model based on the concept of mapping partially stirred reactor data onto perfectly stirred reactor data. The results show that the scalar PDF model produces significantly different results from the other two models, and at a significantly higher computational cost.

  9. Prediction of global and local model quality in CASP8 using the ModFOLD server.

    PubMed

    McGuffin, Liam J

    2009-01-01

    The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of the global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.

  10. Modeling of delays in PKPD: classical approaches and a tutorial for delay differential equations.

    PubMed

    Koch, Gilbert; Krzyzanski, Wojciech; Pérez-Ruixo, Juan Jose; Schropp, Johannes

    2014-08-01

    In pharmacokinetics/pharmacodynamics (PKPD), the measured response is often delayed relative to drug administration, individuals in a population have a certain lifespan until they mature, or a change in biomarkers does not immediately affect the primary endpoint. The classical approach in PKPD is to apply transit compartment models (TCMs) based on ordinary differential equations to handle such delays. However, an alternative approach to dealing with delays is delay differential equations (DDEs). DDEs feature additional flexibility and properties, realize more complex dynamics, and can be used in combination with TCMs. We introduce several delay-based PKPD models and investigate mathematical properties of general DDE-based models, which serve as subunits for building larger PKPD models. Finally, we review current PKPD software with respect to the implementation of DDEs for PKPD analysis.
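
    As a minimal illustration of the DDE approach (the model structure and parameters below are invented for illustration, not taken from the tutorial), a fixed-delay turnover model can be integrated by forward Euler with a history buffer, the simplest form of the method of steps:

```python
import math

def simulate(kin=10.0, kout=1.0, tau=2.0, dose=100.0, ke=0.5,
             ec50=25.0, dt=0.001, t_end=12.0):
    """Indirect-response model whose production is stimulated by the
    drug concentration as it was tau hours earlier:
        dR/dt = kin * (1 + C(t - tau)/(ec50 + C(t - tau))) - kout * R(t)
    with C(t) = dose * exp(-ke * t) and history C(t) = 0 for t < 0."""
    steps = int(t_end / dt)
    lag = int(tau / dt)
    C = [dose * math.exp(-ke * i * dt) for i in range(steps)]
    R = [kin / kout]                      # baseline response (dR/dt = 0)
    for i in range(steps):
        c_del = C[i - lag] if i >= lag else 0.0   # no drug before t = 0
        stim = 1.0 + c_del / (ec50 + c_del)       # Emax-type stimulation
        R.append(R[-1] + dt * (kin * stim - kout * R[-1]))
    return R

R = simulate()
peak_t = max(range(len(R)), key=R.__getitem__) * 0.001   # dt = 0.001
```

    Because the stimulation term reads the concentration tau hours in the past, the response stays exactly at baseline until t = tau and peaks well after the concentration does, the delayed behavior that TCMs approximate with chains of compartments.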

  11. Reducing ambulance response times using discrete event simulation.

    PubMed

    Wei Lam, Sean Shao; Zhang, Zhong Cheng; Oh, Hong Choon; Ng, Yih Ying; Wah, Win; Hock Ong, Marcus Eng

    2014-01-01

    The objectives of this study are to develop a discrete-event simulation (DES) model for the Singapore Emergency Medical Services (EMS), and to demonstrate the utility of this DES model for the evaluation of different policy alternatives to improve ambulance response times. A DES model was developed based on retrospective emergency call data over a continuous 6-month period in Singapore. The main outcome measure is the distribution of response times. The secondary outcome measure is ambulance utilization levels based on unit hour utilization (UHU) ratios. The DES model was used to evaluate different policy options in order to improve the response times, while maintaining reasonable fleet utilization. Three policy alternatives looking at the reallocation of ambulances, the addition of new ambulances, and alternative dispatch policies were evaluated. Modifications of dispatch policy combined with the reallocation of existing ambulances were able to achieve response time performance equivalent to that of adding 10 ambulances. The median (90th percentile) response time was 7.08 minutes (12.69 minutes). Overall, this combined strategy managed to narrow the gap between the ideal and existing response time distribution by 11-13%. Furthermore, the median UHU under this combined strategy was 0.324 with an interquartile range (IQR) of 0.047 versus a median utilization of 0.285 (IQR of 0.051) resulting from the introduction of additional ambulances. Response times were shown to be improved via a more effective reallocation of ambulances and dispatch policy. More importantly, the response time improvements were achieved without a reduction in the utilization levels and additional costs associated with the addition of ambulances. We demonstrated the effective use of DES as a versatile platform to model the dynamic system complexities of Singapore's national EMS systems for the evaluation of operational strategies to improve ambulance response times.
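
    The flavor of such a simulation can be conveyed by a stripped-down dispatch loop (all arrival rates, travel times and turnaround times below are invented; the study's DES was fitted to six months of Singapore call data and is far richer):

```python
import random

random.seed(42)

def simulate(n_ambulances=5, n_calls=2000, mean_interarrival=6.0,
             mean_turnaround=12.0):
    """Serve calls in arrival order; each call waits for the next idle
    unit, then incurs a random travel time. All times are in minutes."""
    free_at = [0.0] * n_ambulances        # time each unit becomes idle
    t, response_times = 0.0, []
    for _ in range(n_calls):
        t += random.expovariate(1.0 / mean_interarrival)
        unit = min(range(n_ambulances), key=lambda u: free_at[u])
        dispatch = max(t, free_at[unit])  # queue if all units are busy
        travel = random.uniform(2.0, 10.0)
        response_times.append(dispatch - t + travel)
        # unit stays busy for travel plus on-scene/hospital turnaround
        free_at[unit] = dispatch + travel + random.expovariate(1.0 / mean_turnaround)
    response_times.sort()
    return response_times

rt = simulate()
median, p90 = rt[len(rt) // 2], rt[int(0.9 * len(rt))]
```

    Re-running simulate() with a different fleet size or dispatch rule shows how policy alternatives shift the median and 90th-percentile response times, the two outcome measures reported in the study.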

  12. An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

    Crucial to an efficient aircraft simulation-based design is a robust data modeling methodology for both recording the information and enabling ready and reliable data transfer. To meet this goal, data modeling issues involved in multidisciplinary aircraft design are first analyzed in this study. Next, an XML-based, extensible data object model for multidisciplinary aircraft design is constructed and implemented. The implementation of the model through aircraft data binding allows the design applications to access and manipulate any disciplinary data with a lightweight and easy-to-use API. In addition, language-independent representation of aircraft disciplinary data in the model fosters interoperability amongst heterogeneous systems, thereby facilitating data sharing and exchange between various design tools and systems.

  13. Modeling an integrative physical examination program for the Departments of Defense and Veterans Affairs.

    PubMed

    Goodrich, Scott G

    2006-10-01

    Current policies governing the Departments of Defense and Veterans Affairs physical examination programs are out of step with current evidence-based medical practice. Replacing periodic and other routine physical examination types with annual preventive health assessments would afford our service members additional health benefit at reduced cost. Additionally, the Departments of Defense and Veterans Affairs repeat the physical examination process at separation and have been unable to reconcile their respective disability evaluation systems to reduce duplication and waste. A clear, coherent, and coordinated strategy to improve the relevance and utility of our physical examination programs is long overdue. This article discusses existing physical examination programs and proposes a model for a new integrative physical examination program based on need, science, and common sense.

  14. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to their non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using an experimentally validated commercial vehicle model. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  15. Research on manufacturing service behavior modeling based on block chain theory

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu

    2018-04-01

    According to the attribute characteristics of the processing craft, manufacturing service behavior is divided into service attributes, basic attributes, process attributes, and resource attributes, and an attribute information model of manufacturing service is established. The manufacturing service behavior information is divided into public and private domains. Additionally, block chain technology is introduced, and an information model of manufacturing service based on block chain principles is established, which solves the problem of sharing and securing processing-behavior information and ensures that data are not tampered with. Based on the key-pairing verification relationship, a selective publishing mechanism for manufacturing information is established, achieving traceability of product data and guaranteeing processing quality.

  16. Eukaryotic major facilitator superfamily transporter modeling based on the prokaryotic GlpT crystal structure.

    PubMed

    Lemieux, M Joanne

    2007-01-01

    The major facilitator superfamily (MFS) of transporters represents the largest family of secondary active transporters and has a diverse range of substrates. With structural information for four MFS transporters, we can see a strong structural commonality, suggesting, as predicted, a common architecture for MFS transporters. The rate of crystal structure determination for MFS transporters is slow, making modeling of both prokaryotic and eukaryotic transporters more enticing. In this review, models of the eukaryotic transporters Glut1, G6PT, OCT1, OCT2 and Pho84, based on the crystal structures of the prokaryotic transporters GlpT and LacY, are discussed. The techniques used to generate the different models are compared. In addition, the validity of these models and the strategy of using prokaryotic crystal structures to model eukaryotic proteins are discussed. For comparison, E. coli GlpT was modeled based on the E. coli LacY structure and compared to the crystal structure of GlpT, demonstrating that experimental evidence is essential for accurate modeling of membrane proteins.

  17. Acute Toxicity of Ternary Cd-Cu-Ni and Cd-Ni-Zn Mixtures to Daphnia magna: Dominant Metal Pairs Change along a Concentration Gradient.

    PubMed

    Traudt, Elizabeth M; Ranville, James F; Meyer, Joseph S

    2017-04-18

    Multiple metals are usually present in surface waters, sometimes leading to toxicity that currently is difficult to predict due to potentially non-additive mixture toxicity. Previous toxicity tests with Daphnia magna exposed to binary mixtures of Ni combined with Cd, Cu, or Zn demonstrated that Ni and Zn strongly protect against Cd toxicity, but Cu-Ni toxicity is more than additive, and Ni-Zn toxicity is slightly less than additive. To consider multiple metal-metal interactions, we exposed D. magna neonates to Cd, Cu, Ni, or Zn alone and in ternary Cd-Cu-Ni and Cd-Ni-Zn combinations in standard 48 h lethality tests. In these ternary mixtures, two metals were held constant, while the third metal was varied through a series that ranged from nonlethal to lethal concentrations. In Cd-Cu-Ni mixtures, the toxicity was less than additive, additive, or more than additive, depending on the concentration (or ion activity) of the varied metal and the additivity model (concentration-addition or independent-action) used to predict toxicity. In Cd-Ni-Zn mixtures, the toxicity was less than additive or approximately additive, depending on the concentration (or ion activity) of the varied metal but independent of the additivity model. These results demonstrate that complex interactions of potentially competing toxicity-controlling mechanisms can occur in ternary-metal mixtures but might be predicted by mechanistic bioavailability-based toxicity models.
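
    The two additivity models named above, concentration addition and independent action, have standard forms that can be sketched directly. This is a hedged illustration: the Hill slope and EC50 values are hypothetical placeholders, not the Daphnia magna parameters from the study.

```python
def hill_effect(c, ec50, slope=2.0):
    """Fraction of organisms affected under a Hill dose-response
    (the slope of 2 is an arbitrary placeholder)."""
    return c ** slope / (c ** slope + ec50 ** slope)

def independent_action(concs, ec50s):
    """Independent action: metals act via separate mechanisms, so the
    combined unaffected fraction is the product of the single-metal ones."""
    unaffected = 1.0
    for c, e in zip(concs, ec50s):
        unaffected *= 1.0 - hill_effect(c, e)
    return 1.0 - unaffected

def toxic_units(concs, ec50s):
    """Concentration addition: the mixture sits at its EC50 when the
    summed toxic units (c_i / EC50_i) reach 1."""
    return sum(c / e for c, e in zip(concs, ec50s))
```

    Observed mixture toxicity above or below either prediction is what the abstract calls more-than-additive or less-than-additive, and the two models can disagree with each other, which is why the classification depended on the additivity model used.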

  18. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  19. A hybrid computational model to explore the topological characteristics of epithelial tissues.

    PubMed

    González-Valverde, Ismael; García-Aznar, José Manuel

    2017-11-01

    Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. Separately, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with some experiments about the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Genomewide association study for susceptibility genes contributing to familial Parkinson disease

    PubMed Central

    Pankratz, Nathan; Wilk, Jemma B.; Latourelle, Jeanne C.; DeStefano, Anita L.; Halter, Cheryl; Pugh, Elizabeth W.; Doheny, Kimberly F.; Gusella, James F.; Nichols, William C.

    2009-01-01

    Five genes have been identified that contribute to Mendelian forms of Parkinson disease (PD); however, mutations have been found in fewer than 5% of patients, suggesting that additional genes contribute to disease risk. Unlike previous studies that focused primarily on sporadic PD, we have performed the first genomewide association study (GWAS) in familial PD. Genotyping was performed with the Illumina HumanCNV370Duo array in 857 familial PD cases and 867 controls. A logistic model was employed to test for association under additive and recessive modes of inheritance after adjusting for gender and age. No result met genomewide significance based on a conservative Bonferroni correction. The strongest association result was with SNPs in the GAK/DGKQ region on chromosome 4 (additive model: p = 3.4 × 10−6; OR = 1.69). Consistent evidence of association was also observed to the chromosomal regions containing SNCA (additive model: p = 5.5 × 10−5; OR = 1.35) and MAPT (recessive model: p = 2.0 × 10−5; OR = 0.56). Both of these genes have been implicated previously in PD susceptibility; however, neither was identified in previous GWAS studies of PD. Meta-analysis was performed using data from a previous case–control GWAS, and yielded improved p values for several regions, including GAK/DGKQ (additive model: p = 2.5 × 10−7) and the MAPT region (recessive model: p = 9.8 × 10−6; additive model: p = 4.8 × 10−5). These data suggest the identification of new susceptibility alleles for PD in the GAK/DGKQ region, and also provide further support for the role of SNCA and MAPT in PD susceptibility. PMID:18985386
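
    The additive and recessive modes of inheritance tested in the logistic model above correspond to standard genotype codings, sketched below. `code_genotype` is an illustrative helper, not code from the study; the only study-specific number reused is the reported GAK/DGKQ per-allele odds ratio.

```python
import math

def code_genotype(n_minor_alleles, mode):
    """Covariate coding for a biallelic SNP (0, 1, or 2 minor alleles).

    additive: the allele count enters the logistic model linearly;
    recessive: an indicator for minor-allele homozygotes only."""
    if mode == "additive":
        return n_minor_alleles
    if mode == "recessive":
        return int(n_minor_alleles == 2)
    raise ValueError("unknown mode: " + mode)

# Under the additive coding, the SNP coefficient is a per-allele log
# odds ratio; the reported GAK/DGKQ OR of 1.69 corresponds to:
beta = math.log(1.69)
```

    Because the additive coding counts alleles, heterozygotes contribute to the association signal; under the recessive coding they are grouped with major-allele homozygotes, which is why the two modes can flag different regions.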

  1. Pilot of a Learning Management System to Enhance Counselors' Relational Qualities through Mindfulness-Based Practices

    ERIC Educational Resources Information Center

    Ballinger, Julie Ann

    2013-01-01

    Mindfulness-based practices are associated with increased attentional qualities, improved self-focus styles, enhanced empathic understanding, and strengthened self-compassion, making these practices a viable addition to counselor training programs. However, current mindfulness training models are primarily designed for relief of psychological…

  2. IMPROVED VALUATION OF ECOLOGICAL BENEFITS ASSOCIATED WITH AQUATIC LIVING RESOURCES: DEVELOPMENT AND TESTING OF INDICATOR-BASED STATED PREFERENCE VALUATION AND TRANSFER

    EPA Science Inventory

    In addition to development and systematic qualitative/quantitative testing of indicator-based valuation for aquatic living resources, the proposed work will improve interdisciplinary mechanisms to model and communicate aquatic ecosystem change within SP valuation—an area...

  3. Benchmarking LSM root-zone soil moisture predictions using satellite-based vegetation indices

    USDA-ARS?s Scientific Manuscript database

    The application of modern land surface models (LSMs) to agricultural drought monitoring is based on the premise that anomalies in LSM root-zone soil moisture estimates can accurately anticipate the subsequent impact of drought on vegetation productivity and health. In addition, the water and energy ...

  4. Can model-free reinforcement learning explain deontological moral judgments?

    PubMed

    Ayars, Alisabeth

    2016-05-01

    Dual-systems frameworks propose that moral judgments are derived from both an immediate emotional response, and controlled/rational cognition. Recently Cushman (2013) proposed a new dual-system theory based on model-free and model-based reinforcement learning. Model-free learning attaches values to actions based on their history of reward and punishment, and explains some deontological, non-utilitarian judgments. Model-based learning involves the construction of a causal model of the world and allows for far-sighted planning; this form of learning fits well with utilitarian considerations that seek to maximize certain kinds of outcomes. I present three concerns regarding the use of model-free reinforcement learning to explain deontological moral judgment. First, many actions that humans find aversive from model-free learning are not judged to be morally wrong. Moral judgment must require something in addition to model-free learning. Second, there is a dearth of evidence for central predictions of the reinforcement account, e.g., that people with different reinforcement histories will, all else equal, make different moral judgments. Finally, to account for the effect of intention within the framework requires certain assumptions which lack support. These challenges are reasonable foci for future empirical/theoretical work on the model-free/model-based framework. Copyright © 2016 Elsevier B.V. All rights reserved.
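
    The model-free learning at the heart of the account can be sketched with a minimal temporal-difference-style value update; the learning rate and reward sequence below are hypothetical.

```python
def update_value(v, reward, alpha=0.1):
    """Model-free update: nudge the action's cached value toward the
    received reward by a fraction alpha of the prediction error."""
    return v + alpha * (reward - v)

# An action punished on every trial acquires a negative cached value,
# i.e. it comes to "feel" wrong independent of any causal model of
# its outcomes -- the signature Cushman ties to deontological judgment.
v = 0.0
for reward in [-1.0] * 20:
    v = update_value(v, reward)
```

    The cached value `v` converges toward the punishment value regardless of context, which illustrates the first concern: model-free aversion alone does not distinguish the merely unpleasant from the morally wrong.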

  5. Risk prediction models for selection of lung cancer screening candidates: A retrospective validation study

    PubMed Central

    ten Haaf, Kevin; Tammemägi, Martin C.; Han, Summer S.; Kong, Chung Yin; Plevritis, Sylvia K.; de Koning, Harry J.; Steyerberg, Ewout W.

    2017-01-01

    Background Selection of candidates for lung cancer screening based on individual risk has been proposed as an alternative to criteria based on age and cumulative smoking exposure (pack-years). Nine previously established risk models were assessed for their ability to identify those most likely to develop or die from lung cancer. All models considered age and various aspects of smoking exposure (smoking status, smoking duration, cigarettes per day, pack-years smoked, time since smoking cessation) as risk predictors. In addition, some models considered factors such as gender, race, ethnicity, education, body mass index, chronic obstructive pulmonary disease, emphysema, personal history of cancer, personal history of pneumonia, and family history of lung cancer. Methods and findings Retrospective analyses were performed on 53,452 National Lung Screening Trial (NLST) participants (1,925 lung cancer cases and 884 lung cancer deaths) and 80,672 Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) ever-smoking participants (1,463 lung cancer cases and 915 lung cancer deaths). Six-year lung cancer incidence and mortality risk predictions were assessed for (1) calibration (graphically) by comparing the agreement between the predicted and the observed risks, (2) discrimination (area under the receiver operating characteristic curve [AUC]) between individuals with and without lung cancer (death), and (3) clinical usefulness (net benefit in decision curve analysis) by identifying risk thresholds at which applying risk-based eligibility would improve lung cancer screening efficacy. To further assess performance, risk model sensitivities and specificities in the PLCO were compared to those based on the NLST eligibility criteria. Calibration was satisfactory, but discrimination ranged widely (AUCs from 0.61 to 0.81). 
The models outperformed the NLST eligibility criteria over a substantial range of risk thresholds in decision curve analysis, with a higher sensitivity for all models and a slightly higher specificity for some models. The PLCOm2012, Bach, and Two-Stage Clonal Expansion incidence models had the best overall performance, with AUCs >0.68 in the NLST and >0.77 in the PLCO. These three models had the highest sensitivity and specificity for predicting 6-y lung cancer incidence in the PLCO chest radiography arm, with sensitivities >79.8% and specificities >62.3%. In contrast, the NLST eligibility criteria yielded a sensitivity of 71.4% and a specificity of 62.2%. Limitations of this study include the lack of identification of optimal risk thresholds, as this requires additional information on the long-term benefits (e.g., life-years gained and mortality reduction) and harms (e.g., overdiagnosis) of risk-based screening strategies using these models. In addition, information on some predictor variables included in the risk prediction models was not available. Conclusions Selection of individuals for lung cancer screening using individual risk is superior to selection criteria based on age and pack-years alone. The benefits, harms, and feasibility of implementing lung cancer screening policies based on risk prediction models should be assessed and compared with those of current recommendations. PMID:28376113
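
    The sensitivity/specificity comparison of risk-based eligibility above amounts to thresholding predicted risks; a minimal sketch follows, where the risks, labels, and threshold are toy values, not NLST/PLCO data.

```python
def sens_spec(risks, labels, threshold):
    """Sensitivity and specificity of screening eligibility defined as
    predicted risk >= threshold (labels: 1 = lung cancer, 0 = none)."""
    tp = sum(r >= threshold and y == 1 for r, y in zip(risks, labels))
    fn = sum(r < threshold and y == 1 for r, y in zip(risks, labels))
    tn = sum(r < threshold and y == 0 for r, y in zip(risks, labels))
    fp = sum(r >= threshold and y == 0 for r, y in zip(risks, labels))
    return tp / (tp + fn), tn / (tn + fp)

sensitivity, specificity = sens_spec(
    [0.9, 0.8, 0.4, 0.2], [1, 0, 1, 0], threshold=0.5)
```

    Sweeping the threshold traces out the trade-off summarized by the AUC, and fixing a threshold yields the single sensitivity/specificity pair that the study compares against the NLST age and pack-year criteria.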

  6. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models.

    PubMed

    Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco

    2018-01-01

    Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude, and its performance was 3.16 times higher than the benchmark and 1.47 times higher than that of the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead, a limitation that future studies could try to reduce.
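
    The benchmark comparison above can be sketched as follows. A seasonal-naïve forecaster is one common choice of naïve model (an assumption here, since the abstract does not specify it), and the case counts are invented, not São Paulo data.

```python
def mse(preds, obs):
    """Mean squared error of one-step forecasts."""
    return sum((p - o) ** 2 for p, o in zip(preds, obs)) / len(preds)

def relative_performance(model_mse, benchmark_mse):
    # >1 means the model beats the naive benchmark (ratio of errors)
    return benchmark_mse / model_mse

# Seasonal-naive benchmark: forecast each period with the value observed
# one full season earlier (hypothetical counts, season length 4).
history = [10, 40, 90, 30, 10, 42, 95, 28]
season = 4
preds = [history[i - season] for i in range(season, len(history))]
obs = history[season:]
bench_mse = mse(preds, obs)
```

    A candidate model's error divided into `bench_mse` gives the kind of relative-performance figure the abstract reports (e.g., 3.16 times the benchmark).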

  7. Structural equation model of the relationships among inquiry-based instruction, attitudes toward science, achievement in science, and gender

    NASA Astrophysics Data System (ADS)

    Wallace, Stephen R.

    The purpose of this study was to clarify the muddled state of the magnitude and direction of the relationships among inquiry-based instruction, attitudes toward science, and science achievement, as students progressed from middle school into high school. The problem under investigation was two-fold. The first was to create and test a structural equation model describing the direction and magnitude of the relationships. The second was to determine gender differences in the relationships. Data collected from the Longitudinal Study of American Youth (LSAY) over a three-year period were used to create and test the structural equation model. Results of this study indicate inquiry-based instruction is effective in positively influencing 7th- and 8th-grade students' understandings of science concepts. Additionally, inquiry-based instruction does not have an adverse influence on science achievement in 9th grade. If the primary goal is science achievement, then an inquiry-based approach to instruction is effective. On the other hand, if the primary goal of science instruction is to positively influence students' attitudes toward science (in particular, perceptions of the usefulness of science) then inquiry-based approaches may not be the most effective method of instruction. Inquiry-based instruction adversely influences 7th-grade males' attitudes toward science and has no significant influence on 7th-grade females' attitudes toward science. In 8th grade, inquiry-based instruction has no significant influence on either genders' attitudes toward science. Not until the 9th grade does inquiry-based instruction have a significantly positive influence on males' and females' perceptions of the usefulness of science. Additionally, prior attitudes toward science significantly influences science achievement only in 8th grade and science achievement influences attitudes toward science only in 9th grade. 
Recommendations for further research are based on the findings and limitations of this study. Methodological concerns and recommendations focus primarily on limitations in the design of this study and the use of large-scale databases. Theoretical concerns focus on recommendations for areas of additional research; principally, they are based on theoretical questions arising out of this study.

  8. Constraint Based Modeling Going Multicellular.

    PubMed

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches.

  9. Evaluating models of remember-know judgments: complexity, mimicry, and discriminability.

    PubMed

    Cohen, Andrew L; Rotello, Caren M; Macmillan, Neil A

    2008-10-01

    Remember-know judgments provide additional information in recognition memory tests, but the nature of this information and the attendant decision process are in dispute. Competing models have proposed that remember judgments reflect a sum of familiarity and recollective information (the one-dimensional model), are based on a difference between these strengths (STREAK), or are purely recollective (the dual-process model). A choice among these accounts is sometimes made by comparing the precision of their fits to data, but this strategy may be muddied by differences in model complexity: Some models that appear to provide good fits may simply be better able to mimic the data produced by other models. To evaluate this possibility, we simulated data with each of the models in each of three popular remember-know paradigms, then fit those data to each of the models. We found that the one-dimensional model is generally less complex than the others, but despite this handicap, it dominates the others as the best-fitting model. For both reasons, the one-dimensional model should be preferred. In addition, we found that some empirical paradigms are ill-suited for distinguishing among models. For example, data collected by soliciting remember/know/new judgments--that is, the trinary task--provide a particularly weak ground for distinguishing models. Additional tables and figures may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, at www.psychonomic.org/archive.

  10. Model-Based GN and C Simulation and Flight Software Development for Orion Missions beyond LEO

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan; Milenkovic, Zoran; Henry, Joel; Buttacoli, Michael

    2014-01-01

    For Orion missions beyond low Earth orbit (LEO), the Guidance, Navigation, and Control (GN&C) system is being developed using a model-based approach for simulation and flight software. Lessons learned from the development of GN&C algorithms and flight software for the Orion Exploration Flight Test One (EFT-1) vehicle have been applied to the development of further capabilities for Orion GN&C beyond EFT-1. Continuing the use of a Model-Based Development (MBD) approach with the Matlab®/Simulink® tool suite, the process for GN&C development and analysis has been largely improved. Furthermore, a model-based simulation environment in Simulink, rather than an external C-based simulation, greatly eases the process for development of flight algorithms. The benefits seen by employing lessons learned from EFT-1 are described, as well as the approach for implementing additional MBD techniques. Also detailed are the key enablers for improvements to the MBD process, including enhanced configuration management techniques for model-based software systems, automated code and artifact generation, and automated testing and integration.

  11. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation resolves only the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions experimentally identified as a characteristic feature of these flows would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was likewise inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important.
Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant- coefficient Scale-Similarity model was the most successful in this endeavor although not totally satisfactory. With a model for the additional term in the momentum equation, the predictions of the constant-coefficient Smagorinsky and constant-coefficient Scale-Similarity models were improved to a certain extent; however, most of the improvement was obtained for the Gradient model. The previously derived model and a newly developed model for the additional term in the momentum equation were both tested, with the new model proving even more successful than the previous model at reproducing the high density-gradient magnitude regions. Several dynamic SGS-flux models, in which the SGS-flux model coefficient is computed as part of the simulation, were tested in conjunction with the new model for this additional term in the momentum equation. The most successful dynamic model was a "mixed" model combining the Smagorinsky and Gradient models. This work is directly applicable to simulations of gas turbine engines (aeronautics) and rocket engines (astronautics).

  12. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. 
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. 
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
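    The grid-based estimator described in this abstract can be sketched numerically: D̂ = N̂/Â, where the effective sampling area Â adds a boundary strip (here taken as half the full MMDM on each side of a square grid). All numeric values below (grid size, N̂, MMDM) are hypothetical, not taken from the study.

    ```python
    # Sketch of the grid-based density estimator D-hat = N-hat / A-hat.
    # The effective area adds a strip of width MMDM/2 around the trapping grid.

    def effective_area_ha(grid_side_m: float, mmdm_m: float) -> float:
        """Grid area plus a boundary strip of half the full MMDM, in hectares."""
        side = grid_side_m + mmdm_m  # a strip of mmdm/2 on each of the two sides
        return side * side / 10_000.0  # m^2 -> ha

    def density_per_ha(n_hat: float, grid_side_m: float, mmdm_m: float) -> float:
        """D-hat = N-hat / A-hat (animals per hectare)."""
        return n_hat / effective_area_ha(grid_side_m, mmdm_m)

    # Hypothetical example: 42 animals estimated on a 100 m x 100 m grid,
    # species-specific full MMDM of 30 m.
    d_hat = density_per_ha(42, 100.0, 30.0)
    print(round(d_hat, 2))  # animals per hectare
    ```

    The same N̂ with a larger MMDM yields a lower density, which is why the abstract flags the sensitivity of grid-based estimates to the boundary-strip assumption.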

  13. Investigation into stability of poly(vinyl alcohol)-based Opadry® II films.

    PubMed

    Koo, Otilia M Y; Fiske, John D; Yang, Haitao; Nikfar, Faranak; Thakur, Ajit; Scheer, Barry; Adams, Monica L

    2011-06-01

Poly(vinyl alcohol) (PVA)-based formulations are used for pharmaceutical tablet coating and offer numerous advantages. Our objective was to study the stability of PVA-based coating films in the presence of acidic additives, alkaline additives, and various common impurities typically found in tablet formulations. Opadry® II 85F was used as the model PVA-based coating formulation. The additives and impurities were incorporated into the polymer suspension prior to film casting. Control and test films were analyzed before and after exposure to 40°C/75% relative humidity. Tests included film disintegration, size-exclusion chromatography, thermal analysis, and microscopy. Under stressed conditions, the acidic additives (hydrochloric acid (HCl) and ammonium bisulfate (NH(4)HSO(4))) negatively impacted Opadry® II 85F film disintegration, while NaOH, formaldehyde, and peroxide did not. The absence of PVA species from the disintegration media corresponded to an increase in PVA crystallinity for reacted films containing HCl. Films with NH(4)HSO(4) exhibited a slower rate of reactivity and less elevation in melting temperature, with no clear change in melting enthalpy. Acidic additives posed a greater risk of compromising the disintegration of PVA-based coatings than alkaline additives or common impurities. The mechanism of acid-induced reactivity may differ depending on the acidic species present (HCl vs. NH(4)HSO(4)).

  14. Development of an integrated generic model for multi-scale assessment of the impacts of agro-ecosystems on major ecosystem services in West Africa.

    PubMed

    Belem, Mahamadou; Saqalli, Mehdi

    2017-11-01

This paper presents an integrated model assessing the impacts of climate change, agro-ecosystem and demographic transition patterns on major ecosystem services in West Africa, along with a partial overview of economic aspects (poverty reduction, food self-sufficiency and income generation). The model is based on an agent-based model associated with a soil model and a multi-scale spatial model. The resulting Model for West-Africa Agro-Ecosystem Integrated Assessment (MOWASIA) is ecologically generic, meaning it is designed for all Sudano-Sahelian environments, and may therefore be used as an experimentation facility for testing different scenarios combining ecological and socioeconomic dimensions. A case study in Burkina Faso is examined to assess the environmental and economic performance of semi-continuous and continuous farming systems. Results show that the semi-continuous system, using organic fertilizer and fallowing practices, contributes better to environmental preservation and food security than the economically stronger continuous system. This study also showed that farmer heterogeneity could play an important role in planning and assessing agricultural policies, and that MOWASIA is an effective tool for designing agro-ecosystems and analysing their impacts. Copyright © 2017. Published by Elsevier Ltd.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podestà, M., E-mail: mpodesta@pppl.gov; Gorelenkova, M.; Fredrickson, E. D.

Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  16. Evaluation of 3D Additively Manufactured Canine Brain Models for Teaching Veterinary Neuroanatomy.

    PubMed

    Schoenfeld-Tacher, Regina M; Horn, Timothy J; Scheviak, Tyler A; Royal, Kenneth D; Hudson, Lola C

    Physical specimens are essential to the teaching of veterinary anatomy. While fresh and fixed cadavers have long been the medium of choice, plastinated specimens have gained widespread acceptance as adjuncts to dissection materials. Even though the plastination process increases the durability of specimens, these are still derived from animal tissues and require periodic replacement if used by students on a regular basis. This study investigated the use of three-dimensional additively manufactured (3D AM) models (colloquially referred to as 3D-printed models) of the canine brain as a replacement for plastinated or formalin-fixed brains. The models investigated were built based on a micro-MRI of a single canine brain and have numerous practical advantages, such as durability, lower cost over time, and reduction of animal use. The effectiveness of the models was assessed by comparing performance among students who were instructed using either plastinated brains or 3D AM models. This study used propensity score matching to generate similar pairs of students. Pairings were based on gender and initial anatomy performance across two consecutive classes of first-year veterinary students. Students' performance on a practical neuroanatomy exam was compared, and no significant differences were found in scores based on the type of material (3D AM models or plastinated specimens) used for instruction. Students in both groups were equally able to identify neuroanatomical structures on cadaveric material, as well as respond to questions involving application of neuroanatomy knowledge. Therefore, we postulate that 3D AM canine brain models are an acceptable alternative to plastinated specimens in teaching veterinary neuroanatomy.
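    The matching step described above (propensity score matching on gender and initial performance) can be illustrated with a minimal greedy nearest-neighbor sketch. The scores below are hypothetical stand-ins for estimated propensity scores; real implementations typically estimate scores from covariates via logistic regression and enforce a caliper on match distance.

    ```python
    # Greedy nearest-neighbor matching on one-dimensional propensity scores.
    # This is an illustrative simplification, not the study's exact procedure.

    def greedy_match(treated, control):
        """Pair each treated-group score with the closest unused control score."""
        available = list(control)
        pairs = []
        for t in treated:
            best = min(available, key=lambda c: abs(c - t))
            available.remove(best)  # each control unit is matched at most once
            pairs.append((t, best))
        return pairs

    # Hypothetical scores for two treated and three control students.
    print(greedy_match([0.30, 0.70], [0.10, 0.32, 0.68]))
    # [(0.3, 0.32), (0.7, 0.68)]
    ```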

  17. Influence of Alveolar Bone Defects on the Stress Distribution in Quad Zygomatic Implant-Supported Maxillary Prosthesis.

    PubMed

    Duan, Yuanyuan; Chandran, Ravi; Cherry, Denise

    The purpose of this study was to create three-dimensional composite models of quad zygomatic implant-supported maxillary prostheses with a variety of alveolar bone defects around implant sites, and to investigate the stress distribution in the surrounding bone using the finite element analysis (FEA) method. Three-dimensional models of titanium zygomatic implants, maxillary prostheses, and human skulls were created and assembled using Mimics based on microcomputed tomography and cone beam computed tomography images. A variety of additional bone defects were created at the locations of four zygomatic implants to simulate multiple clinical scenarios. The volume meshes were created and exported into FEA software. Material properties were assigned respectively for all the structures, and von Mises stress data were collected and plotted in the postprocessing module. The maximum stress in the surrounding bone was located in the crestal bone around zygomatic implants. The maximum stress in the prostheses was located at the angled area of the implant-abutment connection. The model with anterior defects had a higher peak stress value than the model with posterior defects. All the models with additional bone defects had higher maximum stress values than the control model without additional bone loss. Additional alveolar bone loss has a negative influence on the stress concentration in the surrounding bone of quad zygomatic implant-supported prostheses. More care should be taken if these additional bone defects are at the sites of anterior zygomatic implants.

  18. Simulation of Blast Loading on an Ultrastructurally-based Computational Model of the Ocular Lens

    DTIC Science & Technology

    2016-12-01

organelles. Additionally, the cell membranes demonstrated the classic ball-and-socket loops. For the SEM images, they were placed in two fixatives and mounted...considered (fibrous network and matrix), both components are modelled using a hyper-elastic framework, and the resulting constitutive model is embedded in a...within the framework of hyper-elasticity). Full details on the linearization procedures that were adopted in these previous models or the convergence

  19. Transactions in domain-specific information systems

    NASA Astrophysics Data System (ADS)

    Zacek, Jaroslav

    2017-07-01

A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most implementations are domain-specific (e.g. accounting IS, resource planning IS). Therefore, a generic transaction model is needed to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique and is formalized using Petri net theory. The first part of the paper presents common business processes and analyses related to business process modelling. The second part defines the transactional model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized with a Petri net modelling tool. The third part shows an application of the generic transaction model. The last part concludes with results and discusses the practical usability of the generic transaction model.
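    The Petri net formalization mentioned above rests on a simple firing rule: a transition is enabled when every input place holds at least one token, and firing consumes one token per input place and produces one per output place. A toy "request → commit" transaction step (the place names are hypothetical, not from the paper's model):

    ```python
    # Minimal Petri net firing rule, as used to formalize transaction states.

    def enabled(marking, inputs):
        """A transition is enabled iff every input place holds >= 1 token."""
        return all(marking.get(p, 0) >= 1 for p in inputs)

    def fire(marking, inputs, outputs):
        """Consume one token per input place, produce one per output place."""
        if not enabled(marking, inputs):
            raise ValueError("transition not enabled")
        m = dict(marking)
        for p in inputs:
            m[p] -= 1
        for p in outputs:
            m[p] = m.get(p, 0) + 1
        return m

    # Hypothetical transaction net: a request commits when a resource is free.
    m0 = {"requested": 1, "resource_free": 1}
    m1 = fire(m0, inputs=["requested", "resource_free"], outputs=["committed"])
    print(m1)  # {'requested': 0, 'resource_free': 0, 'committed': 1}
    ```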

  20. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience and developing computational models to predict operator performance in complex situations offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before being able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  1. Mechanical properties of multifunctional structure with viscoelastic components based on FVE model

    NASA Astrophysics Data System (ADS)

    Hao, Dong; Zhang, Lin; Yu, Jing; Mao, Daiyong

    2018-02-01

Based on the models of Lion and Kardelky (2004) and Hofer and Lion (2009), a finite viscoelastic (FVE) constitutive model, considering predeformation-, frequency- and amplitude-dependent properties, was proposed in our earlier paper [1]. Here, the FVE model is applied to investigating the dynamic characteristics of a multifunctional structure with viscoelastic components. Combining the FVE model with finite element theory, the dynamic model of the multifunctional structure can be obtained. Additionally, parametric identification and experimental verification are carried out via frequency-sweep tests. The results show that the computational data agree well with the experimental data: the FVE model successfully captures the dynamic characteristics of the viscoelastic materials used in the multifunctional structure. The multifunctional structure technology has also been verified by in-orbit experiments.

  2. Additivity and Interactions in Ecotoxicity of Pollutant Mixtures: Some Patterns, Conclusions, and Open Questions

    PubMed Central

    Rodea-Palomares, Ismael; González-Pleiter, Miguel; Martín-Betancor, Keila; Rosal, Roberto; Fernández-Piñas, Francisca

    2015-01-01

Understanding the effects of exposure to chemical mixtures is a common goal of pharmacology and ecotoxicology. In risk-assessment-oriented ecotoxicology, defining the scope of application of additivity models has received the utmost attention in the last 20 years, since such models potentially allow one to predict the effect of any chemical mixture relying on individual chemical information only. The gold standard for additivity in ecotoxicology has proven to be Loewe additivity, which originated the so-called Concentration Addition (CA) additivity model. In pharmacology, the search for interactions or deviations from additivity (synergism and antagonism) has similarly captured the attention of researchers over the last 20 years and has resulted in the definition and application of the Combination Index (CI) Theorem. CI is based on Loewe additivity, but focused on the identification and quantification of synergism and antagonism. Although additive models have demonstrated surprisingly good predictive power in chemical mixture risk assessment, concerns still exist due to the occurrence of unpredictable synergism or antagonism in certain experimental situations. In the present work, we summarize the parallel history of development of the CA, IA (Independent Action), and CI models. We also summarize the applicability of these concepts in ecotoxicology, how their information may be integrated, and the possibility of predicting synergism. Inside the box, the main question remaining is whether it is worthwhile to consider departures from additivity in mixture risk assessment and how to predict interactions among certain mixture components. Outside the box, the main question is whether the results observed under the experimental constraints imposed by fractional approaches are a bona fide reflection of what would be expected from chemical mixtures in real-world circumstances. PMID:29051468
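    The two Loewe-additivity-based quantities discussed in this abstract have simple closed forms: the Concentration Addition prediction ECx_mix = 1 / Σ(p_i / ECx_i) for components present at fractions p_i, and the Combination Index CI = Σ(d_i / Dx_i) for doses d_i that jointly produce effect x. A minimal sketch with hypothetical values:

    ```python
    # Concentration Addition (CA) and Combination Index (CI), both rooted in
    # Loewe additivity. All concentrations below are hypothetical.

    def ca_mixture_ecx(fractions, ecx_values):
        """CA prediction: ECx_mix = 1 / sum(p_i / ECx_i) for fractions p_i."""
        return 1.0 / sum(p / e for p, e in zip(fractions, ecx_values))

    def combination_index(doses, dx_values):
        """CI = sum(d_i / Dx_i), where Dx_i is the dose of component i alone
        producing effect x. CI < 1 suggests synergism, CI = 1 additivity,
        CI > 1 antagonism."""
        return sum(d / dx for d, dx in zip(doses, dx_values))

    # Hypothetical 50:50 binary mixture of chemicals with EC50 of 2.0 and 8.0.
    print(ca_mixture_ecx([0.5, 0.5], [2.0, 8.0]))       # 3.2
    print(combination_index([1.0, 2.0], [2.0, 8.0]))    # 0.75 -> synergism
    ```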

  3. 12 CFR Appendix H to Part 222 - Appendix H-Model Forms for Risk-Based Pricing and Credit Score Disclosure Exception Notices

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... addresses that may change over time. ii. The addition of graphics or icons, such as the person's corporate... rate. All forms contained in this appendix are models; their use is optional. 3. A person may change... required to conduct consumer testing when rearranging the format of the model forms. a. Acceptable changes...

  4. Incorporating Resilience into Dynamic Social Models

    DTIC Science & Technology

    2016-07-20

solved by simply using the information provided by the scenario. Instead, additional knowledge is required from relevant fields that study these...resilience function by leveraging Bayesian Knowledge Bases (BKBs), a probabilistic reasoning network framework [5], [6]. BKBs allow for inferencing...reasoning network framework based on Bayesian Knowledge Bases (BKBs). BKBs are central to our social resilience framework as they are used to

  5. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well known energy loss processes to develop a stochastic Monte-Carlo based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times including the ATM, TGF-Smad, and WNT signaling pathways.
We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how GCR event rates mapped to biological signaling induction and relaxation times. We considered several hypotheses related to signaling and cancer risk, and then performed simulations for conditions where aberrant or adaptive signaling would occur on long-duration space missions. Our results do not support the conventional assumptions of dose, linearity and additivity. A discussion on how event-based systems biology models, which focus on biological signaling as the mechanism to propagate damage or adaptation, can be further developed for cancer and CNS space radiation risk projections is given.

  6. Using historical and projected future climate model simulations as drivers of agricultural and biological models (Invited)

    NASA Astrophysics Data System (ADS)

    Stefanova, L. B.

    2013-12-01

    Climate model evaluation is frequently performed as a first step in analyzing climate change simulations. Atmospheric scientists are accustomed to evaluating climate models through the assessment of model climatology and biases, the models' representation of large-scale modes of variability (such as ENSO, PDO, AMO, etc) and the relationship between these modes and local variability (e.g. the connection between ENSO and the wintertime precipitation in the Southeast US). While these provide valuable information about the fidelity of historical and projected climate model simulations from an atmospheric scientist's point of view, the application of climate model data to fields such as agriculture, ecology and biology may require additional analyses focused on the particular application's requirements and sensitivities. Typically, historical climate simulations are used to determine a mapping between the model and observed climate, either through a simple (additive for temperature or multiplicative for precipitation) or a more sophisticated (such as quantile matching) bias correction on a monthly or seasonal time scale. Plants, animals and humans however are not directly affected by monthly or seasonal means. To assess the impact of projected climate change on living organisms and related industries (e.g. agriculture, forestry, conservation, utilities, etc.), derivative measures such as the heating degree-days (HDD), cooling degree-days (CDD), growing degree-days (GDD), accumulated chill hours (ACH), wet season onset (WSO) and duration (WSD), among others, are frequently useful. We will present a comparison of the projected changes in such derivative measures calculated by applying: (a) the traditional temperature/precipitation bias correction described above versus (b) a bias correction based on the mapping between the historical model and observed derivative measures themselves. 
In addition, we will present and discuss examples of various application-based climate model evaluations, such as: (a) agricultural crop yield estimates and (b) species population viability estimates modeled using observed climate data vs. historical climate simulations.
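    The two simple bias corrections mentioned in this abstract (additive for temperature, multiplicative for precipitation) and one of the derivative measures, growing degree-days (GDD), can be sketched as follows. All inputs are hypothetical; the base temperature of 10°C is a common but crop-dependent choice.

    ```python
    # Additive/multiplicative climatological bias correction and a simple
    # growing degree-day (GDD) accumulation. Values are illustrative only.

    def correct_temperature(model_temps, model_clim_mean, obs_clim_mean):
        """Additive correction: shift model temperatures by the mean bias."""
        bias = obs_clim_mean - model_clim_mean
        return [t + bias for t in model_temps]

    def correct_precip(model_precip, model_clim_mean, obs_clim_mean):
        """Multiplicative correction: scale by the observed/modeled mean ratio."""
        scale = obs_clim_mean / model_clim_mean
        return [p * scale for p in model_precip]

    def growing_degree_days(tmax, tmin, base=10.0):
        """Daily GDD: max(0, (Tmax + Tmin)/2 - base), summed over days."""
        return sum(max(0.0, (hi + lo) / 2.0 - base) for hi, lo in zip(tmax, tmin))

    print(correct_temperature([20.0, 22.0], model_clim_mean=21.0, obs_clim_mean=20.0))  # [19.0, 21.0]
    print(growing_degree_days([25.0, 30.0], [15.0, 20.0]))  # 10.0 + 15.0 = 25.0
    ```

    The abstract's alternative strategy, correcting the derivative measure itself, would apply the same mapping to the GDD (or HDD/CDD) series instead of to the daily temperatures.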

  7. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
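    The c-statistic compared throughout this abstract is the probability that a randomly chosen patient who experienced the outcome received a higher predicted risk than a randomly chosen patient who did not (ties counting one half). A brute-force sketch with hypothetical risk scores:

    ```python
    # Concordance (c) statistic for a binary outcome, computed directly from
    # its pairwise definition. Scores and outcomes below are hypothetical.

    def c_statistic(scores, outcomes):
        """P(score of a case > score of a control), ties counted as 0.5."""
        cases = [s for s, y in zip(scores, outcomes) if y == 1]
        controls = [s for s, y in zip(scores, outcomes) if y == 0]
        concordant = 0.0
        for c in cases:
            for k in controls:
                if c > k:
                    concordant += 1.0
                elif c == k:
                    concordant += 0.5
        return concordant / (len(cases) * len(controls))

    # Hypothetical predicted risks: events tend to receive higher scores.
    scores = [0.9, 0.8, 0.4, 0.3, 0.2]
    outcomes = [1, 1, 0, 1, 0]
    print(c_statistic(scores, outcomes))  # 0.833...
    ```

    A value of 0.5 is chance-level discrimination; the DCG/age model's 0.830 for death means 83% of case-control pairs were ranked concordantly.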

  8. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate photon transport in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and potential of the proposed model for simulating photon transport in free space.

  9. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  10. Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery

    NASA Astrophysics Data System (ADS)

Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.

    2017-12-01

In this paper we consider the problem of planning and optimization of cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite-volume approximation of the phase-averaged Pennes bioheat transfer model is applied, and the flux relaxation method is used to improve the stability of the scheme. Skin tissue is treated as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For cutaneous cryosurgery, the mounting of an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.
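    The Pennes bioheat equation underlying the abstract's model is rho·c·∂T/∂t = k·∇²T + w_b·rho_b·c_b·(T_a − T) + q_m. A minimal 1D explicit finite-volume step is sketched below; it omits the paper's phase averaging and flux relaxation, and all parameter values are illustrative only.

    ```python
    # One explicit finite-volume step of the 1D Pennes bioheat equation with
    # fixed boundary temperatures (a crude cryoprobe-like cold boundary).

    def pennes_step(T, dt, dx, k, rho_c, perfusion, T_a, q_m):
        """Advance interior cells one time step; boundaries are held fixed."""
        new = T[:]
        for i in range(1, len(T) - 1):
            conduction = k * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx**2
            blood = perfusion * (T_a - T[i])  # w_b * rho_b * c_b lumped together
            new[i] = T[i] + dt / rho_c * (conduction + blood + q_m)
        return new

    # Hypothetical tissue strip: cold left boundary at -40 C, body core 37 C.
    T = [-40.0] + [37.0] * 9
    for _ in range(100):
        T = pennes_step(T, dt=0.01, dx=0.001, k=0.5, rho_c=3.6e6,
                        perfusion=40e3, T_a=37.0, q_m=420.0)
    print(round(T[1], 2))  # cooled well below body temperature near the probe
    ```

    The explicit update is only stable for k·dt/(rho_c·dx²) below 1/2, which is one motivation for the flux relaxation method the paper applies.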

  11. Application of artificial neural networks in nonlinear analysis of trusses

    NASA Technical Reports Server (NTRS)

    Alam, J.; Berke, L.

    1991-01-01

A method is developed to incorporate a neural network model of material response, based upon the backpropagation algorithm, into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
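    The comparison scheme mentioned above, linear interpolation of tabulated material data, can be sketched as piecewise-linear lookup in a stress-strain table. The data points below are hypothetical:

    ```python
    # Piecewise-linear interpolation of tabulated material response
    # (strain -> stress), the baseline against which the neural network
    # material model is compared. Table values are hypothetical.

    def interp_stress(strain, table):
        """Interpolate in a sorted (strain, stress) table, clamping at the ends."""
        if strain <= table[0][0]:
            return table[0][1]
        for (x0, y0), (x1, y1) in zip(table, table[1:]):
            if strain <= x1:
                t = (strain - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return table[-1][1]  # clamp beyond the last tabulated point

    # Hypothetical nonlinear stress-strain data (strain, stress in MPa).
    curve = [(0.0, 0.0), (0.001, 200.0), (0.002, 350.0), (0.004, 450.0)]
    print(interp_stress(0.0015, curve))  # 275.0
    ```

    A trained network replaces this lookup with a smooth strain-to-stress mapping, which matters when the analysis queries strains between sparse data points.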

  12. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

The research addresses the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann’s law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods for evaluating these screen profile coefficients were developed, based either on processing colorimetric measurements or on processing technical documentation data.
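    Under Grassmann's law the screen's color transformation is linear, XYZ = M·RGB for some 3×3 matrix M, so the identification problem described above reduces to linear least squares over paired colorimetric measurements. The matrix and samples below are synthetic, not taken from the paper:

    ```python
    # Least-squares identification of the 3x3 color profile matrix M from
    # paired (RGB, XYZ) measurements, exploiting Grassmann additivity.
    import numpy as np

    def fit_profile_matrix(rgb, xyz):
        """Estimate M in xyz ~= M @ rgb, where columns are samples."""
        # Solve rgb.T @ M.T = xyz.T in the least-squares sense.
        M_T, *_ = np.linalg.lstsq(rgb.T, xyz.T, rcond=None)
        return M_T.T

    # Synthetic ground-truth matrix and noise-free "measurements".
    M_true = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])
    rgb = np.random.default_rng(0).uniform(0.0, 1.0, size=(3, 8))
    xyz = M_true @ rgb

    M_est = fit_profile_matrix(rgb, xyz)
    print(np.allclose(M_est, M_true, atol=1e-8))  # True
    ```

    With noisy real measurements the fit is no longer exact, and gamma (tone-curve) nonlinearity must be removed before the linear model applies.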

  13. Adding Value to the Network: Exploring the Software as a Service and Platform as a Service Models for Mobile Operators

    NASA Astrophysics Data System (ADS)

    Gonçalves, Vânia

    The environments of software development and software provision are shifting to Web-based platforms supported by Platform/Software as a Service (PaaS/SaaS) models. This paper will make the case that there is equally an opportunity for mobile operators to identify additional sources of revenue by exposing network functionalities through Web-based service platforms. By elaborating on the concepts, benefits and risks of SaaS and PaaS, several factors that should be taken into consideration in applying these models to the telecom world are delineated.

  14. Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.

    PubMed

    Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore

    2015-12-01

    A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on multiple hypothesis testing and bootstrapped-LASSO-selected food items, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, the models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable contribution to determining predictors, as our models' predictive power is higher than that of comparable studies. As a unique feature, we additionally confirmed the models' power on a test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
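
    The bootstrapped-LASSO selection step can be sketched as follows. The data are synthetic stand-ins (the real predictors were food items, anthropometrics, and the FADS1 genotype), and the coordinate-descent LASSO is a generic implementation, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10                      # synthetic: 10 candidate "food items"
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def lasso(X, y, alpha=0.1, iters=50):
    """Plain coordinate-descent LASSO for (1/2n)||y - Xb||^2 + alpha*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return beta

# Bootstrapped selection: count how often each predictor survives
# LASSO shrinkage across resamples, then keep the stable ones.
B = 50
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)
    counts += np.abs(lasso(X[idx], y[idx])) > 1e-8
selected = set(np.where(counts / B >= 0.8)[0])
```

    The 0.8 stability threshold is an arbitrary illustrative choice; only predictors with genuine signal tend to survive it.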

  15. Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models

    NASA Astrophysics Data System (ADS)

    Lammers, R. W.; Bledsoe, B. P.

    2016-12-01

    Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields predictive accuracy similar to that of other stream-power-based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available remote sensing data.
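
    An excess-stream-power bedload formulation of this general type can be sketched as below; the coefficients and the critical stream power are placeholders for illustration, not the fitted values from this work:

```python
# Illustrative excess-stream-power bedload calculation. The power-law
# coefficients (a, b) and the critical stream power omega_c are
# hypothetical placeholders, NOT the paper's fitted equation.
def specific_stream_power(Q, S, w, rho=1000.0, g=9.81):
    """Specific (unit) stream power in W/m^2: omega = rho*g*Q*S/w,
    with discharge Q (m^3/s), slope S (-), and channel width w (m)."""
    return rho * g * Q * S / w

def bedload_rate(omega, omega_c, a=0.01, b=1.5):
    """Transport rate as a power law of excess stream power;
    zero below the critical threshold."""
    excess = max(omega - omega_c, 0.0)
    return a * excess ** b

omega = specific_stream_power(Q=20.0, S=0.002, w=10.0)   # 39.24 W/m^2
qb = bedload_rate(omega, omega_c=10.0)
```

    Note that Q/w is the specific discharge, which is what lets the formulation avoid flow depth entirely.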

  16. The use of multiple models in case-based diagnosis

    NASA Technical Reports Server (NTRS)

    Karamouzis, Stamos T.; Feyock, Stefan

    1993-01-01

    The work described in this paper has as its goal the integration of a number of reasoning techniques into a unified intelligent information system that will aid flight crews with malfunction diagnosis and prognostication. One of these approaches involves using the extensive archive of information contained in aircraft accident reports, along with various models of the aircraft, as the basis for case-based reasoning about malfunctions. Case-based reasoning (CBR) draws conclusions on the basis of similarities between the present situation and prior experience. We maintain that the ability of a CBR program to reason about physical systems is significantly enhanced by the addition of various models to the CBR program. This paper describes the diagnostic concepts implemented in a prototypical case-based reasoner that operates in the domain of in-flight fault diagnosis, the various models used in conjunction with the reasoner's CBR component, and results from a preliminary evaluation.

  17. Virtual-optical information security system based on public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben

    2005-01-01

    A virtual-optics-based encryption model with the aid of a public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on a virtual-optics scheme (VOS) is used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining some possible attacks from the viewpoint of cryptanalysis.
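
    The hybrid pattern itself (asymmetric wrap of a session key, symmetric encipherment of the bulk data) can be sketched as follows; this uses textbook RSA with tiny primes and an XOR keystream as a stand-in for the VOS cipher, purely for illustration and with no real security:

```python
import hashlib

# Toy hybrid-encryption sketch. Textbook RSA with tiny primes and a
# hash-derived XOR keystream standing in for the VOS symmetric cipher.
# Illustrative only -- NOT secure, NOT the paper's scheme.
p, q, e = 61, 53, 17
n = p * q                          # public modulus
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def keystream_xor(data: bytes, key: int) -> bytes:
    # Symmetric stand-in: XOR the data with a keystream derived
    # from the session key (same function encrypts and decrypts).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

session_key = 1234                        # must be < n for toy RSA
wrapped = pow(session_key, e, n)          # wrap key with RSA public key
ciphertext = keystream_xor(b"secret data", session_key)

recovered_key = pow(wrapped, d, n)        # unwrap with RSA private key
plaintext = keystream_xor(ciphertext, recovered_key)
```

    Only the short session key pays the cost of the asymmetric operation; the bulk data goes through the fast symmetric cipher, which is the point of the hybrid architecture.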

  18. Agriculture and future riverine nitrogen export to US coastal regions: Insights from the Nutrient Export from WaterSheds Model

    EPA Science Inventory

    We examine contemporary (2000) and future (2030) estimates of coastal N loads in the continental US by the Nutrient Export from WaterSheds (NEWS) model. Future estimates are based on Millennium Ecosystem Assessment (MEA) scenarios and two additional scenarios that reflect “...

  19. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  20. 3-D and quasi-2-D discrete element modeling of grain commingling in a bucket elevator boot system

    USDA-ARS?s Scientific Manuscript database

    Unwanted grain commingling impedes new quality-based grain handling systems and has proven to be an expensive and time consuming issue to study experimentally. Experimentally validated models may reduce the time and expense of studying grain commingling while providing additional insight into detail...

  1. The May Center for Early Childhood Education: Description of a Continuum of Services Model for Children with Autism.

    ERIC Educational Resources Information Center

    Campbell, Susan; Cannon, Barbara; Ellis, James T.; Lifter, Karen; Luiselli, James K.; Navalta, Carryl P.; Taras, Marie

    1998-01-01

    Describes a comprehensive continuum of services model for children with autism developed by a human services agency in Massachusetts, which incorporates these and additional empirically based approaches. Service components, methodologies, and program objectives are described, including representative summary data. Best practice approaches toward…

  2. Evaluating Cognitive Theory: A Joint Modeling Approach Using Responses and Response Times

    ERIC Educational Resources Information Center

    Klein Entink, Rinke H.; Kuhn, Jorg-Tobias; Hornke, Lutz F.; Fox, Jean-Paul

    2009-01-01

    In current psychological research, the analysis of data from computer-based assessments or experiments is often confined to accuracy scores. Response times, although being an important source of additional information, are either neglected or analyzed separately. In this article, a new model is developed that allows the simultaneous analysis of…

  3. Semiparametric Item Response Functions in the Context of Guessing

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2016-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  4. Virtual Transgenics: Using a Molecular Biology Simulation to Impact Student Academic Achievement and Attitudes

    ERIC Educational Resources Information Center

    Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva

    2012-01-01

    The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach.…

  5. Review of the systems biology of the immune system using agent-based models.

    PubMed

    Shinde, Snehal B; Kurhekar, Manish P

    2018-06-01

    The immune system is an inherent protection system in vertebrate animals including human beings that exhibits properties such as self-organisation, self-adaptation, learning, and recognition. It interacts with other allied systems such as the gut and lymph nodes. Immune system modelling is needed to understand its complex internal mechanisms, how it maintains homoeostasis, and how it interacts with the other systems. There are two types of modelling techniques used for the simulation of features of the immune system: equation-based modelling (EBM) and agent-based modelling. Owing to certain shortcomings of the EBM, agent-based modelling techniques are being widely used. This technique provides various predictions for disease causes and treatments; it also helps in hypothesis verification. This study presents a review of agent-based modelling of the immune system and its interactions with the gut and lymph nodes. The authors also review the modelling of immune system interactions during tuberculosis and cancer. In addition, they also outline the future research directions for the immune system simulation through agent-based techniques such as the effects of stress on the immune system, evolution of the immune system, and identification of the parameters for a healthy immune system.
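
    The agent-based style contrasted with EBM here can be illustrated with a deliberately tiny sketch: discrete immune-cell agents patrolling a one-dimensional strip of tissue and clearing pathogens on contact. All names, sizes, and rules below are invented for illustration:

```python
# Minimal agent-based sketch: instead of differential equations for
# population averages (EBM), each immune cell is an explicit agent
# with its own position and a local interaction rule.
SIZE = 20                     # 1-D strip of tissue
pathogens = {3, 8, 15}        # pathogen positions
immune_cells = [0, 10, 19]    # immune-cell agent positions

steps = 0
while pathogens and steps < 100:
    # Each agent patrols one cell per step (deterministic sweep here;
    # a random walk would be equally valid for the sketch).
    immune_cells = [(pos + 1) % SIZE for pos in immune_cells]
    for pos in immune_cells:
        pathogens.discard(pos)            # clearance on contact
    steps += 1
```

    Emergent quantities such as time-to-clearance fall out of the individual interactions rather than being imposed by a rate equation.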

  6. Protein structure modeling for CASP10 by multiple layers of global optimization.

    PubMed

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  7. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  8. Transport link scanner: simulating geographic transport network expansion through individual investments

    NASA Astrophysics Data System (ADS)

    Jacobs-Crisioni, C.; Koopmans, C. C.

    2016-07-01

    This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of links and presumably contains those investment options that best meet a private operator's objectives by balancing the revenues of additional fare against construction costs. The investment options consist of geographically plausible routes with potential detours. These routes are generated using a fine-meshed regularly latticed network and shortest path finding methods. Additionally, two indicators of the geographic accuracy of the simulated networks are introduced. A historical case study is presented to demonstrate the model's first results. These results show that the modelled networks reproduce relevant results of the historically built network with reasonable accuracy.
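
    The route-generation step, finding the cheapest geographically plausible path over a fine-meshed lattice, can be sketched with a standard Dijkstra search; the grid and per-cell construction costs below are invented for illustration:

```python
import heapq

# Sketch: cheapest route over a small regular lattice, the kind of
# step used to generate candidate routes with potential detours.
# Per-cell construction costs are illustrative only.
cost = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
rows, cols = len(cost), len(cost[0])

def cheapest_route(start, goal):
    """Dijkstra over the grid; moving into a cell pays that cell's cost."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    node, path = goal, [goal]             # reconstruct the route
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

route, total = cheapest_route((0, 0), (0, 3))
```

    With the costs above, the search takes the detour through the bottom row (total cost 7) rather than the direct but expensive top-row link (cost 11), mirroring how plausible detours enter the choice set.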

  9. New paradigms in internal architecture design and freeform fabrication of tissue engineering porous scaffolds.

    PubMed

    Yoo, Dongjin

    2012-07-01

    Advanced additive manufacture (AM) techniques are now being developed to fabricate scaffolds with controlled internal pore architectures in the field of tissue engineering. In general, these techniques use a hybrid method which combines computer-aided design (CAD) with computer-aided manufacturing (CAM) tools to design and fabricate complicated three-dimensional (3D) scaffold models. The mathematical descriptions of micro-architectures along with the macro-structures of the 3D scaffold models are limited by current CAD technologies as well as by the difficulty of transferring the designed digital models to standard formats for fabrication. To overcome these difficulties, we have developed an efficient internal pore architecture design system based on triply periodic minimal surface (TPMS) unit cell libraries and associated computational methods to assemble TPMS unit cells into an entire scaffold model. In addition, we have developed a process planning technique based on TPMS internal architecture pattern of unit cells to generate tool paths for freeform fabrication of tissue engineering porous scaffolds. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
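
    As a concrete example of a TPMS unit cell, the common gyroid approximation g(x, y, z) = sin x cos y + sin y cos z + sin z cos x can be sampled on a grid and thresholded to define the solid phase; the level-set value t controls porosity. This is a generic sketch of the idea, not the paper's design system:

```python
import numpy as np

# Sample the gyroid, a classic TPMS used for scaffold unit cells:
#   g(x, y, z) = sin x cos y + sin y cos z + sin z cos x
# Taking the solid phase as g <= t, the threshold t tunes porosity;
# t = 0 splits the unit cell into two equal-volume phases.
t = 0.0
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
g = np.sin(X) * np.cos(Y) + np.sin(Y) * np.cos(Z) + np.sin(Z) * np.cos(X)

solid = g <= t
porosity = 1.0 - solid.mean()     # volume fraction of the pore phase
```

    Assembling many such unit cells, and extracting the g = t isosurface for fabrication, is then a meshing problem (e.g. marching cubes) rather than a CAD-modeling one.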

  10. Precise Modelling of Telluric Features in Astronomical Spectra

    NASA Astrophysics Data System (ADS)

    Seifahrt, A.; Käufl, H. U.; Zängl, G.; Bean, J.; Richter, M.; Siebenmorgen, R.

    2010-12-01

    Ground-based astronomical observations suffer from the disturbing effects of the Earth's atmosphere. Oxygen, water vapour and a number of atmospheric trace gases absorb and emit light at discrete frequencies, shaping observing bands in the near- and mid-infrared and leaving their fingerprints - telluric absorption and emission lines - in astronomical spectra. The standard approach of removing the absorption lines is to observe a telluric standard star: a time-consuming and often imperfect solution. Alternatively, the spectral features of the Earth's atmosphere can be modelled using a radiative transfer code, often delivering a satisfying solution that removes these features without additional observations. In addition the model also provides a precise wavelength solution and an instrumental profile.

  11. Microfluidic devices for modeling cell-cell and particle-cell interactions in the microvasculature

    PubMed Central

    Prabhakarpandian, Balabhaskar; Shen, Ming-Che; Pant, Kapil; Kiani, Mohammad F.

    2011-01-01

    Cell-fluid and cell-cell interactions are critical components of many physiological and pathological conditions in the microvasculature. Similarly, particle-cell interactions play an important role in targeted delivery of therapeutics to tissue. Development of in vitro fluidic devices to mimic these microcirculatory processes has been a critical step forward in our understanding of the inflammatory process, development of nano-particulate drug carriers, and developing realistic in vitro models of the microvasculature and its surrounding tissue. However, widely used parallel-plate flow-based devices and assays have a number of important limitations for studying the physiological conditions in vivo. In addition, these devices are resource-hungry and time consuming for performing various assays. Recently developed, more realistic, microfluidic-based devices have been able to overcome many of these limitations. In this review, an overview of the fluidic devices and their use in studying the effects of shear forces on cell-cell and cell-particle interactions is presented. In addition, use of mathematical models and Computational Fluid Dynamics (CFD) based models for interpreting the complex flow patterns in the microvasculature are highlighted. Finally, the potential of 3D microfluidic devices and imaging for better representing in vivo conditions under which cell-cell and cell-particle interactions take place are discussed. PMID:21763328

  12. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
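
    Several of these metrics reduce to simple counts over a building's mesh. A minimal sketch on a unit-cube "building" (illustrative geometry, not one of the tested city models):

```python
# Geometry-based metrics for one building model: a unit cube with
# faces listed as quad loops. Vertex index = 4*x + 2*y + z.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [
    (0, 1, 3, 2), (4, 5, 7, 6),    # x = 0, x = 1
    (0, 1, 5, 4), (2, 3, 7, 6),    # y = 0, y = 1
    (0, 2, 6, 4), (1, 3, 7, 5),    # z = 0, z = 1
]
# Undirected edge set derived from the face loops.
edges = {tuple(sorted((f[i], f[(i + 1) % len(f)])))
         for f in faces for i in range(len(f))}

n_v, n_e, n_f = len(verts), len(edges), len(faces)
vf_ratio = n_v / n_f               # vertex/face ratio metric
footprint_area = 1.0               # the cube's unit-square 2D footprint
```

    Averaging such counts over every building in a dataset gives the per-model metrics; a closed mesh can additionally be sanity-checked via the Euler characteristic V - E + F = 2.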

  13. Effect of nutrition education intervention based on Pender's Health Promotion Model in improving the frequency and nutrient intake of breakfast consumption among female Iranian students.

    PubMed

    Dehdari, Tahereh; Rahimi, Tahereh; Aryaeian, Naheed; Gohari, Mahmood Reza

    2014-03-01

    To determine the effectiveness of nutrition education intervention based on Pender's Health Promotion Model in improving the frequency and nutrient intake of breakfast consumption among female Iranian students. The quasi-experimental study based on Pender's Health Promotion Model was conducted during April-June 2011. Information (data) was collected by self-administered questionnaire. In addition, a 3 d breakfast record was analysed. P < 0·05 was considered significant. Two middle schools in average-income areas of Qom, Iran. One hundred female middle-school students. There was a significant reduction in immediate competing demands and preferences, perceived barriers and negative activity-related affect constructs in the experimental group after education compared with the control group. In addition, perceived benefit, perceived self-efficacy, positive activity-related affect, interpersonal influences, situational influences, commitment to a plan of action, frequency and intakes of macronutrients and most micronutrients of breakfast consumption were also significantly higher in the experimental group compared with the control group after the nutrition education intervention. Constructs of Pender's Health Promotion Model provide a suitable source for designing strategies and content of a nutrition education intervention for improving the frequency and nutrient intake of breakfast consumption among female students.

  14. Using Petri nets for experimental design in a multi-organ elimination pathway.

    PubMed

    Reshetova, Polina; Smilde, Age K; Westerhuis, Johan A; van Kampen, Antoine H C

    2015-08-01

    Genistein is a soy metabolite with estrogenic activity that may result in (un)favorable effects on human health. Elucidation of the mechanisms through which food additives such as genistein exert their beneficiary effects is a major challenge for the food industry. A better understanding of the genistein elimination pathway could shed light on such mechanisms. We developed a Petri net model that represents this multi-organ elimination pathway and which assists in the design of future experiments. Using this model we show that metabolic profiles solely measured in venous blood are not sufficient to uniquely parameterize the model. Based on simulations we suggest two solutions that provide better results: parameterize the model using gut epithelium profiles or add additional biological constrains in the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
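
    The core Petri net mechanics (token markings, transitions enabled by their input places) can be sketched in a few lines; the place and transition names below are invented stand-ins for the paper's much larger multi-organ net:

```python
# Toy Petri net: places hold tokens; a transition is enabled when each
# of its input places holds enough tokens, and firing moves tokens from
# inputs to outputs. Names are illustrative only.
marking = {"gut": 2, "blood": 0, "urine": 0}
transitions = {
    "absorb":  ({"gut": 1},   {"blood": 1}),   # (inputs, outputs)
    "excrete": ({"blood": 1}, {"urine": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= k for p, k in pre.items())

def fire(name):
    pre, post = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, k in pre.items():
        marking[p] -= k
    for p, k in post.items():
        marking[p] += k

fire("absorb"); fire("absorb"); fire("excrete")
```

    Simulating alternative firing sequences against such a net is what lets one ask, before running an experiment, whether a proposed set of measurements could distinguish the model's parameters.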

  15. Micropolar continuum modelling of bi-dimensional tetrachiral lattices

    PubMed Central

    Chen, Y.; Liu, X. N.; Hu, G. K.; Sun, Q. P.; Zheng, Q. S.

    2014-01-01

    The in-plane behaviour of tetrachiral lattices should be characterized by bi-dimensional orthotropic material owing to the existence of two orthogonal axes of rotational symmetry. Moreover, the constitutive model must also represent the chirality inherent in the lattices. To this end, a bi-dimensional orthotropic chiral micropolar model is developed based on the theory of irreducible orthogonal tensor decomposition. The obtained constitutive tensors display a hierarchy structure depending on the symmetry of the underlying microstructure. Eight additional material constants, in addition to five for the hemitropic case, are introduced to characterize the anisotropy under Z2 invariance. The developed continuum model is then applied to a tetrachiral lattice, and the material constants of the continuum model are analytically derived by a homogenization process. By comparing with numerical simulations for the discrete lattice, it is found that the proposed continuum model can correctly characterize the static and wave properties of the tetrachiral lattice. PMID:24808754

  16. CFD Code Development for Combustor Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    During the lifetime of this grant, work has been performed in the areas of model development, code development, code validation and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics and assumed PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models and turbulence-chemistry interaction methodology. All new models and code improvements were validated, and the code was applied to the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including the NOx post processing, assumed PDF model development and chemical kinetic development. It is expected that this work will continue under the new grant.

  17. Singlet model interference effects with high scale UV physics

    DOE PAGES

    Dawson, S.; Lewis, I. M.

    2017-01-06

    One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high energy physics that generates additional effective low energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high-mass resonances, S, exhibits an interesting structure and possibly large cancellations between the resonance contribution and the new EFT interactions, which invalidate conclusions based on the renormalizable singlet model alone.

  18. Quantum vision in three dimensions

    NASA Astrophysics Data System (ADS)

    Roth, Yehuda

    We present four models for describing 3-D vision. As in the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We treat the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that may be much easier to implement. Our quantum interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the desire to create a machine that exchanges information using human terminology, our interpretation approach seems appropriate.

  19. Rule-based modeling with Virtual Cell

    PubMed Central

    Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.

    2016-01-01

    Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model become too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network-free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user’s computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444

  20. Small-molecule ligand docking into comparative models with Rosetta

    PubMed Central

    Combs, Steven A; DeLuca, Samuel L; DeLuca, Stephanie H; Lemmon, Gordon H; Nannemann, David P; Nguyen, Elizabeth D; Willis, Jordan R; Sheehan, Jonathan H; Meiler, Jens

    2017-01-01

    Structure-based drug design is frequently used to accelerate the development of small-molecule therapeutics. Although substantial progress has been made in X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, the availability of high-resolution structures is limited owing to the frequent inability to crystallize or obtain sufficient NMR restraints for large or flexible proteins. Computational methods can be used to both predict unknown protein structures and model ligand interactions when experimental data are unavailable. This paper describes a comprehensive and detailed protocol using the Rosetta modeling suite to dock small-molecule ligands into comparative models. In the protocol presented here, we review the comparative modeling process, including sequence alignment, threading and loop building. Next, we cover docking a small-molecule ligand into the protein comparative model. In addition, we discuss criteria that can improve ligand docking into comparative models. Finally, and importantly, we present a strategy for assessing model quality. The entire protocol is presented on a single example selected solely for didactic purposes. The results are therefore not representative and do not replace benchmarks published elsewhere. We also provide an additional tutorial so that the user can gain hands-on experience in using Rosetta. The protocol should take 5–7 h, with additional time allocated for computer generation of models. PMID:23744289

  1. Computer-aided design of liposomal drugs: In silico prediction and experimental validation of drug candidates for liposomal remote loading.

    PubMed

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-10

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
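As an illustration of the kNN side of such QSPR screening, the sketch below classifies a candidate drug from two hypothetical, pre-scaled descriptors by majority vote among its nearest labeled neighbors. The descriptors, training points and labels are invented for illustration and are not the study's models or data.

```python
from math import dist

# Toy kNN classifier in the spirit of QSPR screening: each drug is a
# descriptor vector (two hypothetical, pre-scaled descriptors) labeled 1
# if it loaded remotely into liposomes with high efficiency, else 0.
train = [
    ((0.2, 0.9), 1), ((0.3, 0.8), 1), ((0.4, 0.85), 1),
    ((0.9, 0.1), 0), ((0.8, 0.2), 0), ((0.7, 0.15), 0),
]

def knn_predict(x, k=3):
    """Majority vote among the k nearest training drugs."""
    nearest = sorted(train, key=lambda t: dist(x, t[0]))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# A candidate close to the known good loaders is classified as a hit.
print(knn_predict((0.25, 0.88)))  # -> 1
print(knn_predict((0.85, 0.12)))  # -> 0
```

In a real screen the descriptors would be computed chemical properties and the training set would be experimentally measured loading efficiencies.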

  2. Computer-aided design of liposomal drugs: in silico prediction and experimental validation of drug candidates for liposomal remote loading

    PubMed Central

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-01

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs’ structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al, Journal of Controlled Release, 160(2012) 14–157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343

  3. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy

    PubMed Central

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-01-01

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties, needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework. PMID:28772504
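A minimal sketch of the overlapping-spheres idea, under invented sphere data: a point belongs to the pore if any sphere contains it, and a voxel count gives an approximate pore volume before such a geometry would be meshed for finite-element analysis.

```python
import itertools

# Two overlapping spheres (center, radius) standing in for a complex,
# elongated pore; the numbers are illustrative only.
spheres = [((0.0, 0.0, 0.0), 1.0), ((0.8, 0.0, 0.0), 0.8)]

def in_pore(p):
    """True if point p lies inside the union of the spheres."""
    return any(sum((a - b) ** 2 for a, b in zip(p, c)) <= r * r
               for c, r in spheres)

def pore_volume(h=0.1, lo=-2.0, hi=2.0):
    """Voxel-counting estimate of the union's volume (voxel size h)."""
    n = int(round((hi - lo) / h))
    axis = [lo + (i + 0.5) * h for i in range(n)]
    hits = sum(in_pore(p) for p in itertools.product(axis, repeat=3))
    return hits * h ** 3

print(in_pore((0.5, 0.0, 0.0)))  # -> True: inside both spheres
```

The same membership test is what a mesher would use to decide which elements of the polycrystalline model are void.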

  4. Will building new reservoirs always help increase the water supply reliability? - insight from a modeling-based global study

    NASA Astrophysics Data System (ADS)

    Zhuang, Y.; Tian, F.; Yigzaw, W.; Hejazi, M. I.; Li, H. Y.; Turner, S. W. D.; Vernon, C. R.

    2017-12-01

More and more reservoirs are being built or planned to help meet the increasing water demand all over the world. However, is building new reservoirs always helpful to water supply? To address this question, the river routing module of the Global Change Assessment Model (GCAM) has been extended with a simple yet physically based reservoir scheme accounting for irrigation, flood control and hydropower operations at each individual reservoir. The new GCAM river routing model has been applied over the global domain with runoff inputs from the Variable Infiltration Capacity model. The simulated streamflow is validated at 150 global river basins where observed streamflow data are available. The model performance is significantly improved at 77 basins and worsened at 35 basins. To facilitate the analysis of additional reservoir storage impacts at the basin level, a lumped version of the GCAM reservoir model has been developed, representing a single lumped reservoir at each river basin with the regulation capacity of all reservoirs combined. A Sequent Peak Analysis is used to estimate how much additional reservoir storage is required to satisfy the current water demand. For basins with a water deficit, the water supply reliability can be improved with additional storage. However, there is a threshold storage value at each basin beyond which the reliability stops increasing, suggesting that building new reservoirs will not further relieve the water stress. Findings from this research can be helpful to the future planning and management of new reservoirs.
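The Sequent Peak Analysis mentioned above can be sketched in a few lines: running the cumulative deficit of demand over inflow, reset to zero whenever inflow catches up, gives the smallest storage that keeps supply reliable. The monthly series below are invented volumes, not basin data from the study.

```python
def sequent_peak_storage(inflow, demand):
    """Sequent Peak Analysis: smallest reservoir storage that meets the
    demand series given the inflow series (same length, same units)."""
    deficit, required = 0.0, 0.0
    for q_in, q_dem in zip(inflow, demand):
        deficit = max(0.0, deficit + q_dem - q_in)  # cumulative shortfall
        required = max(required, deficit)
    return required

# Hypothetical monthly series: wet first half, dry second half.
inflow = [9, 8, 7, 6, 3, 2, 1, 1, 2, 4, 6, 8]
demand = [4] * 12
print(sequent_peak_storage(inflow, demand))  # -> 11.0
```

Sweeping the demand upward and re-running this calculation reproduces the threshold behavior described in the abstract: beyond some storage value, extra capacity no longer buys reliability.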

  5. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy.

    PubMed

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-02-08

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties, needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework.

  6. Application research of 3D additive manufacturing technology in the nail shell

    NASA Astrophysics Data System (ADS)

    Xiao, Shanhua; Yan, Ruiqiang; Song, Ning

    2018-04-01

Based on an analysis of hierarchical slicing algorithms, 3D scanning of an enterprise's nail-shell product file was carried out, point-cloud data processing was performed on the source file, and the surface modeling and innovative design of the nail shell were completed. Layered samples were then 3D-printed on a MakerBot Replicator 2X printer, providing reverse-modeling and rapid-prototyping support for the development of new nail products.

  7. Laser Scattering from the Dense Plasma Focus.

    DTIC Science & Technology

…plasma focus (DPF) illuminated by a pulse of laser light. Scattering was observable from 10 nanoseconds prior to arrival of the collapse on axis and for an additional 50 nanoseconds. The frequency spectrum is markedly asymmetric about the laser frequency, a feature which is inconsistent with spectral expectations based on thermal particle distributions even if particle drifts or wave excitations are included. A model is postulated which attributes the asymmetry to lateral displacement of the scattering region from the axis of the focus. Analysis based on this model yields…

  8. Establishing Alpha Oph as a Prototype Rotator: Improved Astrometric Orbit

    DTIC Science & Technology

    2011-01-10

…astrometric characterization of the companion orbit. We also use photometry from these observations to derive a model-based estimate of the companion mass. …uncertainties. In addition to the dynamically derived masses, we use IJHK photometry to derive a model-based mass for α Oph B, of 0.77 ± 0.05 M… (…man 1966; Gatewood 2005) with an 8.62 yr period, well established over several decades of monitoring and first resolved by McCarthy (1983). But a…

  9. Application of multimedia models for screening assessment of long-range transport potential and overall persistence.

    PubMed

    Klasmeier, Jörg; Matthies, Michael; Macleod, Matthew; Fenner, Kathrin; Scheringer, Martin; Stroebe, Maximilian; Le Gall, Anne Christine; Mckone, Thomas; Van De Meent, Dik; Wania, Frank

    2006-01-01

We propose a multimedia model-based methodology to evaluate whether a chemical substance qualifies as POP-like based on overall persistence (Pov) and potential for long-range transport (LRTP). It relies upon screening chemicals against the Pov and LRTP characteristics of selected reference chemicals with well-established environmental fates. Results indicate that chemicals of high and low concern in terms of persistence and long-range transport can be consistently identified by eight contemporary multimedia models using the proposed methodology. Model results for three hypothetical chemicals illustrate that the model-based classification of chemicals according to Pov and LRTP is not always consistent with the single-media half-life approach proposed by the UNEP Stockholm Convention and that the models provide additional insight into the likely long-term hazards associated with chemicals in the environment. We suggest this model-based classification method be adopted as a complement to screening against defined half-life criteria at the initial stages of tiered assessments designed to identify POP-like chemicals and to prioritize further environmental fate studies for new and existing chemicals.
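A toy version of the reference-chemical benchmarking the abstract describes: a chemical is flagged as POP-like when its overall persistence (Pov) and long-range transport potential (LRTP) both reach the lowest values found among known POP reference chemicals. The reference names and numbers below are invented placeholders, not output of the eight models.

```python
# Hypothetical (Pov in days, LRTP in km) values for known POP references.
references = {
    "ref_pop_A": (180.0, 2000.0),
    "ref_pop_B": (250.0, 3500.0),
}

# Screening cutoffs: the least persistent / least mobile known POP.
pov_cut = min(p for p, _ in references.values())
lrtp_cut = min(l for _, l in references.values())

def pop_like(pov, lrtp):
    """Flag a chemical whose Pov and LRTP both reach POP territory."""
    return pov >= pov_cut and lrtp >= lrtp_cut

print(pop_like(300.0, 4000.0))  # -> True
print(pop_like(40.0, 500.0))    # -> False
```

In practice the Pov and LRTP values would come from running the candidate and the references through the same multimedia model, so that model-specific biases cancel.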

  10. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
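The constrained formulation the abstract compares against can be made concrete with the classical doubly-constrained, entropy-maximizing trip distribution, T_ij = A_i O_i B_j D_j exp(-beta c_ij), solved by iterative balancing. The origin totals, destination totals and costs below are invented.

```python
import math

origins = [100.0, 200.0]                 # trips produced at each origin
dests = [150.0, 150.0]                   # trips attracted to each destination
cost = [[1.0, 2.0], [2.0, 1.0]]          # travel cost c_ij
beta = 0.5                               # cost-sensitivity parameter
f = [[math.exp(-beta * cost[i][j]) for j in range(2)] for i in range(2)]

A, B = [1.0, 1.0], [1.0, 1.0]            # balancing factors
for _ in range(50):                      # iterate until totals are matched
    A = [1.0 / sum(B[j] * dests[j] * f[i][j] for j in range(2))
         for i in range(2)]
    B = [1.0 / sum(A[i] * origins[i] * f[i][j] for i in range(2))
         for j in range(2)]

T = [[A[i] * origins[i] * B[j] * dests[j] * f[i][j] for j in range(2)]
     for i in range(2)]
print(round(sum(T[0]), 3), round(T[0][0] + T[1][0], 3))  # -> 100.0 150.0
```

The dependence formulation described in the abstract replaces these explicit row and column constraints with estimated dependence coefficients, but the balanced matrix above is the standard baseline it is shown to be equivalent to.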

  11. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  12. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  13. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator fault. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters is ceased inside the dead-zone region, which results in a tracking error while preserving the system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, validity and performance of the new schemes have been illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Insecure attachment style as a vulnerability factor for depression: recent findings in a community-based study of Malay single and married mothers.

    PubMed

    Abdul Kadir, Nor Ba'yah; Bifulco, Antonia

    2013-12-30

The role of marital breakdown in women's mental health is of key concern in Malaysia and internationally. A cross-sectional questionnaire study of married and separated/divorced and widowed women examined insecure attachment style as an associated risk factor for depression among 1002 mothers in an urban community in Malaysia. A previous report replicated a UK-based vulnerability-provoking agent model of depression, involving negative evaluation of self (NES) and negative elements in close relationships (NECRs) interacting with severe life events. This article reports on the additional contribution of insecure attachment style to the model using the Vulnerable Attachment Style Questionnaire (VASQ). The results showed that VASQ scores were highly correlated with NES, NECR and depression. A multiple regression analysis of depression with backward elimination found that VASQ scores had a significant additional effect. Group comparisons showed different risk patterns for single and married mothers. NES was the strongest risk factor for both groups, with the 'anxious style' subset of the VASQ being the best additional predictor for married mothers and the total VASQ score (general attachment insecurity) for single mothers. The findings indicate that attachment insecurity adds to a psychosocial vulnerability model of depression among mothers cross-culturally and is important in understanding and identifying risk. © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Quantitative photoacoustic imaging in the acoustic regime using SPIM

    NASA Astrophysics Data System (ADS)

    Beigl, Alexander; Elbau, Peter; Sadiq, Kamran; Scherzer, Otmar

    2018-05-01

While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with variable sound speed and material density. In this paper we present an approach for photoacoustic imaging which, in addition to the recovery of the absorption density parameter, the imaging parameter of standard photoacoustics, also allows us to reconstruct the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters based on a linearized model built on single plane illumination microscopy (SPIM) techniques.

  16. State-based versus reward-based motivation in younger and older adults.

    PubMed

    Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd

    2014-12-01

Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed similar behavior to younger adults, our model-fitting results suggested that this was due to the utilization of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age-related declines in the neural structures needed for more computationally demanding model-based decision making.
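At the choice step, hybrid reinforcement-learning models of this kind reduce to mixing a model-based and a model-free value with a weight w (the "model-based strategy use" parameter reported as lower in older adults) and feeding the mixture through a softmax choice rule. The values, weights and temperature below are illustrative only, not the study's fitted parameters.

```python
import math

def hybrid_values(q_mb, q_mf, w):
    """Weighted mix of model-based and model-free option values."""
    return [w * mb + (1 - w) * mf for mb, mf in zip(q_mb, q_mf)]

def softmax_choice_prob(values, beta=3.0):
    """Softmax choice probabilities with inverse temperature beta."""
    exps = [math.exp(beta * v) for v in values]
    s = sum(exps)
    return [e / s for e in exps]

q_mb = [0.2, 0.9]   # planning favors the state-improving option (index 1)
q_mf = [0.8, 0.3]   # cached reward favors the immediately rewarding one
young = softmax_choice_prob(hybrid_values(q_mb, q_mf, w=0.8))
older = softmax_choice_prob(hybrid_values(q_mb, q_mf, w=0.2))
print(young[1] > 0.5 > older[1])  # -> True: high w prefers option 1
```

Fitting w per participant to observed choices is what yields the "strategy use" estimates the abstract correlates with memory measures.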

  17. Clinical implications and economic impact of accuracy differences among commercially available blood glucose monitoring systems.

    PubMed

    Budiman, Erwin S; Samant, Navendu; Resch, Ansgar

    2013-03-01

Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. We simulated the additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated an annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM patients (105,000 for T2DM MDI patients), for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, when the least accurate BGM system is used instead of the most accurate one. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of a BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.
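A deliberately simplified version of such a simulation: proportional meter error is injected into "true" glucose values, and hypoglycemic readings (< 70 mg/dL) that the meter would miss are counted for two error levels. The error magnitudes and the glucose distribution are invented and are not the study's model.

```python
import random

def missed_hypos(cv, n=10_000, seed=1):
    """Count true hypoglycemic values (< 70 mg/dL) that a meter with
    proportional error of coefficient of variation cv reads as >= 70."""
    rng = random.Random(seed)
    missed = 0
    for _ in range(n):
        true_bg = rng.uniform(50, 180)           # mg/dL, uniform for the toy
        measured = true_bg * rng.gauss(1.0, cv)  # proportional meter error
        if true_bg < 70 <= measured:
            missed += 1
    return missed

accurate, inaccurate = missed_hypos(0.05), missed_hypos(0.15)
print(inaccurate > accurate)  # -> True: the noisier meter misses more
```

Attaching an intervention cost to each missed episode, as the study does with literature-derived figures, turns the count difference into an economic estimate.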

  18. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also to estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
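The unscented transformation at the heart of the proposed approach can be illustrated in the scalar case: a small, deterministic set of sigma points is propagated through a nonlinearity and re-weighted to recover the output mean and variance (hence bounds), with no random sampling. The function and parameters below are illustrative, not the paper's battery model.

```python
import math

def unscented_transform(mean, var, g, kappa=2.0):
    """Scalar unscented transform of x ~ N(mean, var) through g."""
    n = 1                                  # state dimension
    spread = math.sqrt((n + kappa) * var)  # sigma-point offset
    pts = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)               # center weight
    wi = 1.0 / (2 * (n + kappa))           # outer-point weight
    ys = [g(p) for p in pts]
    y_mean = w0 * ys[0] + wi * (ys[1] + ys[2])
    y_var = (w0 * (ys[0] - y_mean) ** 2
             + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
    return y_mean, y_var

# Sanity check: for a linear map the transform is exact
# (mean 2m + 1, variance 4 * var).
m, v = unscented_transform(1.0, 0.25, lambda x: 2 * x + 1)
print(round(m, 6), round(v, 6))  # -> 3.0 1.0
```

Because the sigma points are fixed functions of the input statistics, repeated predictions are deterministic, which is what makes the uncertainty bounds reproducible at decision time.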

  19. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and physical dimensions. Now, this model has been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.

  20. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data are currently available both for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  1. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data are currently available both for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  2. Integrating GLL-Weibull Distribution Within a Bayesian Framework for Life Prediction of Shape Memory Alloy Spring Undergoing Thermo-mechanical Fatigue

    NASA Astrophysics Data System (ADS)

    Kundu, Pradeep; Nath, Tameshwer; Palani, I. A.; Lad, Bhupesh K.

    2018-06-01

The present paper tackles an important but unmapped problem: the reliability estimation of smart materials. First, an experimental setup is developed for accelerated life testing of shape memory alloy (SMA) springs. A novel approach based on the generalized log-linear Weibull (GLL-Weibull) distribution is then developed for SMA spring life estimation. Applied stimulus (voltage), elongation and cycles of operation are used as inputs for the life prediction model. The values of the parameter coefficients of the model provide better interpretability compared to artificial-intelligence-based life prediction approaches. In addition, the model also considers the effect of operating conditions, making it generic for a range of operating conditions. Moreover, a Bayesian framework is used to continuously update the prediction with the actual degradation value of the springs, thereby reducing the uncertainty in the data and improving the prediction accuracy. Finally, the deterioration of the material with the number of cycles is also investigated using thermogravimetric analysis and scanning electron microscopy.
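The flavor of a log-linear Weibull life model can be sketched as follows: the characteristic life depends log-linearly on the applied stimulus (voltage), and reliability at a given cycle count follows the Weibull survival function. The coefficients below are invented for illustration, not fitted to the paper's SMA data.

```python
import math

def eta(voltage, a=12.0, b=-0.8):
    """Characteristic life in cycles, log-linear in the applied voltage
    (hypothetical coefficients a, b)."""
    return math.exp(a + b * voltage)

def reliability(cycles, voltage, shape=2.0):
    """Weibull survival probability at the given cycle count."""
    return math.exp(-((cycles / eta(voltage)) ** shape))

# Higher voltage accelerates failure: survival at 10,000 cycles drops.
print(reliability(10_000, 3.0) > reliability(10_000, 5.0))  # -> True
```

The Bayesian step described in the abstract would then treat a, b and the shape parameter as distributions and sharpen them as degradation measurements of each spring arrive.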

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCulloch, M; Polan, D; Feng, M

Purpose: Previous studies have shown that radiotherapy treatment for liver metastases causes marked liver hypertrophy in areas receiving low dose and atrophy/fibrosis in areas receiving high dose. The purpose of this work is to develop and evaluate a biomechanical-model-based dose-response model to describe these liver responses to SBRT. Methods: In this retrospective study, a biomechanical-model-based deformable registration algorithm, Morfeus, was expanded to include dose-based boundary conditions. Liver and tumor volumes were contoured on the planning images and CT/MR images three months post-RT and converted to finite element models. A thermal-expansion-based relationship correlating the delivered dose and volume response was generated from 22 previously treated patients. This coefficient, combined with the planned dose, was applied as an additional boundary condition to describe the volumetric response of the liver in an additional cohort of metastatic liver patients treated with SBRT. The accuracy of the model was evaluated based on overall volumetric liver comparisons and the target registration error (TRE), using the average deviations in positions of identified vascular bifurcations on each set of registered images, with a target accuracy of the 2.5 mm isotropic dose grid (vector dimension 4.3 mm). Results: The thermal expansion coefficient models the volumetric change of the liver to within 3%. Morfeus with dose-expansion boundary conditions achieved a TRE of 5.7±2.8 mm, compared to 11.2±3.7 mm using rigid registration and 8.9±0.28 mm using Morfeus with only spatial boundary conditions. Conclusion: A biomechanical model has been developed to describe the volumetric and spatial response of the liver to SBRT. This work will enable improved correlation of functional imaging with delivered dose and the mapping of the delivered dose from one treatment onto the planning images for a subsequent treatment, and will further provide information to assist with the biological characterization of patients' response to radiation.

  4. Online decision support based on modeling with the aim of increased irrigation efficiency

    NASA Astrophysics Data System (ADS)

    Dövényi-Nagy, Tamás; Bakó, Károly; Molnár, Krisztina; Rácz, Csaba; Vasvári, Gyula; Nagy, János; Dobos, Attila

    2015-04-01

    The significant changes in the structure of ownership and control of irrigation infrastructure in the past decades resulted in a decrease of the total irrigable and irrigated area (Szilárd, 1999). In this paper, the development of a model-based online service is described whose aim is to aid reasonable irrigation practice and increase water use efficiency. In order to establish a scientific background for irrigation, an agrometeorological station network has been built up by the Agrometeorological and Agroecological Monitoring Centre. A website has been launched to provide local agricultural producers direct access to both the measured weather parameters and the results of model-based calculations. The public site provides information for general use; registered partners get a handy model-based toolkit for decision support at the plot level concerning irrigation, plant protection or frost forecasting. The reference station network, established in recent years, is distributed to cover most of the irrigated cropland areas of Hungary. Spatially, the stations have been deployed mainly in Eastern Hungary, where irrigation infrastructure is concentrated. The stations' locations have been carefully chosen to represent their environment in terms of soil, climatic and topographic factors, thereby assuring relevant and up-to-date input data for the models. The measured parameters range from classic meteorological data (air temperature, relative humidity, solar irradiation, wind speed, etc.) to specific data not available from other services in the region, such as soil temperature, soil water content at multiple depths and leaf wetness. In addition to the basic grid of reference stations, specific stations under irrigated conditions have been deployed to calibrate and validate the models.
    A specific modeling framework (MetAgro) has been developed to allow the integration of several publicly available models and algorithms adapted to the local climate (Rácz et al., 2013). The service, the server-side framework, scripts and the front-end, providing access to the measured and modeled data, are based on in-house developments or freely available and/or open-source software and services such as Apache, PHP, MySQL and the Google Maps API. MetAgro is intended to serve three different areas of usage: research, education and practice. Users in these areas differ in educational background, knowledge of the models and access to relevant input data. The system and its interfaces must reflect these differences, which is accomplished by graceful degradation of the modeling: choosing the location of the farm and the crop already gives some general results, but each additional parameter supplied makes the results more reliable. MetAgro provides a basis for improved decision-making with regard to irrigation on cropland. Based on experience and feedback, the online application has proved useful in the design and practice of reasonable irrigation. In addition to its use in irrigation practice, MetAgro is also a valuable tool for research and education.

  5. Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage

    NASA Astrophysics Data System (ADS)

    Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing

    2018-02-01

    With the advances in x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential for in vivo imaging, with the advantage of faster imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light according to the intensity distribution of the x-rays within the object, which does not completely reflect the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that, compared with the traditional forward model based on x-ray intensity, the proposed dose-based model improves the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model performs better at distinguishing closely spaced targets, demonstrating its advantage in improving spatial resolution.

  6. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical for hydrocarbon systems. The model is based on the generalised model of the combustion process with chain-branching reaction combined with the one-stage reaction describing the thermal mode of flame propagation with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in flame and leads to the change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With the increase of inhibitor the transition of chain-branching mode of reaction to the reaction with straight-chains (non-branching chain reaction) is observed. The inhibition part of the model includes a block of three reactions to describe the influence of the inhibitor. The heat losses are incorporated into the model via Newton cooling. The flame extinction is the result of the decreased heat release of inhibited reaction processes and the suppression of radical overshoot with the further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot, inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of chemical influence of inhibitor, and (3) transition to thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.
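A caricature of the inhibition mechanism described above: a single logistic radical balance whose net branching rate is reduced linearly by the inhibitor concentration. All rate constants are invented for illustration, not values from the detailed kinetic model:

```python
def radical_level(c_inhibitor, k_branch=5.0, k_inhib=2.0,
                  r0=1e-3, t_end=10.0, dt=1e-3):
    # net branching rate: positive -> chain-branching mode (radical
    # overshoot to a plateau); negative -> inhibited, radicals die out
    g = k_branch - k_inhib * c_inhibitor
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * g * r * (1.0 - r)   # logistic caricature of radical pool
    return r
```

Sweeping `c_inhibitor` reproduces the qualitative transition the abstract describes: the radical plateau shrinks with added inhibitor until the branching mode is suppressed entirely.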

  7. Performance Investigation of FSO-OFDM Communication Systems under the Heavy Rain Weather

    NASA Astrophysics Data System (ADS)

    Rashidi, Florence; He, Jing; Chen, Lin

    2017-12-01

    The challenge in free-space optical (FSO) communication is the propagation of the optical signal through different atmospheric conditions such as rain, snow and fog. In this paper, an orthogonal frequency-division multiplexing (OFDM) technique is proposed for the FSO communication system. Considering rain attenuation models based on the Marshall & Palmer and Carbonneau models, the performance of the OFDM-based FSO communication system is evaluated under heavy-rain conditions in Changsha, China. The simulation results show that, under a heavy rainfall of 106.18 mm/h, with an attenuation factor of 7 dB/km based on the Marshall & Palmer model, data at bit rates of 2.5 and 4.0 Gbps can be transmitted over FSO channels of 1.6 and 1.3 km, respectively, with a bit error rate of less than 1E-4. In addition, the rain attenuation predicted by the Marshall & Palmer model affects the FSO communication system less than that predicted by the Carbonneau model.
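Both attenuation models are commonly written as power laws A = a·R^b in the rain rate R. The coefficients below are widely quoted values and are assumptions of this sketch, not taken from the paper; with them, the Marshall & Palmer form reproduces roughly the 7 dB/km quoted for 106.18 mm/h:

```python
def rain_attenuation_db_per_km(rain_rate_mm_h, model="marshall_palmer"):
    # power-law specific attenuation A = a * R**b (dB/km);
    # coefficient pairs are commonly quoted values, assumed here
    a, b = {"marshall_palmer": (0.365, 0.63),
            "carbonneau": (1.076, 0.67)}[model]
    return a * rain_rate_mm_h ** b
```

Consistent with the abstract, the Carbonneau coefficients predict substantially higher attenuation than the Marshall & Palmer form at the same rain rate.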

  8. Machine learning models for lipophilicity and their domain of applicability.

    PubMed

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-01-01

    Unfavorable lipophilicity and water solubility cause many drug failures; these properties therefore have to be taken into account early in lead discovery. Commercial tools for predicting lipophilicity have usually been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm--a Gaussian process model--this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the preceding months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.
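A minimal Gaussian-process regressor with an RBF kernel shows where model-based error bars come from: the predictive variance shrinks near training data and reverts to the prior far away, which is exactly what a domain-of-applicability estimate exploits. Unit signal variance and the toy 1-D data are assumptions of this sketch:

```python
import numpy as np

def gp_predict(X, y, X_star, length=1.0, noise=0.1):
    # GP regression with an RBF kernel (unit signal variance assumed)
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))
    Ks = k(X_star, X)
    mean = Ks @ np.linalg.solve(K, y)
    # predictive variance = prior variance minus data-explained variance
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X)                              # toy stand-in for log D data
mean, sd = gp_predict(X, y, np.array([0.5, 10.0]))
```

The query at 0.5 sits inside the training range and gets a tight error bar; the query at 10.0 is far outside it and the error bar reverts to the prior, flagging the prediction as outside the model's domain of applicability.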

  9. An experimental determination in Calspan Ludwieg tube of the base environment of the integrated space shuttle vehicle at simulated Mach 4.5 flight conditions (test IH5 of model 19-OTS)

    NASA Technical Reports Server (NTRS)

    Drzewiecki, R. F.; Foust, J. W.

    1976-01-01

    A model test program was conducted to determine heat transfer and pressure distributions in the base region of the space shuttle vehicle during simulated launch trajectory conditions of Mach 4.5 and pressure altitudes between 90,000 and 210,000 feet. Model configurations with and without the solid propellant booster rockets were examined to duplicate pre- and post-staging vehicle geometries. Using short duration flow techniques, a tube wind tunnel provided supersonic flow over the model. Simultaneously, combustion generated exhaust products reproduced the gasdynamic and thermochemical structure of the main vehicle engine plumes. Heat transfer and pressure measurements were made at numerous locations on the base surfaces of the 19-OTS space shuttle model with high response instrumentation. In addition, measurements of base recovery temperature were made indirectly by using dual fine wire and resistance thermometers and by extrapolating heat transfer measurements.

  10. Empowering Effective STEM Role Models to Promote STEM Equity in Local Communities

    NASA Astrophysics Data System (ADS)

    Harte, T.; Taylor, J.

    2017-12-01

    Empowering Effective STEM Role Models, a three-hour training developed and successfully implemented by NASA Langley Research Center's Science Directorate, is an effort to encourage STEM professionals to serve as role models within their community. The training is designed to help participants reflect on their identity as a role model and provide research-based strategies to effectively engage youth, particularly girls, in STEM (science, technology, engineering, and mathematics). Research shows that even though girls and boys do not demonstrate a significant difference in their ability to be successful in mathematics and science, there is a significant difference in their confidence level when participating in STEM subject matter and pursuing STEM careers. The Langley training model prepares professionals to disrupt this pattern and take on the habits and skills of effective role models. The training model is based on other successful models and resources for role modeling in STEM including SciGirls; the National Girls Collaborative; and publications by the American Association of University Women and the National Academies. It includes a significant reflection component, and participants walk through situation-based scenarios to practice a focused suite of research-based strategies. These strategies can be implemented in a variety of situations and adapted to the needs of groups that are underrepresented in STEM fields. Underpinning the training and the discussions is the fostering of a growth mindset and promoting perseverance. "The Power of Yet" becomes a means whereby role models encourage students to believe in themselves, working toward reaching their goals and dreams in the area of STEM. To provide additional support, NASA Langley role model trainers are available to work with a champion at other organizations to facilitate the training. 
This champion helps recruit participants, seeks leadership buy-in, and helps provide valuable insights for needs and interests specific to the organization. After the in-person training experience, participants receive additional follow-up support by working with their local champions and the NASA Langley trainers. The goal is to share the role model training model in an effort to empower STEM role models and assist in promoting STEM Equity in all communities.

  11. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based approaches for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods for estimating the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.
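A one-step receding-horizon controller for a scalar linear plant illustrates the core MPC idea of optimizing predicted future behaviour; the plant (a = 1.2, deliberately unstable) and the weight `rho` are invented for illustration and have nothing to do with the identified EXTRAP T2R model:

```python
def mpc_step(x, a=1.2, b=1.0, rho=0.1):
    # minimise the one-step predicted cost (a*x + b*u)**2 + rho*u**2
    # over the input u; the closed-form minimiser is:
    return -a * b * x / (b ** 2 + rho)

x = 1.0                       # initial state of the unstable plant
for _ in range(30):           # receding-horizon loop
    u = mpc_step(x)           # re-optimise at every step
    x = 1.2 * x + 1.0 * u     # apply only the first planned input
```

Even with a horizon of one, re-optimizing at every step stabilizes the open-loop-unstable plant; a realistic multi-mode MPC extends the same recipe to a state-space model and a longer horizon.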

  12. Mathematical modeling of ethanol production in solid-state fermentation based on the solid medium's dry weight variation.

    PubMed

    Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad

    2018-04-21

    In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been done based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the suitability of the solid-medium weight variation method for modeling volatile product formation in solid-state fermentation. In addition, the logistic model yielded better predictions.
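The logistic growth kinetics mentioned above have a closed form; the sketch below couples it to growth-associated product formation, which is a simplifying assumption of this sketch (all parameter values are invented, not the paper's fits):

```python
import math

def logistic_biomass(t, x0=0.1, x_max=10.0, mu=0.3):
    # closed-form solution of dX/dt = mu * X * (1 - X / x_max)
    e = math.exp(mu * t)
    return x0 * x_max * e / (x_max - x0 + x0 * e)

def ethanol(t, yield_px=0.45, x0=0.1, **kw):
    # growth-associated product formation: P = Y_PX * (X(t) - X0)
    return yield_px * (logistic_biomass(t, x0=x0, **kw) - x0)
```

Biomass saturates at the carrying capacity `x_max` while ethanol rises in proportion to the biomass formed, which is the shape both kinetic variants in the abstract aim to capture during the exponential phase.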

  13. A Physically-Based and Distributed Tool for Modeling the Hydrological and Mechanical Processes of Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Noto, L. V.; Dialynas, Y. G.; Caracciolo, D.; Bras, R. L.

    2015-12-01

    This work presents the capabilities of the tRIBS-VEGGIE-Landslide model in two versions: one developed within a probabilistic framework and one coupled with a root-cohesion module. The probabilistic version treats geotechnical and soil retention curve parameters as random variables across the basin and estimates theoretical probability distributions of slope stability and the associated "factor of safety" commonly used to describe the occurrence of shallow landslides. The derived distributions are used to obtain the spatio-temporal dynamics of the probability of failure, conditioned on soil moisture dynamics at each watershed location. The framework has been tested in the Luquillo Experimental Forest (Puerto Rico), where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of the probability of failure. The other version of the model accounts for the additional cohesion exerted by vegetation roots. The approach uses the Fiber Bundle Model (FBM) framework, which evaluates root strength as a function of the stress-strain relationships of bundles of fibers. The model requires knowledge of the root architecture to evaluate the additional reinforcement from each root diameter class; the root architecture is represented with a branching topology model based on Leonardo's rule. The methodology has been tested on a simple case study to explore the role of both hydrological and mechanical root effects. Results demonstrate that the effects of root water uptake can at times be more significant than the mechanical reinforcement, and that the additional resistance provided by roots depends heavily on the vegetation root structure and length.
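The "factor of safety" with an additive root-cohesion term can be sketched with the standard infinite-slope formula; the parameter values here are illustrative SI defaults, not calibrated to the Luquillo site, and the full model evaluates `c_root` from the FBM rather than taking it as a constant:

```python
import math

def factor_of_safety(slope_deg, depth=1.0, gamma=18e3, c_soil=2e3,
                     c_root=0.0, phi_deg=30.0, pore_pressure=0.0):
    # infinite-slope stability: resisting shear strength (cohesion +
    # frictional term, with roots adding cohesion) over driving stress
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    resisting = c_soil + c_root + (normal_stress - pore_pressure) * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

The two competing effects from the abstract show up directly: root cohesion raises the factor of safety, while rising pore pressure (wetter soil) lowers it.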

  14. A statistical approach to develop a detailed soot growth model using PAH characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael

    A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, together with additional reactions obtained from quantum chemistry calculations, is used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles of PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules with their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments that describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally, the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.

  15. Cost-effectiveness of omalizumab add-on to standard-of-care therapy in patients with uncontrolled severe allergic asthma in a Brazilian healthcare setting.

    PubMed

    Suzuki, Cibele; Lopes da Silva, Nilceia; Kumar, Praveen; Pathak, Purnima; Ong, Siew Hwa

    2017-08-01

    Omalizumab add-on to standard-of-care therapy has proven to be efficacious in severe asthma patients whose exacerbations cannot be controlled otherwise. Moreover, evidence from different healthcare settings suggests reduced healthcare resource utilization with omalizumab. Based on these findings, this study aimed to assess the cost-effectiveness of adding omalizumab to standard-of-care therapy in patients with uncontrolled severe allergic asthma in a Brazilian healthcare setting. A previously published Markov model was adapted using Brazil-specific unit costs to compare the costs and outcomes of omalizumab added to standard-of-care therapy vs standard-of-care therapy alone. Model inputs were largely based on the eXpeRience study. Costs and health outcomes were calculated over a lifetime horizon and discounted annually at 5%. Both one-way and probabilistic sensitivity analyses were performed. An additional cost of R$280,400 for 5.20 additional quality-adjusted life-years was estimated with the addition of omalizumab to standard-of-care therapy, resulting in an incremental cost-effectiveness ratio of R$53,890. One-way sensitivity analysis indicated that discount rates, standard-of-care exacerbation rates, and exacerbation-related mortality rates had the largest impact on the incremental cost-effectiveness ratio. Assumptions of lifetime treatment adherence and of future exacerbation rates being independent of previous events might affect the findings. The lack of Brazilian patients in the eXpeRience study may also affect the findings, although the sample size and baseline characteristics suggest that the modeled population closely resembles Brazilian severe allergic asthma patients.
Results indicate that omalizumab as an add-on therapy is more cost-effective than standard-of-care therapy alone for Brazilian patients with uncontrolled severe allergic asthma, based on the World Health Organization's cost-effectiveness threshold of up to 3-times the gross domestic product.
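The incremental cost-effectiveness ratio is simply the incremental cost divided by the incremental QALYs. A quick check against the figures quoted above (the small difference from the reported R$53,890 presumably reflects rounding inside the published model outputs):

```python
def icer(delta_cost, delta_qaly):
    # incremental cost-effectiveness ratio: extra cost per QALY gained
    return delta_cost / delta_qaly

ratio = icer(280_400, 5.20)   # figures quoted in the abstract
```

Comparing `ratio` against a willingness-to-pay threshold (here, up to three times GDP per capita) is what drives the cost-effectiveness conclusion.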

  16. A magnetic model for low/hard state of black hole binaries

    NASA Astrophysics Data System (ADS)

    Ye, Yong-Chun; Wang, Ding-Xiong; Huang, Chang-Yin; Cao, Xiao-Feng

    2016-03-01

    A magnetic model for the low/hard state (LHS) of two black hole X-ray binaries (BHXBs), H1743-322 and GX 339-4, is proposed based on transport of the magnetic field from a companion into an accretion disk around a black hole (BH). This model consists of a truncated thin disk with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising phase of the LHS. In addition, the association of the LHS with a quasi-steady jet is modeled based on transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from BH and inner ADAF. It turns out that the steep radio/X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted based on our model.

  17. A Bayesian compound stochastic process for modeling nonstationary and nonhomogeneous sequence evolution.

    PubMed

    Blanquart, Samuel; Lartillot, Nicolas

    2006-11-01

    Variations of nucleotidic composition affect phylogenetic inference conducted under stationary models of evolution. In particular, they may cause unrelated taxa sharing similar base composition to be grouped together in the resulting phylogeny. To address this problem, we developed a nonstationary and nonhomogeneous model accounting for compositional biases. Unlike previous nonstationary models, which are branchwise, that is, assume that base composition only changes at the nodes of the tree, in our model, the process of compositional drift is totally uncoupled from the speciation events. In addition, the total number of events of compositional drift distributed across the tree is directly inferred from the data. We implemented the method in a Bayesian framework, relying on Markov Chain Monte Carlo algorithms, and applied it to several nucleotidic data sets. In most cases, the stationarity assumption was rejected in favor of our nonstationary model. In addition, we show that our method is able to resolve a well-known artifact. By Bayes factor evaluation, we compared our model with 2 previously developed nonstationary models. We show that the coupling between speciations and compositional shifts inherent to branchwise models may lead to an overparameterization, resulting in a lesser fit. In some cases, this leads to incorrect conclusions, concerning the nature of the compositional biases. In contrast, our compound model more flexibly adapts its effective number of parameters to the data sets under investigation. Altogether, our results show that accounting for nonstationary sequence evolution may require more elaborate and more flexible models than those currently used.

  18. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  19. ION Configuration Editor

    NASA Technical Reports Server (NTRS)

    Borgen, Richard L.

    2013-01-01

    The configuration of ION (Interplanetary Overlay Network) network nodes is a manual task that is complex, time-consuming, and error-prone. This program seeks to accelerate this job and produce reliable configurations. The ION Configuration Editor is a model-based smart editor based on Eclipse Modeling Framework technology. An ION network designer uses this Eclipse-based GUI to construct a data model of the complete target network and then generate configurations. The data model is captured in an XML file. Intrinsic editor features aid in achieving model correctness, such as field fill-in, type-checking, lists of valid values, and suitable default values. Additionally, an explicit "validation" feature executes custom rules to catch more subtle model errors. A "survey" feature provides a set of reports providing an overview of the entire network, enabling a quick assessment of the model's completeness and correctness. The "configuration" feature produces the main final result, a complete set of ION configuration files (eight distinct file types) for each ION node in the network.

  20. A physiologically based model for tramadol pharmacokinetics in horses.

    PubMed

    Abbiati, Roberto Andrea; Cagnardi, Petra; Ravasio, Giuliano; Villa, Roberto; Manca, Davide

    2017-09-21

    This work proposes the application of a minimal-complexity physiologically based pharmacokinetic model to predict tramadol concentration vs time profiles in horses. Tramadol is an opioid analgesic also used in veterinary treatment. Researchers and medical doctors can profit from the application of mathematical models as supporting tools to optimize the pharmacological treatment of animal species. The proposed model is based on physiology but adopts the minimal compartmental architecture necessary to describe the experimental data. The model features a system of ordinary differential equations, where most of the model parameters are either assigned or individualized for a given horse using literature data and correlations. The remaining parameters, whose values are unknown, are regressed against the experimental data. The model proved capable of simulating pharmacokinetic profiles accurately. In addition, it provides further insights into unobservable tramadol data, for instance tramadol concentration in the liver or the extent of hepatic metabolism and renal excretion.
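A deliberately minimal caricature of such a disposition model: one compartment with competing first-order hepatic and renal elimination, which is how a lumped model can split "unobservable" metabolism from excretion. The rate constants are invented, not the horse-specific parameters of the paper:

```python
import math

def tramadol_disposition(dose_mg, k_hep=0.30, k_ren=0.10, t=4.0):
    # competing first-order elimination routes (rates in 1/h, assumed)
    k = k_hep + k_ren                       # total elimination rate
    remaining = dose_mg * math.exp(-k * t)  # amount left in the body
    eliminated = dose_mg - remaining
    metabolized = eliminated * k_hep / k    # hepatic share
    excreted = eliminated * k_ren / k       # renal share
    return remaining, metabolized, excreted
```

Mass balance closes by construction, and the hepatic/renal split falls out of the rate-constant ratio; a full PBPK model refines this by giving the liver and kidney their own physiologically parameterized compartments.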

  1. School-Based Management: Promise and Process. CPRE Finance Briefs.

    ERIC Educational Resources Information Center

    Wohlstetter, Priscilla; Mohrman, Susan Albers

    This publication summarizes research that investigated how school-based management (SBM) can be implemented for long-term school improvement. It is argued that a successful SBM plan must be part of a quest for improvement and utilize a "high involvement" model. In addition to having more power, schools need knowledge of the organization,…

  2. Collaborative data model and data base development for paleoenvironmental and archaeological domain using Semantic MediaWiki

    NASA Astrophysics Data System (ADS)

    Willmes, C.

    2017-12-01

    Within the Collaborative Research Centre 806 (CRC 806), an interdisciplinary research project that must manage data, information and knowledge from heterogeneous domains such as archaeology, the cultural sciences and the geosciences, a collaborative internal knowledge base system was developed. The system is based on the open source MediaWiki software, best known as the software behind Wikipedia, which provides a web-based collaborative knowledge and information management platform. It is enhanced with the Semantic MediaWiki (SMW) extension, which allows structured data to be stored and managed within the wiki platform and provides complex query and API interfaces to the structured data held in the SMW database. An additional open source tool called mobo improves the data model development process and supports automated data imports, from small spreadsheets to large relational databases. Mobo is a command line tool that helps build and deploy SMW structures in an agile, schema-driven development style; the data model formalizations, expressed in JSON-Schema format, can be managed and collaboratively developed using version control systems such as git. The combination of a well-equipped collaborative web platform (MediaWiki), the ability to store and query structured data in a collaborative database (SMW), and automated data import and data model development (mobo) results in a powerful yet flexible collaborative knowledge base system. Furthermore, SMW supports Semantic Web technology: the structured data can be exported to RDF, so a triple store with a SPARQL endpoint can be set up on top of the database, and the JSON-Schema based data models can be extended to JSON-LD to profit from Linked Data technology.
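
As a hedged sketch of the schema-driven workflow described above (the field names below are invented, not taken from the CRC 806 data model, and the validator is a toy substitute for the full JSON-Schema validation that mobo performs):

```python
import json

# Hypothetical JSON-Schema fragment for a site record; real CRC 806 /
# mobo schemas are richer and follow the full JSON-Schema specification.
schema = {
    "title": "SiteRecord",
    "type": "object",
    "required": ["name", "latitude", "longitude"],
    "properties": {
        "name": {"type": "string"},
        "latitude": {"type": "number"},
        "longitude": {"type": "number"},
        "epoch": {"type": "string"},
    },
}

TYPES = {"string": str, "number": (int, float)}

def validate(record, schema):
    """Minimal required-field and type check (not full JSON-Schema)."""
    missing = [k for k in schema["required"] if k not in record]
    bad_type = [k for k, v in record.items()
                if k in schema["properties"]
                and not isinstance(v, TYPES[schema["properties"][k]["type"]])]
    return not missing and not bad_type

# A record as it might arrive from an automated import
record = json.loads('{"name": "Example Cave", "latitude": 35.2, "longitude": -3.6}')
print(validate(record, schema))  # True
```

Versioning such schema files in git, as mobo encourages, lets the data model evolve collaboratively alongside the wiki content.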

  3. Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model

    NASA Astrophysics Data System (ADS)

    Wilton, Darren C.

    The goal of this dissertation is to develop models capable of predicting long-term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign conducted as part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables computed over various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was then incorporated into the land use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions; the improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70) than for models based on all seasons (R²=0.20). A Lagrangian dispersion model was then used to convert static land use covariates for population density and commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average of daily peak-hour ozone was included as a surrogate for the oxidation of NO2 during the different sampling periods; this additional covariate further improved the performance of all models. The best model by 10-fold cross-validation (R²=0.73) contained the Caline3 prediction, a static covariate for the length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from population density and industrial/commercial land area, and the ozone covariate. This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed-site monitors, and performed very well (R²=0.82).

  4. Modelling of a holographic interferometry based calorimeter for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Beigzadeh, A. M.; Vaziri, M. R. Rashidian; Ziaie, F.

    2017-08-01

    In this research work, a model for predicting the behaviour of holographic interferometry based calorimeters for radiation dosimetry is introduced. Using this technique for radiation dosimetry, via measuring the variation of refractive index due to the energy deposited by radiation, has several considerable advantages, such as extreme sensitivity and the ability to work without the usual temperature sensors, which disturb the radiation field. We have shown that the results of our model are in good agreement with experiments performed by other researchers under the same conditions. The model also reveals that these types of calorimeters have the additional and considerable merit of transforming the dose distribution into a set of discernible interference fringes.
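
The dose-to-fringe chain can be sketched with textbook relations; the constants below are standard values for water and the fringe formula N = |Δn|·L/λ is generic interferometry, not the paper's own (more detailed) model, so treat this as an order-of-magnitude sketch:

```python
# Back-of-the-envelope chain from absorbed dose to interference fringes.
dose = 10.0            # absorbed dose, Gy (= J/kg)
c_p = 4186.0           # specific heat of water, J/(kg K)
dn_dT = -1.0e-4        # thermo-optic coefficient of water, 1/K (approx.)
path = 0.05            # optical path length through the medium, m
wavelength = 632.8e-9  # He-Ne laser, m

delta_T = dose / c_p                        # temperature rise from dose
delta_n = dn_dT * delta_T                   # refractive-index change
fringes = abs(delta_n) * path / wavelength  # fringe shift N = |dn| L / lambda

print(f"dT = {delta_T:.2e} K, dn = {delta_n:.2e}, N = {fringes:.3f} fringes")
```

Even a 10 Gy dose produces only a millikelvin-scale temperature rise, which is why the extreme sensitivity of interferometric readout matters for this kind of calorimetry.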

  5. Gravitational field models GEM 3 and 4

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Wagner, C. A.; Putney, B. H.; Sandson, M. L.; Brownd, J. E.; Richardson, J. A.; Taylor, W. A.

    1972-01-01

    A refinement in the satellite geopotential solution for a Goddard Earth Model (GEM 3) was obtained. The solution includes the addition of two low inclination satellites, SAS at 3 deg and PEOLE at 15 deg, and is based upon 27 close earth satellites containing some 400,000 observations of electronic, laser, and optical data. In addition, a new combination satellite/gravimetry solution (GEM 4) was derived. The new model includes 61 center of mass tracking station locations with data from GRARR, Laser, MOTS, Baker-Nunn, and NWL Tranet Doppler tracking sites. Improvement was obtained for the zonal coefficients of the new models and is shown by tests on the long period perturbations of the orbits. Individual zonal coefficients agree very closely among different models that contain low inclination satellites. Tests of models with surface gravity data show that the GEM 3 satellite model has significantly better agreement with the gravimetry data than the GEM 1 satellite model, and that it also has better agreement with the gravimetry data than the 1969 SAO Standard Earth 2 model.

  6. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.

  7. An inventory model with random demand

    NASA Astrophysics Data System (ADS)

    Mitsel, A. A.; Kritski, O. L.; Stavchuk, L. G.

    2017-01-01

    The article describes a three-product inventory model with random demand and equal delivery frequencies. A feature of this model is that additional purchases of resources are made only to the extent of their deficit, which reduces storage costs. A simulation based on data on the arrival of raw materials at an enterprise in Kazakhstan has been prepared. The proposed model is shown to enable savings of up to 40.8% of working capital.
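
A minimal sketch of the deficit-only ordering rule described above, with invented stock levels and demand distributions (the paper's actual model and data are not reproduced here):

```python
import random

random.seed(7)

# At each delivery, the order covers only the shortfall between random
# demand and stock on hand, rather than a fixed lot size. All figures
# are invented for illustration.
stock = {"raw_a": 100.0, "raw_b": 80.0, "raw_c": 60.0}
mean_demand = {"raw_a": 90.0, "raw_b": 85.0, "raw_c": 70.0}

for period in range(3):
    for item in stock:
        demand = random.gauss(mean_demand[item], 10.0)
        deficit = max(0.0, demand - stock[item])      # buy only what is missing
        stock[item] = stock[item] + deficit - demand  # = max(stock - demand, 0)
        print(period, item, f"order={deficit:.1f}", f"stock={stock[item]:.1f}")
```

Because orders never exceed the deficit, leftover stock (and hence storage cost) stays low, at the price of carrying no safety margin.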

  8. Rational F-theory GUTs without exotics

    NASA Astrophysics Data System (ADS)

    Krippendorf, Sven; Peña, Damián Kaloni Mayorga; Oehlmann, Paul-Konstantin; Ruehle, Fabian

    2014-07-01

    We construct F-theory GUT models without exotic matter, leading to the MSSM matter spectrum with potential singlet extensions. The interplay of engineering explicit geometric setups, absence of four-dimensional anomalies, and realistic phenomenology of the couplings places severe constraints on the allowed local models in a given geometry. In constructions based on the spectral cover we find no model satisfying all these requirements. We then provide a survey of models with additional U(1) symmetries arising from rational sections of the elliptic fibration in toric constructions and obtain phenomenologically appealing models based on SU(5) tops. Furthermore we perform a bottom-up exploration beyond the toric section constructions discussed in the literature so far and identify benchmark models passing all our criteria, which can serve as a guideline for future geometric engineering.

  9. A reduced order electrochemical and thermal model for a pouch type lithium ion polymer battery with LiNixMnyCo1-x-yO2/LiFePO4 blended cathode

    NASA Astrophysics Data System (ADS)

    Li, Xueyan; Choe, Song-Yul; Joe, Won Tae

    2015-10-01

    LiNixMnyCo1-x-yO2 (NMC) and LiFePO4 (LFP) have been widely employed as cathode materials for cells designed for high power applications. However, NMC needs further improvements in rate capability and stability, which can be accomplished by blending it with LFP. The working mechanism of blended cells is complex and difficult to understand. In addition, characteristics of the blended cells, particularly the plateau and path dependence of LFP materials, make it extremely difficult to estimate the state of charge and state of health using classical electric equivalent circuit models. Therefore, a reduced order model based on electrochemical and thermal principles is developed with real-time applications in mind and validated against experimental data collected from a large format pouch type lithium ion polymer battery. The LFP model is based on a shrinking core model with a moving boundary and is then integrated into the NMC model. Model responses, including SOC estimates and current and voltage responses, are compared with experiments under CC/CV charging and CC discharging at different current rates and temperatures. In addition, the model is used to analyze the effects of the mass ratio between the two materials on terminal voltage and heat generation rate.

  10. Understanding the drug release mechanism from a montmorillonite matrix and its binary mixture with a hydrophilic polymer using a compartmental modelling approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.

    2018-03-01

    Drug release from a montmorillonite (MMT) matrix is governed by a complex mechanism controlled by the swelling of MMT and the interaction between drug and MMT. The aim of this research was to identify a suitable model of the drug release mechanism from MMT, and from its binary mixture with a hydrophilic polymer, in controlled release formulations based on a compartmental modelling approach. Theophylline was used as a model drug and incorporated into MMT, and into a binary mixture with hydroxypropyl methylcellulose (HPMC) as the hydrophilic polymer, by a kneading method. A dissolution test was performed and the drug release was modelled with the WinSAAM software. Two models were proposed based on the swelling capability and basal spacing compartments of MMT. The models were evaluated for goodness of fit and statistical parameters and validated by a cross-validation technique. Drug release from the MMT matrix is regulated by a burst release of unloaded drug, the swelling ability and basal spacing of the MMT compartments, and the equilibrium between the basal spacing and swelling compartments. Furthermore, the addition of HPMC to the MMT system altered the behaviour of the swelling compartment and the equilibrium between the swelling and basal spacing compartments. In addition, the hydrophilic polymer reduced the burst release of unloaded drug.

  11. Phase-field-based lattice Boltzmann model for incompressible binary fluid systems with density and viscosity contrasts.

    PubMed

    Zu, Y Q; He, S

    2013-04-01

    A lattice Boltzmann model (LBM) is proposed based on the phase-field theory to simulate incompressible binary fluids with density and viscosity contrasts. Unlike many existing diffuse interface models which are limited to density matched binary fluids, the proposed model is capable of dealing with binary fluids with moderate density ratios. A new strategy for projecting the phase field to the viscosity field is proposed on the basis of the continuity of viscosity flux. The new LBM utilizes two lattice Boltzmann equations (LBEs): one for the interface tracking and the other for solving the hydrodynamic properties. The LBE for interface tracking can recover the Cahn-Hilliard equation without any additional terms, while the LBE for hydrodynamic properties can recover the exact form of the divergence-free incompressible Navier-Stokes equations, avoiding spurious interfacial forces. A series of 2D and 3D benchmark tests have been conducted for validation, including a rigid-body rotation, stationary and moving droplets, a spinodal decomposition, a buoyancy-driven bubbly flow, a layered Poiseuille flow, and the Rayleigh-Taylor instability. It is shown that the proposed method can track the interface with high accuracy and stability and can significantly and systematically reduce the parasitic current across the interface. Comparisons with momentum-based models indicate that the newly proposed velocity-based model can better satisfy the incompressible condition in the flow fields, and eliminate or reduce the velocity fluctuations in the higher-pressure-gradient region and, therefore, achieve a better numerical stability. In addition, the test of a layered Poiseuille flow demonstrates that the proposed scheme for mixture viscosity performs significantly better than the traditional mixture viscosity methods.

  12. Nature and prevalence of non-additive toxic effects in industrially relevant mixtures of organic chemicals.

    PubMed

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2009-06-01

    The concentration addition (CA) and the independent action (IA) models are widely used for predicting mixture toxicity based on its composition and individual component dose-response profiles. However, the prediction based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of the deviation (d): additive (d ≤ 10%, class-I), moderately (10% < d ≤ 30%, class-II), highly (30% < d ≤ 50%, class-III) and very highly (d > 50%, class-IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures exhibited additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture showed the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on the partition coefficient, molecular connectivity index and relative concentration of the components.
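
The deviation-based classification scheme maps directly to a small function; this is a restatement of the scheme described in the abstract, not the authors' code:

```python
def classify_deviation(d):
    """Map percent deviation of predicted from measured mixture toxicity
    to the paper's classes; the sign distinguishes synergism (+) from
    antagonism (-)."""
    magnitude = abs(d)
    if magnitude <= 10:
        return "class-I (additive)"
    kind = "synergistic" if d > 0 else "antagonistic"
    if magnitude <= 30:
        return f"class-II (moderately {kind})"
    if magnitude <= 50:
        return f"class-III (highly {kind})"
    return f"class-IV (very highly {kind})"

print(classify_deviation(7))    # class-I (additive)
print(classify_deviation(-42))  # class-III (highly antagonistic)
print(classify_deviation(65))   # class-IV (very highly synergistic)
```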

  13. Atmospheric radiation modeling of galactic cosmic rays using LRO/CRaTER and the EMMREM model with comparisons to balloon and airline based measurements

    NASA Astrophysics Data System (ADS)

    Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.

    2016-09-01

    We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO) together with the Badhwar-O'Neil model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.

  14. Additive manufacturing: From implants to organs.

    PubMed

    Douglas, Tania S

    2014-05-12

    Additive manufacturing (AM) constructs 3D objects layer by layer under computer control from 3D models. 3D printing is one example of this kind of technology. AM offers geometric flexibility in its products and therefore allows customisation to suit individual needs. Clinical success has been shown with models for surgical planning, implants, assistive devices and scaffold-based tissue engineering. The use of AM to print tissues and organs that mimic nature in structure and function remains an elusive goal, but has the potential to transform personalised medicine, drug development and scientific understanding of the mechanisms of disease. 

  15. A reference Pelton turbine design

    NASA Astrophysics Data System (ADS)

    Solemslie, B. W.; Dahlhaug, O. G.

    2012-09-01

    The designs of hydraulic turbines are usually close kept corporation secrets. Therefore, the possibility of innovation and co-operation between different academic institutions regarding a specific turbine geometry is difficult. A Ph.D.-project at the Waterpower Laboratory, NTNU, aim to design several model Pelton turbines where all measurements, simulations, the design strategy, design software in addition to the physical model will be available to the public. In the following paper a short description of the methods and the test rig that are to be utilized in the project are described. The design will be based on empirical data and NURBS will be used as the descriptive method for the turbine geometry. In addition CFX and SPH simulations will be included in the design process. Each turbine designed and produced in connection to this project will be based on the experience and knowledge gained from the previous designs. The first design will be based on the philosophy to keep a near constant relative velocity through the bucket.

  16. New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2008-01-01

    A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically-based cohesive properties within the finite element region near the growing crack. Another set of CZVEs are then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.

  17. Integrating predictive information into an agro-economic model to guide agricultural planning

    NASA Astrophysics Data System (ADS)

    Block, Paul; Zhang, Ying; You, Liangzhi

    2017-04-01

    Seasonal climate forecasts can inform long-range planning, including water resources utilization and allocation; however, quantifying the value of this information for the economy is often challenging. For rain-fed farmers, skillful season-ahead predictions may lead to better planning than business-as-usual strategies, resulting in additional benefits or reduced losses. In this study, regional-level probabilistic precipitation forecasts of the major rainy season in Ethiopia are fed into an agro-economic model, adapted from the International Food Policy Research Institute, to evaluate economic outcomes (GDP, poverty rates, etc.) against a no-forecast approach. Based on forecasted conditions, farmers can select various actions: adjusting crop area and crop type, purchasing drought-resistant seed, or applying additional fertilizer. Preliminary results favor the forecast-based approach, particularly through crop area reallocation.
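
The decision logic behind forecast-based planning can be sketched as an expected-payoff maximization; the probabilities, actions and payoffs below are invented for illustration and are not taken from the study:

```python
# Pick the farm action with the highest expected payoff under a
# probabilistic seasonal rainfall forecast (tercile probabilities).
forecast = {"dry": 0.5, "normal": 0.3, "wet": 0.2}

# Payoff (arbitrary units) of each action under each rainfall outcome.
payoffs = {
    "business_as_usual":      {"dry": 20, "normal": 60, "wet": 70},
    "drought_resistant_seed": {"dry": 45, "normal": 50, "wet": 50},
    "extra_fertilizer":       {"dry": 10, "normal": 70, "wet": 90},
}

def expected(action):
    return sum(forecast[s] * payoffs[action][s] for s in forecast)

best = max(payoffs, key=expected)
print(best, round(expected(best), 1))  # drought_resistant_seed 47.5
```

With a dry-leaning forecast the drought-resistant option wins even though it is never the best single-outcome choice, which is exactly the kind of reallocation the study evaluates at the economy level.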

  18. Enlarged leukocyte referent libraries can explain additional variance in blood-based epigenome-wide association studies.

    PubMed

    Kim, Stephanie; Eliot, Melissa; Koestler, Devin C; Houseman, Eugene A; Wetmur, James G; Wiencke, John K; Kelsey, Karl T

    2016-09-01

    We examined whether variation in blood-based epigenome-wide association studies could be more completely explained by augmenting existing reference DNA methylation libraries. We compared existing and enhanced libraries in predicting variability in three publicly available 450K methylation datasets based on whole-blood samples. Models were fit separately to each CpG site and used to estimate the additional variability explained when adjustments for cell composition were made with each library. Calculation of the mean difference in the CpG-specific residual sums of squares between models for an arthritis, an aging and a metabolic syndrome dataset indicated that the enhanced library explained significantly more variation across all three datasets (p < 10⁻³). Pathologically important immune cell subtypes can explain important variability in epigenome-wide association studies done in blood.

  19. Graph-based sensor fusion for classification of transient acoustic signals.

    PubMed

    Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal

    2015-03-01

    Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munitions (e.g., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally, our proposed algorithm is less sensitive to insufficient training samples than competing approaches.

  20. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about the mechanisms behind regulation in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration using nonparametrically fitted nonlinear additive autoregressive models with external inputs. We consider measurements from healthy persons and from patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models describe short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residual analysis points to a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
