Sample records for "additive model based"

  1. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  2. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.
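
    A minimal sketch of the pffr() interface described above, on simulated data; the grid sizes, covariate names and effect shapes below are hypothetical, not taken from the paper:

      library(refund)
      set.seed(1)
      n <- 60
      tg <- seq(0, 1, length.out = 30)     # grid of the functional response
      sg <- seq(0, 1, length.out = 20)     # grid of the functional covariate
      X <- matrix(rnorm(n * 20), n, 20)    # functional covariate observed on sg
      z <- rnorm(n)                        # scalar covariate
      # functional response: smooth intercept + index-varying effect of z + noise
      Y <- outer(rep(1, n), sin(2 * pi * tg)) + z %o% tg +
        matrix(rnorm(n * 30, sd = 0.1), n, 30)
      dat <- data.frame(z = z); dat$Y <- Y; dat$X <- X
      fit <- pffr(Y ~ ff(X, xind = sg) + z, yind = tg, data = dat)
      summary(fit)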

  3. A simulations approach for meta-analysis of genetic association studies based on additive genetic model.

    PubMed

    John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping

    2018-06-01

    Meta-analysis of genetic association studies is increasingly being used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. We establish this fact about the additive model in this paper using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.
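
    A hedged base-R sketch of the simulation-based strategy (not the authors' exact procedure): individual-level data are simulated per study under an additive model, and per-allele estimates are pooled by inverse-variance weighting:

      set.seed(1)
      simulate_study <- function(n, p, beta) {
        g <- rbinom(n, 2, p)                  # genotype coded 0/1/2 (additive)
        y <- beta * g + rnorm(n)              # phenotype with unit residual SD
        co <- summary(lm(y ~ g))$coefficients
        c(est = co["g", "Estimate"], se = co["g", "Std. Error"])
      }
      res <- t(replicate(10, simulate_study(n = 500, p = 0.3, beta = 0.2)))
      w <- 1 / res[, "se"]^2                  # fixed-effect inverse-variance weights
      c(pooled = sum(w * res[, "est"]) / sum(w), se = sqrt(1 / sum(w)))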

  4. 3D model of filler melting with micro-beam plasma arc based on additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Yang, Tao; Yang, Ruixin

    2017-07-01

    Additive manufacturing is a systematic process based on the discrete-accumulation principle, driven by the dimensions of the part. Addressing the dimensional mathematical model and the slicing problems in the additive manufacturing process, the constitutive relations between micro-beam plasma welding parameters and the dimensions of the part were investigated. The slicing algorithm and slicing strategy were also studied based on the dimensional characteristics. Using a direct slicing algorithm based on the geometric characteristics of the model, a hollow thin-walled spherical part was fabricated by 3D additive manufacturing using a micro-beam plasma arc.

  5. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
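
    The additivity idea itself is simple arithmetic; a toy illustration (all fractions and property values hypothetical) of predicting a composite property as the mass-fraction-weighted sum over grain-size fractions:

      frac <- c(fine = 0.25, medium = 0.40, gravel = 0.35)  # mass fractions, sum to 1
      site <- c(fine = 12.0, medium = 5.0, gravel = 1.5)    # reactive site conc. per fraction
      composite_site <- sum(frac * site)                    # additivity-model prediction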

  6. Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    The paper describes the application concept of additive manufactured dies to support the robot-based incremental sheet metal forming process ('Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of the adhesion to the sheet surface and the general stability as well as the influence on the geometric accuracy compared to the well-known forming strategies.

  7. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
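
    A sketch of the component-selection recipe under stated assumptions (splines and grpreg available; the adaptive weighting step is omitted for brevity, so this is plain group Lasso rather than the full two-stage adaptive procedure):

      library(splines)
      library(grpreg)
      set.seed(1)
      n <- 200; p <- 10
      X <- matrix(runif(n * p), n, p)
      y <- sin(2 * pi * X[, 1]) + 4 * (X[, 2] - 0.5)^2 + rnorm(n, sd = 0.3)
      B <- do.call(cbind, lapply(1:p, function(j) bs(X[, j], df = 6)))  # B-spline bases
      grp <- rep(1:p, each = 6)                 # one group of coefficients per component
      cvfit <- cv.grpreg(B, y, group = grp, penalty = "grLasso")
      selected <- unique(grp[coef(cvfit)[-1] != 0])   # indices of nonzero components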

  8. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three codings of method VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis.
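
    The two parameterizations compared above reduce to different formulas for the residual standard deviation around a model prediction f; a small sketch (the symbols a and b for the additive and proportional components are assumed notation):

      sd_var <- function(f, a, b) sqrt(a^2 + (b * f)^2)   # method VAR: variances add
      sd_sd  <- function(f, a, b) a + b * f               # method SD: SDs add
      f <- c(0.1, 1, 10)
      cbind(f, VAR = sd_var(f, a = 0.2, b = 0.15), SD = sd_sd(f, a = 0.2, b = 0.15))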

  9. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
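
    The underlying deterministic arithmetic is of the consumption-times-concentration form; a toy illustration (food groups and numbers invented, not EFSA values):

      consumption <- c(beverages = 250, desserts = 80)     # g/day per food group
      level       <- c(beverages = 120, desserts = 300)    # additive level, mg/kg
      exposure_mg_day <- sum(consumption / 1000 * level)   # summed over food groups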

  10. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

    Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
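
    The marker-based relationship matrices these models rest on can be built in a few lines; a base-R sketch of the common VanRaden-style additive matrix (a standard construction, not necessarily the exact one used in the paper):

      vanraden_A <- function(M) {      # M: n x m marker matrix coded 0/1/2
        p <- colMeans(M) / 2           # allele frequencies
        Z <- sweep(M, 2, 2 * p)        # center columns by allele frequency
        Z %*% t(Z) / (2 * sum(p * (1 - p)))
      }
      set.seed(1)
      M <- matrix(rbinom(50 * 200, 2, 0.4), 50, 200)
      G <- vanraden_A(M)               # additive genomic relationship matrix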

  11. Versatility of Cooperative Transcriptional Activation: A Thermodynamical Modeling Analysis for Greater-Than-Additive and Less-Than-Additive Effects

    PubMed Central

    Frank, Till D.; Carmody, Aimée M.; Kholodenko, Boris N.

    2012-01-01

    We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive responses when
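
    A toy Shea-Ackers-style occupancy calculation in the same spirit (all binding weights hypothetical), showing how a three-body interaction term can produce a greater-than-additive response:

      activity <- function(P, A, B, wA, wB, wAB) {
        bound   <- P * (1 + A * wA + B * wB + A * B * wA * wB * wAB)
        unbound <- 1 + A + B + A * B
        bound / (bound + unbound)      # probability RNA polymerase is bound
      }
      base    <- activity(0.1, 0, 0, 5, 5, 3)
      gain_A  <- activity(0.1, 1, 0, 5, 5, 3) - base
      gain_B  <- activity(0.1, 0, 1, 5, 5, 3) - base
      gain_AB <- activity(0.1, 1, 1, 5, 5, 3) - base
      gain_AB > gain_A + gain_B        # TRUE: greater-than-additive synergy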

  12. An electrical circuit model for additive-modified SnO2 ceramics

    NASA Astrophysics Data System (ADS)

    Karami Horastani, Zahra; Alaei, Reza; Karami, Amirhossein

    2018-05-01

    In this paper, an electrical circuit model for additive-modified metal oxide ceramics based on their physical structures and electrical resistivities is presented. The model predicts the resistance of the sample at different additive concentrations and different temperatures. To evaluate the model, two types of composite ceramics, SWCNT/SnO2 with SWCNT concentrations of 0.3, 0.6, 1.2, 2.4 and 3.8 wt%, and Ag/SnO2 with Ag concentrations of 0.3, 0.5, 0.8 and 1.5 wt%, were prepared and their electrical resistances versus temperature were experimentally measured. It is shown that the experimental data are in good agreement with the results obtained from the model. The proposed model can be used in the design process of ceramic-based gas sensors, and it also clarifies the role of the additive in the gas-sensing process of additive-modified metal oxide gas sensors. Furthermore, the model can be used in system-level modeling of designs in which these sensors are present.
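
    As a hedged illustration of the circuit-model idea (not the paper's actual network), a two-path parallel-conduction mixing rule in which the additive phase and the host oxide contribute conductances weighted by the additive fraction x:

      R_mix <- function(x, R_additive, R_matrix) {
        1 / (x / R_additive + (1 - x) / R_matrix)   # parallel conduction paths
      }
      x <- c(0.003, 0.006, 0.012, 0.024, 0.038)     # additive weight fractions
      R_mix(x, R_additive = 1e2, R_matrix = 1e6)    # resistances are hypothetical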

  13. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t}, where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
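
    A surface F(x, t) summed over the observation grid can be fitted with mgcv's summation convention, where matrix covariates inside te() are evaluated column-wise and summed; a sketch on simulated data (grid, sizes and effect are invented, and this is a generic stand-in rather than the refund implementation):

      library(mgcv)
      set.seed(1)
      n <- 150; nt <- 25
      tg <- seq(0, 1, length.out = nt)
      X  <- matrix(rnorm(n * nt), n, nt)             # functional covariate X(t)
      Tm <- matrix(tg, n, nt, byrow = TRUE)          # matching grid matrix
      y  <- rbinom(n, 1, plogis(rowMeans(2 * sin(X))))
      fit <- gam(y ~ te(X, Tm), family = binomial)   # sum over t of F(X[i,t], t)
      summary(fit)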

  14. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumed that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.

  15. Genomic Model with Correlation Between Additive and Dominance Effects.

    PubMed

    Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres

    2018-05-09

    Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has been often suggested that the magnitude of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominant

  16. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and fine spatial resolution base maps is smaller than the difference between coarse spatial resolution base maps and fine spatial resolution base maps. The appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps to fine spatial resolution downscaled soil maps at the farm scale.
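
    A compact sketch of the downscaling step under stated assumptions (mgcv available; covariate names such as ndvi, the weights, and all numbers are hypothetical): a weighted GAM is calibrated against fine-resolution covariates, then predicted on the fine grid:

      library(mgcv)
      set.seed(1)
      coarse <- data.frame(x = runif(300), y = runif(300),
                           ndvi = runif(300), w = runif(300, 0.5, 1))
      coarse$K <- 80 + 40 * coarse$ndvi + 10 * sin(6 * coarse$x) + rnorm(300, sd = 5)
      fit <- gam(K ~ s(ndvi) + s(x, y), data = coarse, weights = w)
      fine <- data.frame(x = runif(2000), y = runif(2000), ndvi = runif(2000))
      fine$K_downscaled <- predict(fit, newdata = fine)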

  17. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
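
    The model combinations share this generic Monte Carlo shape; a sketch for one food group (distribution parameters invented for illustration): intake times presence probability times concentration, whose upper quantiles are then compared against conservative and 'true' benchmarks:

      set.seed(1)
      n <- 1e4
      intake  <- rlnorm(n, meanlog = log(150), sdlog = 0.4)  # food intake, g/day
      present <- rbinom(n, 1, 0.35)                          # additive present in brand?
      conc    <- rlnorm(n, meanlog = log(200), sdlog = 0.5)  # concentration, mg/kg
      additive_mg <- intake / 1000 * present * conc          # one food group
      quantile(additive_mg, c(0.50, 0.95, 0.999))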

  18. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  19. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
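
    Group additivity itself is linear bookkeeping; a toy version of a peptide-based calculation (group values are placeholders, not the published parameters):

      # partial molar heat capacity of a pentapeptide as backbone + side-chain sums
      cp_backbone  <- 65                                           # J K^-1 mol^-1 per peptide unit (hypothetical)
      cp_sidechain <- c(G = 20, A = 60, L = 180, S = 80, K = 210)  # hypothetical group values
      peptide <- c("G", "A", "L", "S", "K")
      Cp2 <- length(peptide) * cp_backbone + sum(cp_sidechain[peptide])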

  20. Research on Capacity Addition using Market Model with Transmission Congestion under Competitive Environment

    NASA Astrophysics Data System (ADS)

    Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko

    In this research, the fundamental premises for deregulation of the electric power industry are reevaluated. The authors develop a simple model to represent a wholesale electricity market with a highly congested network. The model is developed by simplifying the New York ISO power system and market, based on available 2004 New York ISO data with some estimation. Based on the developed model and historical construction cost data, the economic impact of transmission line addition on market participants and the impact of deregulation on power plant additions in a market with transmission congestion are studied. Simulation results show that market signals may fail to facilitate proper capacity additions, resulting in an undesirable cycle of over-construction and insufficient construction of capacity.

  21. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
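
    A small simulation contrasting the two error models named above (numbers invented): under the multiplicative model the error is additive on the log scale, so its spread does not grow with the truth:

      set.seed(1)
      truth <- rgamma(500, shape = 0.5, scale = 8)       # skewed "daily precipitation"
      m_add <- truth + rnorm(500, sd = 2)                # additive error model
      m_mul <- truth * exp(rnorm(500, sd = 0.3))         # multiplicative error model
      ok <- truth > 0.1
      # multiplicative residuals have stable variance on the log scale:
      c(sd_add = sd(m_add - truth), sd_log_mul = sd(log(m_mul[ok] / truth[ok])))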

  22. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R² of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg⁻¹ for mineral soils and of 287 g kg⁻¹ for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg⁻¹, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average
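
    A sketch of the calibrate/validate split described above (simulated stand-in data; covariates reduced to two for brevity):

      library(mgcv)
      set.seed(1)
      d <- data.frame(temp = runif(2000, -2, 18), slope = runif(2000, 0, 30))
      d$oc <- 60 - 2.5 * d$temp + 0.4 * d$slope + rnorm(2000, sd = 15)
      cal <- sample(nrow(d), 0.85 * nrow(d))             # 85% calibration set
      fit <- gam(oc ~ s(temp) + s(slope), data = d[cal, ])
      pred <- predict(fit, newdata = d[-cal, ])
      cor(pred, d$oc[-cal])^2                            # validation R^2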

  23. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multi-attribute time-intensity (MATI) data. The non-additive models, which consider both the direct effects and the interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
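
    At a single time slice, the variance-decomposition step maps onto relaimpo's LMG metric; a sketch with simulated attribute intensities (the full TDOL method repeats this along the functional time dimension):

      library(relaimpo)
      set.seed(1)
      d <- data.frame(sweet = rnorm(100), sour = rnorm(100), firm = rnorm(100))
      d$liking <- 0.6 * d$sweet - 0.3 * d$sour + 0.1 * d$firm + rnorm(100)
      calc.relimp(lm(liking ~ sweet + sour + firm, data = d), type = "lmg")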

  24. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE PAGES

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.; ...

    2017-09-22

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard that describes regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction from the keyhole melting controlled regimes. This statistical framework is shown to be robust even for cases where experimental training data might be suboptimal in quality, if appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally be successfully used for building a surrogate model, which is beneficial since simulations are getting more efficient and are more practical to study the response of different materials than to re-tool an AM machine for new material powder.
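
    The surrogate's core is ordinary GP regression on (power, speed) pairs; a dependency-free base-R sketch with a squared-exponential kernel (the process numbers and the linear toy response are invented):

      set.seed(1)
      X <- cbind(power = runif(30, 100, 400), speed = runif(30, 500, 2000))
      depth <- 50 + 0.3 * X[, "power"] - 0.02 * X[, "speed"] + rnorm(30, sd = 3)
      Xs <- scale(X)
      sqexp <- function(A, B, l = 1) {
        D2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * A %*% t(B)
        exp(-D2 / (2 * l^2))
      }
      K <- sqexp(Xs, Xs) + diag(1e-2, nrow(Xs))          # jitter for stability
      xnew <- scale(cbind(power = 250, speed = 1200),
                    center = attr(Xs, "scaled:center"),
                    scale  = attr(Xs, "scaled:scale"))
      mu <- mean(depth) + sqexp(xnew, Xs) %*% solve(K, depth - mean(depth))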

  25. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard that describes regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction from the keyhole melting controlled regimes. This statistical framework is shown to be robust even for cases where experimental training data might be suboptimal in quality, if appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally be successfully used for building a surrogate model, which is beneficial since simulations are getting more efficient and are more practical to study the response of different materials than to re-tool an AM machine for new material powder.

  26. Pedigree-based estimation of covariance between dominance deviations and additive genetic effects in closed rabbit lines considering inbreeding and using a computationally simpler equivalent model.

    PubMed

    Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M

    2017-06-01

    Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model, such as those presented here, are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding, and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations.

  27. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
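
    The Bernstein basis underlying these networks is a one-liner; a sketch (degree and grid chosen arbitrarily) showing the nonnegativity and partition-of-unity properties that support the fuzzy-membership interpretation:

      bernstein <- function(x, d) {
        sapply(0:d, function(k) choose(d, k) * x^k * (1 - x)^(d - k))
      }
      x <- seq(0, 1, length.out = 101)
      B <- bernstein(x, d = 4)          # 101 x 5 basis matrix on [0, 1]
      all(B >= 0); range(rowSums(B))    # nonnegative; rows sum to 1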

  28. An original traffic additional emission model and numerical simulation on a signalized road

    NASA Astrophysics Data System (ADS)

    Zhu, Wen-Xing; Zhang, Jing-Yu

    2017-02-01

    Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional emission model was presented to calculate a vehicle's emissions due to signal control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling at a signalized intersection. Simulations were conducted under two situations: a single intersection and two adjacent intersections with their respective control policies. Results are in good agreement with the theoretical analysis. It is also shown that the additional emission model may be used to design signal control policies in modern traffic systems to help address serious environmental problems.
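
    For reference, the widely used light-duty VSP formula (after Jimenez-Palacios) that such emission classifications typically start from; the coefficients vary by vehicle class, so treat these as illustrative:

      # v: speed (m/s), a: acceleration (m/s^2), grade: road grade (radians)
      vsp <- function(v, a, grade = 0) {
        v * (1.1 * a + 9.81 * sin(grade) + 0.132) + 0.000302 * v^3
      }
      vsp(v = 15, a = 0.5, grade = 0)   # kW/tonne for one driving state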

  29. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulation in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. We consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
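
    The model class has the generic additive-autoregressive-with-exogenous-input form; a sketch fitted with mgcv on a simulated series (the lag structure is reduced to one step for brevity, and all signals are stand-ins):

      library(mgcv)
      set.seed(1)
      n <- 500
      resp <- arima.sim(list(ar = 0.7), n)                  # stand-in respiration signal
      hr <- numeric(n)
      for (t in 2:n) hr[t] <- 0.6 * tanh(hr[t - 1]) + 0.3 * resp[t - 1] + rnorm(1, sd = 0.2)
      d <- data.frame(hr = hr[-1], hr1 = hr[-n], resp1 = resp[-n])
      fit <- gam(hr ~ s(hr1) + s(resp1), data = d)          # additive AR with exogenous input
      summary(fit)$r.sq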

  30. Process Modeling and Validation for Metal Big Area Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W.

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology based on metal arc welding. A continuously fed metal wire is melted by an electric arc that forms between the wire and the substrate, and deposited in the form of a bead of molten metal along a predetermined path. Objects are manufactured one layer at a time starting from the base plate. The final properties of the manufactured object depend on its geometry and the metal deposition path, in addition to the basic welding process parameters. Computational modeling can be used to accelerate the development of the mBAAM technology as well as serving as a design and optimization tool for the actual manufacturing process. We have developed a finite element method simulation framework for mBAAM using the new features of the software ABAQUS. The computational simulation of material deposition with heat transfer is performed first, followed by the structural analysis based on the temperature history for predicting the final deformation and stress state. In this formulation, we assume that the two physics phenomena are coupled in only one direction, i.e., the temperatures drive the deformation and internal stresses, but their feedback on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.
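
    The thermal pass of such a simulation boils down to time-stepping a heat-conduction equation; a deliberately tiny explicit 1-D sketch (material constants and the "bead" heat input are placeholders, nothing like the full ABAQUS model):

      alpha <- 1e-5; dx <- 1e-3; dt <- 0.02        # diffusivity, grid step, time step
      stopifnot(alpha * dt / dx^2 <= 0.5)          # explicit-scheme stability limit
      temp <- rep(300, 50); temp[25] <- 1800       # cold bar with one hot "bead" node
      for (step in 1:200) {
        temp[2:49] <- temp[2:49] +
          alpha * dt / dx^2 * (temp[3:50] - 2 * temp[2:49] + temp[1:48])
      }
      range(temp)                                  # cooled, smoothed profile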

  31. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    PubMed

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions on node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments, it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, node day⁻¹) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.
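
    A fixed-effects caricature of the final covariate model (simulated data; the paper's version is a QTL mixed model validated tenfold and leave-one-site-out):

      set.seed(1)
      d <- data.frame(temp = runif(200, 15, 30), daylen = runif(200, 11, 14))
      d$nar <- 0.02 * d$temp + 0.005 * d$daylen + rnorm(200, sd = 0.05)
      fit <- lm(nar ~ temp + daylen, data = d)    # environmental covariates replace site
      summary(fit)$r.squared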

  32. The prediction of food additives in the fruit juice based on electronic nose with chemometrics.

    PubMed

    Qiu, Shanshan; Wang, Jun

    2017-09-01

    Food additives are added to products to enhance their taste and preserve flavor or appearance. While their use should be restricted to achieving a technological benefit, the contents of food additives should also be strictly controlled. In this study, an E-nose was applied as an alternative to traditional monitoring technologies for determining two food additives, namely benzoic acid and chitosan. For quantitative monitoring, support vector machine (SVM), random forest (RF), extreme learning machine (ELM) and partial least squares regression (PLSR) were applied to establish regression models between E-nose signals and the amount of food additives in fruit juices. The monitoring models based on ELM and RF reached higher correlation coefficients (R²s) and lower root mean square errors (RMSEs) than models based on PLSR and SVM. This work indicates that an E-nose combined with RF or ELM can be a cost-effective, easy-to-build and rapid detection system for food additive monitoring.
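
    A sketch of the regression step with one of the four learners (randomForest assumed installed; sensor features and concentrations simulated):

      library(randomForest)
      set.seed(1)
      S <- matrix(rnorm(120 * 8), 120, 8)                  # E-nose sensor features
      colnames(S) <- paste0("s", 1:8)
      conc <- as.numeric(S %*% runif(8, 0.2, 1)) + rnorm(120, sd = 0.3)
      fit <- randomForest(S, conc)
      sqrt(mean((predict(fit) - conc)^2))                  # out-of-bag RMSE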

  33. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  34. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  35. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  36. Ridge, Lasso and Bayesian additive-dominance genomic models.

    PubMed

    Azevedo, Camila Ferreira; de Resende, Marcos Deon Vilela; E Silva, Fabyano Fonseca; Viana, José Marcelo Soriano; Valente, Magno Sávio Ferreira; Resende, Márcio Fernando Ribeiro; Muñoz, Patricio

    2015-08-25

    A complete approach for genome-wide selection (GWS) involves reliable statistical genetics models and methods. Reports on this topic are common for additive genetic models but not for additive-dominance models. The objectives of this paper were (i) to compare the performance of 10 additive-dominance predictive models (including current models and proposed modifications), fitted using Bayesian, Lasso and Ridge regression approaches; and (ii) to decompose genomic heritability and accuracy in terms of three quantitative genetic information sources, namely, linkage disequilibrium (LD), co-segregation (CS) and pedigree relationships or family structure (PR). The simulation study considered two broad-sense heritability levels (0.30 and 0.50, associated with narrow-sense heritabilities of 0.20 and 0.35, respectively) and two genetic architectures for traits (the first consisting of small gene effects and the second consisting of a mixed inheritance model with five major genes). G-REML/G-BLUP and a modified Bayesian/Lasso (called BayesA*B* or t-BLASSO) method performed best in the prediction of genomic breeding values as well as the total genotypic values of individuals in all four scenarios (two heritabilities × two genetic architectures). The BayesA*B*-type method showed a better ability to recover the dominance variance/additive variance ratio. Decomposition of genomic heritability and accuracy revealed the following descending importance order of information: LD, CS and PR not captured by markers, the last two being very close. Amongst the 10 models/methods evaluated, the G-BLUP, BAYESA*B* (-2,8) and BAYESA*B* (4,6) methods presented the best results and were found to be adequate for accurately predicting genomic breeding and total genotypic values as well as for estimating additive and dominance effects in additive-dominance genomic models.
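
    The Ridge member of the family has a closed form worth seeing; a base-R sketch of ridge-regression SNP-BLUP for additive marker effects (markers simulated; the paper's models add dominance terms and Bayesian machinery on top of this):

      set.seed(1)
      n <- 100; m <- 500
      Z <- matrix(rbinom(n * m, 2, 0.4), n, m)             # additive 0/1/2 coding
      Zc <- scale(Z, scale = FALSE)                        # center marker columns
      beta <- rnorm(m, sd = 0.05)
      y <- as.numeric(Zc %*% beta) + rnorm(n)
      lambda <- 10                                         # shrinkage parameter
      beta_hat <- solve(t(Zc) %*% Zc + lambda * diag(m), t(Zc) %*% y)
      gebv <- Zc %*% beta_hat                              # genomic breeding values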

  37. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
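
    The reconstruction recipe can be miniaturized: difference the series to approximate derivatives, fit an additive model of each node's derivative on the other nodes, and rank edges by component strength; a two-node sketch with mgcv (toy dynamics, not the paper's coupling metric):

      library(mgcv)
      set.seed(1)
      n <- 300; x <- matrix(0, n, 2); x[1, ] <- c(1, 0.5)
      for (t in 2:n) {                                  # toy dynamics: x1 drives x2
        x[t, 1] <- x[t - 1, 1] + 0.05 * (-x[t - 1, 1]) + rnorm(1, sd = 0.01)
        x[t, 2] <- x[t - 1, 2] + 0.05 * sin(x[t - 1, 1]) + rnorm(1, sd = 0.01)
      }
      dx2 <- diff(x[, 2])                               # derivative estimate for node 2
      d <- data.frame(dx2 = dx2, x1 = x[-n, 1], x2 = x[-n, 2])
      fit <- gam(dx2 ~ s(x1) + s(x2), data = d)
      summary(fit)$s.table                              # which incoming edges matter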

  18. Assessing non-additive effects in GBLUP model.

    PubMed

    Vieira, I C; Dos Santos, J P R; Pires, L P M; Lima, B M; Gonçalves, F M A; Balestre, M

    2017-05-10

    Understanding non-additive effects in the expression of quantitative traits is very important in genotype selection, especially in species where the commercial products are clones or hybrids. The use of molecular markers has allowed the study of non-additive genetic effects at the genomic level, in addition to a better understanding of their importance in quantitative traits. Thus, the purpose of this study was to evaluate the behavior of the GBLUP model under different genetic models and relationship matrices and their influence on estimates of genetic parameters. We used real data on circumference at breast height in Eucalyptus spp. and simulated data from an F2 population. Three kinship structures commonly reported in the literature were adopted. The simulation results showed that the inclusion of epistatic kinship improved prediction estimates of genomic breeding values. However, the non-additive effects were not accurately recovered. The Fisher information matrix for the real dataset showed high collinearity among estimates of additive, dominance, and epistatic variance, yielding no gain in prediction of unobserved data and causing convergence problems. Estimates of genetic parameters and correlations differed across the kinship structures. Our results show that including non-additive effects can improve predictive ability, or even the prediction of additive effects. However, the strong distortions observed in variance estimates when the Hardy-Weinberg equilibrium assumption is violated, due to selection or inbreeding, can result in zero gains in models that include epistasis in the genomic kinship.
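
    The epistatic kinship matrices referred to above are commonly constructed as Hadamard (element-wise) products of the additive and dominance relationship matrices. The sketch below shows that standard construction in R, assuming matrices G and D such as those built in the G-BLUP sketch earlier in this section; the exact matrices used in the study may differ.

      # Hadamard-product epistatic kinships; assumes additive (G) and dominance
      # (D) genomic relationship matrices are already in scope.
      stopifnot(exists("G"), exists("D"))
      GxG <- (G * G) / mean(diag(G * G))   # additive x additive
      GxD <- (G * D) / mean(diag(G * D))   # additive x dominance
      DxD <- (D * D) / mean(diag(D * D))   # dominance x dominance
      # These enter the GBLUP covariance as extra variance components, e.g.
      # V = s2a*G + s2d*D + s2aa*GxG + s2ad*GxD + s2dd*DxD + s2e*I.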

  19. Numerical simulation of residual stress in laser based additive manufacturing process

    NASA Astrophysics Data System (ADS)

    Kalyan Panda, Bibhu; Sahoo, Seshadev

    2018-03-01

    Minimizing residual stress build-up in metal-based additive manufacturing plays a pivotal role in selecting a particular material and technique for making an industrial part. In beam-based additive manufacturing, although a great deal of effort has been made to minimize residual stresses, it remains unclear how to do so by simply optimizing processing parameters such as beam size, beam power, and scan speed. Among the different additive manufacturing processes, Direct Metal Laser Sintering (DMLS) uses a high-power laser to melt and sinter layers of metal powder. The rapid solidification and heat transfer in the powder bed produce a high cooling rate, which leads to the build-up of residual stresses that affect the mechanical properties of the built parts. In the present work, the authors develop a numerical thermo-mechanical model for predicting residual stress in AlSi10Mg build samples using the finite element method. The transient temperature distribution in the powder bed was assessed using a coupled thermal-structural model, and the residual stresses were then estimated for varying laser power. The simulation results show that the melt pool dimensions and the magnitude of residual stresses in the built part both increase with increasing laser power.

  20. Water based drilling mud additive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrary, J.L.

    1983-12-13

    A water based fluid additive useful in drilling mud used during drilling of an oil or gas well is disclosed, produced by reacting water at temperatures between 210-280 °F with a mixture comprising in percent by weight: gilsonite 25-30%, tannin 7-15%, lignite 25-35%, sulfonating compound 15-25%, water soluble base compound 5-15%, methylene-yielding compound 1-5%, and then removing substantially all of the remaining water to produce a dried product.

  1. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinical study on chronic granulomatous disease is provided.

  2. Modeling the Impact of School-based Universal Depression Screening on Additional Service Capacity Needs: A System Dynamics Approach

    PubMed Central

    Lyon, Aaron R.; Maras, Melissa A.; Pate, Christina M.; Igusa, Takeru; Stoep, Ann Vander

    2016-01-01

    Although it is widely known that the occurrence of depression increases over the course of adolescence, symptoms of mood disorders frequently go undetected. While schools are viable settings for conducting universal screening to systematically identify students in need of services for common health conditions, particularly those that adversely affect school performance, few school districts routinely screen their students for depression. Among the most commonly referenced barriers are concerns that the number of students identified may exceed schools’ service delivery capacities, but few studies have evaluated this concern systematically. System dynamics (SD) modeling may prove a useful approach for answering questions of this sort. The goal of the current paper is therefore to demonstrate how SD modeling can be applied to inform implementation decisions in communities. In our demonstration, we used SD modeling to estimate the additional service demand generated by universal depression screening in a typical high school. We then simulated the effects of implementing “compensatory approaches” designed to address anticipated increases in service need through (1) the allocation of additional staff time and (2) improvements in the effectiveness of mental health interventions. Results support the ability of screening to facilitate more rapid entry into services and suggest that improving the effectiveness of mental health services for students with depression via the implementation of an evidence-based treatment protocol may have a limited impact on overall recovery rates and service availability. In our example, the SD approach proved useful in informing systems’ decision-making about the adoption of a new school mental health service. PMID:25601192
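
    A minimal stock-and-flow sketch of the kind of SD model described, in R with the deSolve package; all stocks, rates, and the capacity-limited intake term are hypothetical stand-ins, not the authors' calibrated model.

      library(deSolve)
      sdm <- function(t, y, p) {
        with(as.list(c(y, p)), {
          identify <- screen_rate * undetected           # cases found per month
          intake   <- intake_rate * waiting *
                      max(0, 1 - in_service / capacity)  # slows as slots fill
          recover  <- recovery_rate * in_service
          list(c(-identify,                # d(undetected)/dt
                 identify - intake,        # d(waiting)/dt
                 intake - recover))        # d(in_service)/dt
        })
      }
      y0 <- c(undetected = 120, waiting = 0, in_service = 20)
      p  <- c(screen_rate = 0.5, intake_rate = 0.8,
              capacity = 40, recovery_rate = 0.25)   # invented rates
      out <- ode(y = y0, times = seq(0, 24, by = 0.5), func = sdm, parms = p)
      tail(out)   # stocks after two years of monthly dynamics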

  3. Generalised additive modelling approach to the fermentation process of glutamate.

    PubMed

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, namely fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in the production of Glu during the fermentation process through a GAM calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that the production of Glu can reach a high level by controlling the concentration levels of DO and OUR at the proposed optimization conditions during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
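
    A sketch of such a GAM in R with mgcv, on simulated stand-in data (the true fermentation data are not reproduced here); summary() reports the variance explained, analogous to the 97% figure above.

      library(mgcv)
      set.seed(1)
      d <- data.frame(T = runif(300, 0, 40),        # fermentation time (h)
                      DO = runif(300, 5, 50),       # dissolved oxygen
                      OUR = runif(300, 10, 80))     # oxygen uptake rate
      d$Glu <- 2 + 0.1 * d$T + sin(d$DO / 10) + log(d$OUR) +
               rnorm(300, sd = 0.2)                 # synthetic product level
      fit <- gam(Glu ~ s(T) + s(DO) + s(OUR), data = d)
      summary(fit)$r.sq     # variance explained, cf. the 97% reported
      plot(fit, pages = 1)  # individual smooth effects of T, DO and OUR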

  4. Metal Big Area Additive Manufacturing: Process Modeling and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of the bead deposition and the corresponding thermal history of the manufactured object determine the long-range effects, such as thermally induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object are dependent on its geometry and the deposition path, in addition to depending on the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space in order to develop an optimized build strategy for each new design is impractical by pure experimental means. Computational modeling and optimization may accelerate development of a build process strategy and save time and resources. Because computational modeling provides these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the mBAAM process's development and design. In this paper, we performed a sequentially coupled heat transfer and stress analysis for predicting the final deformation of a small rectangular structure printed using the mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled heat transfer and stress analysis, the heat transfer was performed to calculate the temperature evolution, which was used in a stress

  5. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template.

    PubMed

    Jewkes, Rachel; Burton, Hanna E; Espino, Daniel M

    2018-02-02

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  6. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    PubMed Central

    Jewkes, Rachel; Burton, Hanna E.; Espino, Daniel M.

    2018-01-01

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries. PMID:29393899

  7. Linking livestock snow disaster mortality and environmental stressors in the Qinghai-Tibetan Plateau: Quantification based on generalized additive models.

    PubMed

    Li, Yijia; Ye, Tao; Liu, Weihang; Gao, Yu

    2018-06-01

    Livestock snow disaster occurs widely in Central-to-Eastern Asian temperate and alpine grasslands. The effects of snow disaster on livestock involve a complex interaction between precipitation, vegetation, livestock, and herder communities. Quantifying the relationship among livestock mortality, snow hazard intensity, and seasonal environmental stressors is of great importance for snow disaster early warning, risk assessments, and adaptation strategies. Using an event-based livestock snow disaster dataset with wide spatial extent and a long time series, this study quantified those relationships and established a quantitative model of livestock mortality for prediction purposes in the Qinghai-Tibet Plateau region. Estimations using generalized additive models (GAMs) were shown to accurately predict livestock mortality and mortality rate due to snow disaster, with adjusted R2 up to 0.794 and 0.666, respectively. These results showed that a longer snow disaster duration, lower temperatures during the disaster, and a drier summer with less vegetation all contribute significantly and non-linearly to higher mortality (rate), after controlling for elevation and socioeconomic conditions. These results can be readily applied to risk assessment and risk-based adaptation actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Sustainability issues in laser-based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Sreenivasan, R.; Goel, A.; Bourell, D. L.

    Sustainability is a consideration of resource utilization without depletion or adverse environmental impact. In manufacturing, important sustainability issues include energy consumption, waste generation, water usage and the environmental impact of the manufactured part in service. This paper deals with three aspects of sustainability as it applies to additive manufacturing. First is a review of the research needs for energy and sustainability as applied to additive manufacturing based on the 2009 Roadmap for Additive Manufacturing Workshop. The second part is an energy assessment for selective laser sintering (SLS) of polymers. Using polyamide powder in a 3D Systems Vanguard HiQ Sinterstation, energy loss during a build due to the chamber heaters, the roller mechanism, the piston elevators and the laser was measured; together, these accounted for 95% of the total energy consumption. An overall energy assessment was accomplished using eco-indicators. The last topic is electrochemical deposition of porous SLS non-polymeric preforms. The goal is to reduce energy consumption in SLS of non-polymeric materials. The approach was to mix a transient binder with the material, to create an SLS green part, to convert the binder, and then to remove the open, connected porosity and to densify the part by chemical deposition at room temperature within the pore network. The model system was silicon carbide powder mixed with a phenolic transient binder coupled with electrolytic deposition of nickel. Deposition was facilitated by inserting a conductive graphite cathode in the part center to draw the positive nickel ions through the interconnected porous network and to deposit them on the pore walls. The Roadmap for Additive Manufacturing Workshop was sponsored by the National Science Foundation under Grant CMMI-0906212 and by the Office of Naval Research under Grant N00014-09-1-0558. The electrolytic deposition research was sponsored by the National Science Foundation, Grant CMMI-0926316.

  9. Implementation of Complexity Analyzing Based on Additional Effect

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Li, Na; Liang, Yanhong; Liu, Fang

    According to Complexity Theory, complexity arises in a system when a functional requirement is not satisfied. Several studies have addressed Complexity Theory within Axiomatic Design; however, they focus on reducing complexity, and none proposes a method for analyzing the complexity present in a system. This paper therefore puts forth such an analysis method to make up for that deficiency. To frame the method, two concepts are introduced: ideal effect and additional effect. The method of analyzing complexity based on additional effect combines Complexity Theory with the Theory of Inventive Problem Solving (TRIZ), and it helps designers analyze complexity by examining additional effects. A case study shows the application of the process.

  10. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
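
    A brief sketch of the kind of generalized additive mixed model described, using mgcv's gamm() on simulated single-case-style data: a phase effect, phase-specific trends, a random intercept per case, and AR(1) errors. All variable names and values are hypothetical.

      library(mgcv)   # gamm() builds on nlme, which supplies corAR1()
      set.seed(1)
      d <- data.frame(case = factor(rep(1:4, each = 30)),
                      session = rep(1:30, 4))
      d$phase <- factor(ifelse(d$session > 15, "B", "A"))
      d$y <- 5 + 2 * (d$phase == "B") + 0.1 * d$session +
             rep(rnorm(4), each = 30) + rnorm(120, sd = 0.5)
      fit <- gamm(y ~ phase + s(session, by = phase),     # level + trends
                  random = list(case = ~1),               # random intercepts
                  correlation = corAR1(form = ~ session | case),  # AR(1)
                  data = d)
      summary(fit$gam)   # smooth trends and treatment (phase) effect
      summary(fit$lme)   # AR(1) parameter and random-effect variance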

  11. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
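
    The LMS-Yeo-Johnson approach described here is available in the VGAM R package; the following is a minimal sketch on synthetic data (the lms.bcn family would give the classical Box-Cox-normal version). The percentiles argument to the family is assumed to follow VGAM's usual LMS-family interface.

      library(VGAM)
      set.seed(1)
      d <- data.frame(x = runif(500, 5, 70))
      d$y <- 1 + 0.05 * d$x + rnorm(500, sd = 0.3)   # response may go negative
      fit <- vgam(y ~ s(x, df = 3),
                  lms.yjn(percentiles = c(5, 50, 95)),  # Yeo-Johnson LMS family
                  data = d)
      head(fitted(fit))   # estimated 5th/50th/95th percentile curves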

  12. Adsorption of molecular additive onto lead halide perovskite surfaces: A computational study on Lewis base thiophene additive passivation

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yu, Fengxi; Chen, Lihong; Li, Jingfa

    2018-06-01

    Organic additives, such as the Lewis base thiophene, have been successfully applied to passivate halide perovskite surfaces, improving the stability and properties of perovskite devices based on CH3NH3PbI3. Yet, the detailed nanostructure of the perovskite surface passivated by additives and the mechanisms of such passivation are not well understood. This study presents a nanoscopic view of the interfacial structure of an additive/perovskite interface, consisting of a Lewis base thiophene molecular additive and a lead halide perovskite surface substrate, providing insights into the mechanisms by which molecular additives passivate halide perovskite surfaces and enhance perovskite-based device performance. A molecular dynamics study of the interactions between water molecules and the perovskite surfaces passivated by the investigated additive reveals the effectiveness of employing molecular additives to improve the stability of halide perovskite materials. The additive/perovskite surface system is further probed by molecularly engineering the perovskite surfaces. This study reveals nanoscopic structure-property relationships of the halide perovskite surface passivated by molecular additives, aiding the fundamental understanding of surface/interface engineering strategies for the development of halide perovskite based devices.

  13. A New Multi-Criteria Evaluation Model Based on the Combination of Non-Additive Fuzzy AHP, Choquet Integral and Sugeno λ-Measure

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.

    2017-09-01

    This paper proposes a new model for multi-criteria evaluation under uncertain conditions. The model treats interaction between criteria as one of the most challenging issues, especially in the presence of uncertainty, where the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria and uses the Sugeno λ-measure to determine the importance of criteria and their interactions. Then, integrating the Choquet integral with FLPRAHP, all interactions between criteria are taken into account with the least number of comparisons, and a final score for each alternative is determined. The model thus captures a comprehensive set of interactions between criteria, leading to more reliable results. An illustrative example demonstrates the effectiveness and capability of the proposed model for evaluating alternatives in a multi-criteria decision problem.
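
    The two key ingredients, a Sugeno λ-measure and the Choquet integral, can be computed directly. The base-R sketch below uses hypothetical fuzzy densities and a λ chosen to satisfy the usual normalization condition; it illustrates only the aggregation step, not the full FLPRAHP model.

      # Sugeno lambda-measure of a coalition, built up from single densities via
      # g(A + {i}) = g(A) + g_i + lambda * g(A) * g_i (order-independent).
      sugeno_lambda <- function(densities, lam) {
        function(idx) {
          g <- 0
          for (gi in densities[idx]) g <- g + gi + lam * g * gi
          g
        }
      }
      # Discrete Choquet integral of criterion values x w.r.t. measure g:
      choquet <- function(x, g) {
        ord <- order(x); n <- length(x); total <- 0
        for (i in seq_len(n)) {
          coal <- ord[i:n]                       # criteria valued >= x[ord[i]]
          prev <- if (i == 1) 0 else x[ord[i - 1]]
          total <- total + (x[ord[i]] - prev) * g(coal)
        }
        total
      }
      dens <- c(0.3, 0.4, 0.2)   # hypothetical fuzzy densities
      lam  <- 0.37               # approx. solves prod(1 + lam*dens) = 1 + lam
      g <- sugeno_lambda(dens, lam)
      choquet(c(0.6, 0.8, 0.5), g)   # aggregated score for one alternative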

  14. Polysulfide and bio-based EP additive performance in vegetable vs. paraffinic base oils

    USDA-ARS?s Scientific Manuscript database

    Twist compression test (TCT) and 4-ball extreme pressure (EP) methods were used to investigate commercial polysulfide (PS) and bio-based polyester (PE) EP additives in paraffinic (150N) and refined soybean (SOY) base oils of similar viscosity. Binary blends of EP additive and base oil were investiga...

  15. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    PubMed Central

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
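
    A sketch of the grouping idea in R using the grpreg package (not the authors' LLARRMA-dawg code): each SNP forms a two-column group holding an additive dosage and a heterozygote (dominance) indicator, so the group lasso keeps or drops a SNP's additive and dominance effects together. Data are simulated.

      library(grpreg)
      set.seed(1)
      n <- 150; p <- 50
      geno <- matrix(rbinom(n * p, 2, 0.4), n, p)        # SNP genotypes 0/1/2
      X <- do.call(cbind, lapply(seq_len(p), function(j)
        cbind(geno[, j],                                 # additive dosage
              as.numeric(geno[, j] == 1))))              # dominance indicator
      grp <- rep(seq_len(p), each = 2)                   # one group per SNP
      y <- geno[, 3] + 1.5 * (geno[, 7] == 1) + rnorm(n) # one add., one dom. QTL
      cvfit <- cv.grpreg(X, y, group = grp, penalty = "grLasso")
      b <- coef(cvfit)[-1]                               # drop intercept
      unique(grp[b != 0])                                # SNPs retained jointly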

  16. Effects of additional food in a delayed predator-prey model.

    PubMed

    Sahoo, Banshidhar; Poria, Swarup

    2015-03-01

    We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on prey. By supplying additional food suitably, the predator population can be controlled. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is carried out with respect to the time delay in the presence of additional food. The direction of Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. Hopf bifurcation occurs in the system when the delay crosses a critical value, and this critical value depends strongly on the quality and quantity of the supplied additional food. Therefore, variation in the predator population significantly affects the dynamics of the model. Model results are compared with experimental results, and biological implications of the analytical findings are discussed in the conclusion section. Copyright © 2015 Elsevier Inc. All rights reserved.
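
    A delay-differential sketch of such a system in R with deSolve's dede()/lagvalue(); the functional response, the way the additional food quantity A enters it, and all parameter values are illustrative stand-ins rather than the paper's exact model.

      library(deSolve)
      rhs <- function(t, y, p) {
        with(as.list(c(y, p)), {
          Nlag <- if (t < tau) N0 else lagvalue(t - tau, 1)  # delayed prey
          dN <- r * N * (1 - N / K) - a * N * P / (1 + a * h * (N + A))
          dP <- e * a * (Nlag + A) * P / (1 + a * h * (Nlag + A)) - m * P
          list(c(dN, dP))
        })
      }
      p <- c(r = 1, K = 10, a = 0.8, h = 0.2, e = 0.5, m = 0.3,
             A = 1,             # quantity of additional food
             tau = 2, N0 = 5)   # gestation delay and prey history value
      out <- dede(y = c(N = 5, P = 2), times = seq(0, 100, by = 0.1),
                  func = rhs, parms = p)
      matplot(out[, 1], out[, -1], type = "l",
              xlab = "time", ylab = "density")  # inspect for Hopf oscillations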

  17. Additivity of Feature-Based and Symmetry-Based Grouping Effects in Multiple Object Tracking

    PubMed Central

    Wang, Chundi; Zhang, Xuemin; Li, Yongna; Lyu, Chuang

    2016-01-01

    Multiple object tracking (MOT) is an attentional process wherein people track several moving targets among several distractors. Symmetry, an important indicator of regularity, is a general spatial pattern observed in natural and artificial scenes. According to the “laws of perceptual organization” proposed by Gestalt psychologists, regularity is a principle of perceptual grouping, such as similarity and closure. A great deal of research has reported that feature-based similarity grouping (e.g., grouping based on color, size, or shape) among targets in MOT tasks can improve tracking performance. However, no additive feature-based grouping effects have been reported where the tracking objects had two or more features. “Additive effect” refers to a greater grouping effect produced by grouping based on multiple cues instead of one cue. Can spatial symmetry produce a grouping effect similar to that of feature similarity in MOT tasks? Are the grouping effects based on symmetry and feature similarity additive? This study includes four experiments to address these questions. The results of Experiments 1 and 2 demonstrated the automatic symmetry-based grouping effects. More importantly, an additive grouping effect of symmetry and feature similarity was observed in Experiments 3 and 4. Our findings indicate that symmetry can produce an enhanced grouping effect in MOT and facilitate the grouping effect based on color or shape similarity. The “where” and “what” pathways might have played an important role in the additive grouping effect. PMID:27199875

  18. 3D printed microfluidic circuitry via multijet-based additive manufacturing

    PubMed Central

    Sochol, R. D.; Sweet, E.; Glick, C. C.; Venkatesh, S.; Avetisyan, A.; Ekman, K. F.; Raulinaitis, A.; Tsai, A.; Wienkers, A.; Korner, K.; Hanson, K.; Long, A.; Hightower, B. J.; Slatton, G.; Burnett, D. C.; Massey, T. L.; Iwai, K.; Lee, L. P.; Pister, K. S. J.; Lin, L.

    2016-01-01

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or “three-dimensional (3D) printing” technologies – predominantly stereolithography – as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) – a layer-by-layer, multi-material inkjetting process – for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. PMID:26725379

  19. Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling.

    PubMed

    Melcher, Michael; Scharl, Theresa; Luchner, Markus; Striedner, Gerald; Leisch, Friedrich

    2017-02-01

    The quality of biopharmaceuticals and patient safety are of the highest priority, and there are tremendous efforts to replace empirical production process designs with knowledge-based approaches. The main challenge in this context is that real-time access to process variables related to product quality and quantity is severely limited. To date, comprehensive on- and offline monitoring platforms are used to generate process data sets that allow for the development of mechanistic and/or data-driven models for real-time prediction of these important quantities. The ultimate goal is to implement model-based feedback control loops that facilitate online control of product quality. In this contribution, we explore structured additive regression (STAR) models in combination with boosting as a variable selection tool for modeling the cell dry mass, product concentration, and optical density on the basis of online-available process variables and two-dimensional fluorescence spectroscopic data. STAR models are powerful extensions of linear models, allowing for the inclusion of smooth effects or interactions between predictors. Boosting constructs the final model in a stepwise manner and provides a variable importance measure via predictor selection frequencies. Our results show that the cell dry mass can be modeled with a relative error of about ±3%, the optical density with ±6%, the soluble protein with ±16%, and the insoluble product with an accuracy of ±12%. Biotechnol. Bioeng. 2017;114: 321-334. © 2016 Wiley Periodicals, Inc.
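
    Component-wise boosting of structured additive regression models is implemented in the R package mboost; the sketch below, on made-up process data, shows a gamboost() fit with spline base-learners and the predictor selection frequencies used as a variable importance measure.

      library(mboost)
      set.seed(1)
      d <- data.frame(feed = runif(200), temp = runif(200, 30, 38),
                      fluo1 = runif(200), fluo2 = runif(200))  # invented inputs
      d$cdm <- 5 + sin(2 * pi * d$feed) + 0.2 * d$temp +
               rnorm(200, sd = 0.3)                 # synthetic cell dry mass
      fit <- gamboost(cdm ~ bbs(feed) + bbs(temp) + bbs(fluo1) + bbs(fluo2),
                      data = d, control = boost_control(mstop = 200))
      table(selected(fit))   # selection frequencies = variable importance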

  20. Modelling the behaviour of additives in gun barrels

    NASA Astrophysics Data System (ADS)

    Rhodes, N.; Ludwig, J. C.

    1986-01-01

    A mathematical model which predicts the flow and heat transfer in a gun barrel is described. The model is transient and two-dimensional; equations are solved for the velocities and enthalpies of a gas phase arising from the combustion of the propellant and cartridge case, for particle additives released from the case, and for the volume fractions of the gas and particles. Closure of the equations is obtained using a two-equation turbulence model. Preliminary calculations are described in which the proportion of particle additives in the cartridge case was varied. The model gives a good prediction of the ballistic performance and the gas-to-wall heat transfer. However, the expected magnitude of the reduction in heat transfer when particles are present is not predicted. The predictions of gas flow invalidate some of the assumptions made regarding case and propellant behavior during combustion, and further work is required to investigate these effects and other possible interactions, both chemical and physical, between gas and particles.

  1. Chemical Mixture Risk Assessment Additivity-Based Approaches

    EPA Science Inventory

    Powerpoint presentation includes additivity-based chemical mixture risk assessment methods. Basic concepts, theory and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.

  2. Ground-Based Telescope Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
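
    The single-variable version reduces to a power law, cost ≈ a·D^b, which is fit by ordinary least squares on log scales; the R sketch below uses made-up numbers purely to show the mechanics.

      d <- data.frame(D = c(2.4, 3.5, 4.2, 6.5, 8.1, 10.0),    # aperture (m)
                      cost = c(30, 80, 110, 300, 520, 900))    # invented units
      fit <- lm(log(cost) ~ log(D), data = d)
      coef(fit)    # intercept = log(a); slope = diameter exponent b
      exp(predict(fit, newdata = data.frame(D = 12)))  # extrapolated estimate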

  3. Detergent-dispersant additives based on high-molecular-weight alkylphenols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulieva, K.N.; Namazova, I.I.; Ismailova, N.D.

    1988-09-01

    This article describes the synthesis and investigation of Mannich bases produced from alkylphenols, obtained in turn from ethylene oligomers. These oligomers are the still bottoms from the distillation products of high-temperature oligomerization of ethylene in the presence of triethylaluminum. Two narrow cuts obtained from the distillation of the oligomer fraction were used to study the influence of ethylene oligomer molecular weight on the properties of the additives. The additives were blended into DS-11 oil to evaluate their detergency-dispersancy and other properties. Comparison blends were made with succinimide additives based on the same ethylene oligomers. The Mannich bases improve the oxidation resistance, anticorrosion properties, and detergency-dispersancy of the DS-11 diesel oil.

  4. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low-voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of BER versus SNR, which enables the communication required for smart grid applications.

  5. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    PubMed

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost-wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA did not differ significantly from CLWT in mean marginal gap. The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost-wax technique and subtractive manufacturing.

  6. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS, or aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and for combining independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The RMS error of prediction achieved (1.91 a.u.) was similar to that before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was sufficient, with an absolute error below 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into the context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. [Modeling in value-based medicine].

    PubMed

    Neubauer, A S; Hirneiss, C; Kampik, A

    2010-03-01

    Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of a model's predictive properties. The existing uncertainty with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than without this additional information.
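
    A Markov model of the kind mentioned can be run in a few lines of base R; the sketch below uses a hypothetical three-state disease model with invented transition probabilities and utility weights, accumulating expected quality-adjusted life years (QALYs) over 20 cycles.

      # Hypothetical three-state Markov cohort model (Stable, Progressed, Blind).
      P <- matrix(c(0.85, 0.12, 0.03,
                    0.00, 0.80, 0.20,
                    0.00, 0.00, 1.00),
                  nrow = 3, byrow = TRUE)            # per-cycle transitions
      utility <- c(0.85, 0.60, 0.40)                 # invented QALY weights
      cohort <- c(1, 0, 0)                           # all start in Stable
      qalys <- 0
      for (cycle in 1:20) {                          # 20 yearly cycles
        cohort <- cohort %*% P                       # advance the cohort
        qalys <- qalys + sum(cohort * utility)       # accumulate QALYs
      }
      qalys                                          # expected QALYs per patient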

  8. A novel model for through-silicon via (TSV) filling process simulation considering three additives and current density effect

    NASA Astrophysics Data System (ADS)

    Wang, Fuliang; Zhao, Zhipeng; Wang, Feng; Wang, Yan; Nie, Nantian

    2017-12-01

    Through-silicon via (TSV) filling by electrochemical deposition is still a challenge for 3D IC packaging, and three-component additive systems (accelerator, suppressor, and leveler) are commonly used in industry to achieve void-free filling. However, models considering three-additive systems and the current density effect have not been fully studied. In this paper, a novel three-component model was developed to study the TSV filling mechanism and process, in which the interaction behavior of the three additives (accelerator, suppressor, and leveler) was considered, and the adsorption, desorption, and consumption coefficients of the three additives varied with the current density. Based on this new model, the three filling types (seam void, 'V' shape, and keyhole) were simulated under different current density conditions, and the filling results were verified by experiments. The effects of the current density on the copper ion concentration, additive surface coverage, and local current density distribution during the TSV filling process were obtained. Based on the simulation and experimental results, the diffusion-adsorption-desorption-consumption competition among the suppressor, the accelerator, and the leveler was discussed. The filling mechanisms under different current densities were also analyzed.

  9. Constraint Based Modeling Going Multicellular.

    PubMed

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches.
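
    The core computation in constraint-based modeling is flux balance analysis: maximize an objective flux subject to steady-state stoichiometry and flux bounds. The R sketch below solves a toy three-reaction network with the lpSolve package; the network is invented purely for illustration.

      library(lpSolve)
      # Metabolite-by-reaction stoichiometry S; steady state requires S v = 0.
      S <- matrix(c(1, -1,  0,     # metabolite A: made by R1, used by R2
                    0,  1, -1),    # metabolite B: made by R2, used by R3
                  nrow = 2, byrow = TRUE)
      obj   <- c(0, 0, 1)          # maximize flux through R3 ("biomass")
      const <- rbind(S, diag(3))   # steady state plus upper bounds v <= 10
      dirs  <- c("=", "=", "<=", "<=", "<=")
      rhs   <- c(0, 0, 10, 10, 10)
      sol <- lp("max", obj, const, dirs, rhs)  # fluxes are >= 0 (irreversible)
      sol$solution                 # optimal flux distribution: c(10, 10, 10)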

  10. Applying Additive Hazards Models for Analyzing Survival in Patients with Colorectal Cancer in Fars Province, Southern Iran

    PubMed

    Madadizadeh, Farzan; Ghanbarnejad, Amin; Ghavami, Vahid; Zare Bandamiri, Mohammad; Mohammadianpanah, Mohammad

    2017-04-01

    Introduction: Colorectal cancer (CRC) is a commonly fatal cancer that ranks third worldwide and third and fifth among Iranian women and men, respectively. There are several methods for analyzing time-to-event data. Additive hazards regression models are preferable to the popular Cox proportional hazards model if the absolute hazard (risk) change, rather than the hazard ratio, is of primary concern, or if no proportionality assumption is made. Methods: This study used data gathered from the medical records of 561 colorectal cancer patients who were admitted to Namazi Hospital, Shiraz, Iran, during 2005 to 2010 and followed until December 2015. Aalen's nonparametric additive hazards model, Lin and Ying's semiparametric additive hazards model, and the Cox proportional hazards model were applied for data analysis. The proportionality assumption for the Cox model was evaluated with a test based on the Schoenfeld residuals, and Cox-Snell residual plots were used to assess goodness of fit in the additive models. Analyses were performed with SAS 9.2 and R 3.2 software. Results: The median follow-up time was 49 months. The five-year survival rate and the mean survival time after cancer diagnosis were 59.6% and 68.1±1.4 months, respectively. Multivariate analyses using Lin and Ying's additive model and the Cox proportional hazards model indicated that age at diagnosis, site of tumor, stage, proportion of positive lymph nodes, lymphovascular invasion, and type of treatment were factors affecting survival of the CRC patients. Conclusion: Additive models are suitable alternatives to the Cox proportional hazards model if there is interest in evaluating absolute hazard change, or if no proportionality assumption is made. Creative Commons Attribution License
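
    Both additive hazards models named above are available in the R package timereg; the sketch below fits them to the package's built-in sTRACE example data (illustrative only, not the study's patient data). const() terms yield the Lin-Ying semiparametric model; without them, Aalen's nonparametric model is fitted.

      library(timereg)
      data(sTRACE)   # example survival data shipped with timereg
      # Aalen's nonparametric model: all covariate effects may vary with time.
      fit_aalen <- aalen(Surv(time, status == 9) ~ age + sex + diabetes,
                         data = sTRACE)
      # Lin and Ying's semiparametric model: const() makes effects constant.
      fit_ly <- aalen(Surv(time, status == 9) ~ const(age) + const(sex) +
                        const(diabetes), data = sTRACE)
      summary(fit_aalen)   # tests for time-varying effects
      summary(fit_ly)      # constant hazard-difference estimates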

  11. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers

    PubMed Central

    2011-01-01

    Background Molecular marker information is a common source for drawing inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm, because then millions of effects are to be estimated simultaneously, leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of prediction accuracy if epistatic effects were not simulated but modelled, and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler, BayesB, yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to

  12. Haptics-based dynamic implicit solid modeling.

    PubMed

    Hua, Jing; Qin, Hong

    2004-01-01

    This paper systematically presents a novel, interactive solid modeling framework, Haptics-based Dynamic Implicit Solid Modeling, which is founded upon volumetric implicit functions and powerful physics-based modeling. In particular, we augment our modeling framework with a haptic mechanism in order to take advantage of additional realism associated with a 3D haptic interface. Our dynamic implicit solids are semi-algebraic sets of volumetric implicit functions and are governed by the principles of dynamics, hence responding to sculpting forces in a natural and predictable manner. In order to directly manipulate existing volumetric data sets as well as point clouds, we develop a hierarchical fitting algorithm to reconstruct and represent discrete data sets using our continuous implicit functions, which permit users to further design and edit those existing 3D models in real-time using a large variety of haptic and geometric toolkits, and visualize their interactive deformation at arbitrary resolution. The additional geometric and physical constraints afford more sophisticated control of the dynamic implicit solids. The versatility of our dynamic implicit modeling enables the user to easily modify both the geometry and the topology of modeled objects, while the inherent physical properties can offer an intuitive haptic interface for direct manipulation with force feedback.

  13. Geo-additive modelling of malaria in Burundi

    PubMed Central

    2011-01-01

    Background Malaria is a major public health issue in Burundi in terms of both morbidity and mortality, with around 2.5 million clinical cases and more than 15,000 deaths each year. It is still the single main cause of mortality in pregnant women and children below five years of age. Because of the severe health and economic burden of malaria, there is still a growing need for methods that will help to understand the influencing factors. Several studies have been done on the subject, yielding different results as to which factors are most responsible for the increase in malaria transmission. This paper considers the modelling of the dependence of malaria cases on spatial determinants and climatic covariates including rainfall, temperature and humidity in Burundi. Methods The analysis carried out in this work exploits real monthly data collected in the area of Burundi over 12 years (1996-2007). Semi-parametric regression models are used. The spatial analysis is based on a geo-additive model using provinces as the geographic units of study. The spatial effect is split into structured (correlated) and unstructured (uncorrelated) components. Inference is fully Bayesian and uses Markov chain Monte Carlo techniques. The effects of the continuous covariates are modelled by cubic P-splines with 20 equidistant knots and a second-order random walk penalty. For the spatially correlated effect, a Markov random field prior is chosen. The spatially uncorrelated effects are assumed to be i.i.d. Gaussian. The effects of climatic covariates and the effects of other spatial determinants are estimated simultaneously in a unified regression framework. Results The results obtained from the proposed model suggest that although malaria incidence in a given month is strongly positively associated with the minimum temperature of the previous months, regional patterns of malaria that are related to factors other than climatic variables have been identified, without being able to explain
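
    The structured-spatial-plus-smooth-covariate decomposition described above can also be fitted by penalized likelihood in R's mgcv, which ships example spatial data; the sketch below uses those data (Columbus crime) in place of the malaria dataset, with a Markov random field smooth standing in for the fully Bayesian MCMC machinery.

      library(mgcv)
      data(columb); data(columb.polys)   # example spatial data shipped with mgcv
      fit <- gam(crime ~ s(income) +                       # smooth covariate
                   s(district, bs = "mrf",                 # structured spatial
                     xt = list(polys = columb.polys)),
                 data = columb, method = "REML")
      plot(fit, scheme = 2, pages = 1)   # smooth effect plus spatial field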

  14. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models.

    PubMed

    Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco

    2018-01-01

    Globally, the number of dengue cases has been on the increase since 1990, and this trend has also been found in Brazil and its most populous city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude, and performed 3.16 times better than the benchmark and 1.47 times better than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics for providing timely results to decision makers. However, it should be noted that predictions are made just one month ahead, and this is a limitation that future studies could try to reduce.
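
    A sketch of the comparison described, on a synthetic monthly series: a seasonal ARIMA fit with the forecast package against a GAM with one-month-lagged cases and a lagged meteorological covariate, scored by mean absolute error on a hold-out year. None of the study's data or tuning is reproduced.

      library(forecast); library(mgcv)
      set.seed(1)
      n <- 120                       # ten years of monthly observations
      temp  <- 24 + 4 * sin(2 * pi * (1:n) / 12) + rnorm(n)
      cases <- pmax(0, 50 + 30 * sin(2 * pi * ((1:n) - 2) / 12) +
                       rnorm(n, sd = 10))
      # Seasonal ARIMA trained on the first nine years:
      sar   <- auto.arima(ts(cases[1:108], frequency = 12))
      f_sar <- forecast(sar, h = 12)$mean
      # GAM with one-month-lagged cases and temperature (one-step-ahead):
      d <- data.frame(y = cases[2:n], lag_cases = cases[1:(n - 1)],
                      lag_temp = temp[1:(n - 1)])
      g     <- gam(y ~ s(lag_cases) + s(lag_temp), data = d[1:107, ])
      f_gam <- predict(g, newdata = d[108:119, ])
      mae <- function(obs, pred) mean(abs(obs - pred))
      c(sarima = mae(cases[109:120], f_sar), gam = mae(cases[109:120], f_gam))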

  15. Evaluation of 3D Additively Manufactured Canine Brain Models for Teaching Veterinary Neuroanatomy.

    PubMed

    Schoenfeld-Tacher, Regina M; Horn, Timothy J; Scheviak, Tyler A; Royal, Kenneth D; Hudson, Lola C

    Physical specimens are essential to the teaching of veterinary anatomy. While fresh and fixed cadavers have long been the medium of choice, plastinated specimens have gained widespread acceptance as adjuncts to dissection materials. Even though the plastination process increases the durability of specimens, these are still derived from animal tissues and require periodic replacement if used by students on a regular basis. This study investigated the use of three-dimensional additively manufactured (3D AM) models (colloquially referred to as 3D-printed models) of the canine brain as a replacement for plastinated or formalin-fixed brains. The models investigated were built based on a micro-MRI of a single canine brain and have numerous practical advantages, such as durability, lower cost over time, and reduction of animal use. The effectiveness of the models was assessed by comparing performance among students who were instructed using either plastinated brains or 3D AM models. This study used propensity score matching to generate similar pairs of students. Pairings were based on gender and initial anatomy performance across two consecutive classes of first-year veterinary students. Students' performance on a practical neuroanatomy exam was compared, and no significant differences were found in scores based on the type of material (3D AM models or plastinated specimens) used for instruction. Students in both groups were equally able to identify neuroanatomical structures on cadaveric material, as well as respond to questions involving application of neuroanatomy knowledge. Therefore, we postulate that 3D AM canine brain models are an acceptable alternative to plastinated specimens in teaching veterinary neuroanatomy.

  16. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced for modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified, and as the model is trained directly on experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that, as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, making the EPR model more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
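
    A heavily simplified sketch of the EPR idea follows: candidate polynomial structures are enumerated (exhaustive enumeration stands in here for the genetic algorithm search of real EPR) and their coefficients are fitted by least squares, keeping the structure with the smallest error. The stress-strain data and exponent set are illustrative.

      # Simplified EPR-style structure search with least squares coefficient fitting.
      import itertools
      import numpy as np

      rng = np.random.default_rng(3)
      strain = rng.uniform(0.001, 0.05, 60)                # hypothetical triaxial data
      stress = 200 * strain - 1500 * strain**2 + rng.normal(0, 0.2, 60)

      exponents = [0.5, 1.0, 2.0, 3.0]                     # candidate term exponents
      best = None
      for terms in itertools.combinations(exponents, 2):   # two-term structures
          X = np.column_stack([strain**p for p in terms])
          coef = np.linalg.lstsq(X, stress, rcond=None)[0]
          sse = np.sum((X @ coef - stress) ** 2)           # sum of squared errors
          if best is None or sse < best[0]:
              best = (sse, terms, coef)

      sse, terms, coef = best
      print("best structure: stress =",
            " + ".join(f"{c:.3g}*strain^{p}" for c, p in zip(coef, terms)))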

  17. Additives for cement compositions based on modified peat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopanitsa, Natalya, E-mail: kopanitsa@mail.ru; Sarkisov, Yurij, E-mail: sarkisov@tsuab.ru; Gorshkova, Aleksandra, E-mail: kasatkina.alexandra@gmail.com

    High-quality, competitive dry building mixes require modifying additives for various purposes. Additives of adequate quality with stable properties for controlling the properties of cement compositions are produced in Russia in insufficient quantity, and using foreign modifying additives significantly increases the final cost of the product. The cost of imported modifiers in a dry building mix can be up to 90% of the material cost, depending on the composition complexity. Thus, the problem of import substitution has become relevant, especially in recent years, due to the difficult economic situation. The article discusses the possibility of using local raw materials as a basis for obtaining dry building mix components. The properties of organo-mineral additives for cement compositions based on thermally modified peat raw materials are studied. The structure and composition of the additives are investigated by physicochemical research methods: electron microscopy and X-ray analysis. Experimental results showed that the peat additives improve the strength and hydrophysical properties of cement-sand mortar.

  18. Unraveling additive from nonadditive effects using genomic relationship matrices.

    PubMed

    Muñoz, Patricio R; Resende, Marcio F R; Gezan, Salvador A; Resende, Marcos Deon Vilela; de Los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F

    2014-12-01

    The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based models (pedigree-based best linear unbiased prediction, P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. Copyright © 2014 by the Genetics Society of America.
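
    The additive realized genomic relationship matrix at the heart of such marker-based models can be built along VanRaden's lines, as in the minimal sketch below with simulated genotypes; this is one standard construction, not necessarily the authors' exact pipeline.

      # Additive genomic relationship matrix (VanRaden-type) from 0/1/2 genotypes.
      import numpy as np

      rng = np.random.default_rng(4)
      n_ind, n_snp = 100, 1000
      freq = rng.uniform(0.1, 0.9, n_snp)                # true allele frequencies
      M = rng.binomial(2, freq, size=(n_ind, n_snp))     # genotype counts 0/1/2

      p = M.mean(axis=0) / 2                             # observed allele frequencies
      Z = M - 2 * p                                      # center each marker by 2p
      G = Z @ Z.T / (2 * np.sum(p * (1 - p)))            # additive relationships
      print(G.shape, G.diagonal().mean())                # diagonal averages near 1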

  19. An overview of TOUGH-based geomechanics models

    DOE PAGES

    Rutqvist, Jonny

    2016-09-22

    After the initial development of the first TOUGH-based geomechanics model 15 years ago, which linked the TOUGH2 multiphase flow simulator to the FLAC3D geomechanics simulator, at least 15 additional TOUGH-based geomechanics models have appeared in the literature. This development has been fueled by a growing demand for, and interest in, modeling coupled multiphase flow and geomechanical processes related to a number of geoengineering applications, such as geologic CO2 sequestration, enhanced geothermal systems, unconventional hydrocarbon production, and, most recently, reservoir stimulation and injection-induced seismicity. This paper provides a short overview of these TOUGH-based geomechanics models, focusing on those most frequently applied to a diverse set of problems associated with geomechanics and its couplings to hydraulic, thermal and chemical processes.

  20. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most genome-wide association studies (GWASs) for human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using a full model and additive models, which illustrate the impact of ignoring these genetic effects on analysis results and demonstrate how the genetic effects of multiple loci can differ across ethnic groups. The full model identified 15 quantitative trait loci, comprising 13 individual loci and 3 pairs of epistatic loci, whereas the multi-locus additive model identified only 14 loci (9 loci in common and 5 different); 4 loci detected by the full model were thus not detected by the multi-locus additive model. PLINK analysis identified two loci and GCTA analysis detected only one locus with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-associated chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performed better than the additive models in terms of detection power and unbiased estimation of the genetic variants of complex traits. PMID:28079101

  1. Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie

    1992-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature (not the surface values, as returned by the original Mars-GRAM). This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.

  2. Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1991-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.

  3. Enhancements to the KATE model-based reasoning system

    NASA Technical Reports Server (NTRS)

    Thomas, Stan J.

    1994-01-01

    KATE (Knowledge-based Autonomous Test Engineer) is a model-based software system developed in the Artificial Intelligence Laboratory at the Kennedy Space Center for monitoring, fault detection, and control of launch vehicles and ground support systems. This report describes two software efforts which enhance the functionality and usability of KATE. The first addition, a flow solver, gives KATE a tool for modeling the flow of liquid in a pipe system. The second adds support to the Emacs editor for editing KATE knowledge base files. The body of this report discusses design and implementation issues for these two tools. It will be useful to anyone maintaining or extending either the flow solver or the editor enhancements.

  4. Unified Modeling Language (UML) for hospital-based cancer registration processes.

    PubMed

    Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi

    2008-01-01

    Hospital-based cancer registration involves complex processing steps that span multiple departments. In addition, management techniques and registration procedures differ depending on the medical facility. Establishing processes for hospital-based cancer registration requires clarifying the specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the application of these concepts to medical work. In this study, we sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML) in order to clarify the functions involved. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. After drafting, every UML model was fed back to practitioners to check its validity and then improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely registration tasks, quality control, and filing data, and an additional 14 functions were extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans multiple departments.

  5. Biocompatibility of hydroxyapatite scaffolds processed by lithography-based additive manufacturing.

    PubMed

    Tesavibul, Passakorn; Chantaweroad, Surapol; Laohaprapanon, Apinya; Channasanon, Somruethai; Uppanan, Paweena; Tanodekaew, Siriporn; Chalermkarnnon, Prasert; Sitthiseripratip, Kriskrai

    2015-01-01

    The fabrication of hydroxyapatite scaffolds for bone tissue engineering by lithography-based additive manufacturing techniques has been introduced because of the ability of these techniques to control porous structures at suitable resolutions. In this research, the use of hydroxyapatite cellular structures processed by a lithography-based additive manufacturing machine as bone tissue engineering scaffolds was investigated. A laboratory-scale digital light processing system was used to fabricate the hydroxyapatite scaffolds, whose biocompatibility was then evaluated by direct-contact and cell-culturing tests. In addition, the density and compressive strength of the scaffolds were characterized. The results show that a hydroxyapatite scaffold with 77% porosity, 91% of theoretical density and a compressive strength of 0.36 MPa could be produced. In comparison with conventionally sintered hydroxyapatite, the scaffold did not present any cytotoxic signs, and a cell viability of 95.1% was reported. After 14 days of cell-culturing tests, pre-osteoblasts (MC3T3-E1) attached to the scaffold, leading to cell proliferation and differentiation. In summary, hydroxyapatite scaffolds for bone tissue engineering could be processed by the lithography-based additive manufacturing machine, and their biocompatibility was confirmed.

  6. Microstructure of a base metal thick film system. [Glass frit with base metal oxide addition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mentley, D.E.

    1976-06-01

    A base metal thick film conductor system using glass frits with base metal oxide additions was investigated as metallization for hybrid microcircuits. Previous work on wetting and chemical bonding was applied to this system. Changes in the properties of the thick film were observed through photomicrographs of screened samples and sheet resistivity measurements. In addition to the chemical and wetting properties, the effect of glass frit particle size on conductivity was analyzed. The base metal oxide addition was found to produce a more consistent thick film conductor at low volume percentages of metal by inhibiting the formation of low-melting redox reaction products.

  7. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  8. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but users must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If a thread is marked as a question, SIVO will monitor it and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Users can also add "tags" to their threads to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so that others can easily collaborate on the same document.

  9. GenoGAM: genome-wide generalized additive models for ChIP-Seq analysis.

    PubMed

    Stricker, Georg; Engelhardt, Alexander; Schulz, Daniel; Schmid, Matthias; Tresch, Achim; Gagneur, Julien

    2017-08-01

    Chromatin immunoprecipitation followed by deep sequencing (ChIP-Seq) is a widely used approach to study protein-DNA interactions. Often, the quantities of interest are the differential occupancies relative to controls, between genetic backgrounds, treatments, or combinations thereof. Current methods for differential occupancy in ChIP-Seq data, however, rely on binning or sliding-window techniques, for which the choice of window and bin sizes is subjective. Here, we present GenoGAM (Genome-wide Generalized Additive Model), which brings the well-established and flexible generalized additive models framework to genomic applications using a data parallelism strategy. We model ChIP-Seq read count frequencies as products of smooth functions along chromosomes. Smoothing parameters are objectively estimated from the data by cross-validation, eliminating the ad hoc binning and windowing needed by current approaches. GenoGAM provides base-level and region-level significance testing for full factorial designs. Application to a ChIP-Seq dataset in yeast showed increased sensitivity over existing differential occupancy methods while controlling the type I error rate. By analyzing a set of DNA methylation data and illustrating an extension to a peak caller, we further demonstrate the potential of GenoGAM as a generic statistical modeling tool for genome-wide assays. Software is available from Bioconductor: https://www.bioconductor.org/packages/release/bioc/html/GenoGAM.html. Contact: gagneur@in.tum.de. Supplementary information is available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
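
    The essence of the approach, smoothing a positional signal under a roughness penalty with a smoothing parameter chosen by cross-validation, can be miniaturized as below. This toy replaces GenoGAM's full factorial Poisson GAM with a penalized least-squares smoother on log counts; data and penalty grid are illustrative.

      # Penalized smoothing of a positional count signal, lambda by held-out error.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 300
      truth = 2 + np.sin(np.linspace(0, 4 * np.pi, n))   # smooth underlying log-rate
      y = np.log1p(rng.poisson(np.exp(truth)))           # noisy log read counts

      D = np.diff(np.eye(n), n=2, axis=0)                # second-difference operator
      P = D.T @ D                                        # roughness penalty matrix

      hold = np.arange(n) % 5 == 0                       # hold out every 5th position
      w = (~hold).astype(float)                          # training weights (0/1)

      def fit(lam):
          # weighted penalized smoother: solve (W + lam * P) f = W y
          return np.linalg.solve(np.diag(w) + lam * P, w * y)

      scores = {lam: np.mean((fit(lam)[hold] - y[hold]) ** 2)
                for lam in (0.1, 1.0, 10.0, 100.0, 1000.0)}
      print("cross-validated lambda:", min(scores, key=scores.get))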

  10. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

    Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high-power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net-shape parts. The structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance and guide the development of improved quality monitoring and control strategies.
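
    Identifying such a model typically starts from frequency response measurements. The sketch below estimates a frequency response function with the standard H1 estimator (the ratio of the input-output cross-spectrum to the input auto-spectrum); a toy one-mode resonator stands in for the actual welder dynamics, and the sample rate, resonance, and damping are assumed values.

      # H1 frequency response estimate from input/output records (toy plant).
      import numpy as np
      from scipy import signal

      fs = 200_000                                    # sample rate, Hz (assumed)
      t = np.arange(0, 0.25, 1 / fs)
      rng = np.random.default_rng(6)
      u = rng.normal(size=t.size)                     # broadband input (e.g., current)

      # One resonant mode near 20 kHz as a stand-in for the welder
      wn = 2 * np.pi * 20_000
      plant = signal.TransferFunction([wn**2], [1, 2 * 0.02 * wn, wn**2])
      _, y, _ = signal.lsim(plant, u, t)              # output (e.g., welder velocity)

      f, Puu = signal.welch(u, fs=fs, nperseg=4096)   # input auto-spectrum
      _, Puy = signal.csd(u, y, fs=fs, nperseg=4096)  # input-output cross-spectrum
      H1 = Puy / Puu                                  # H1 frequency response estimate
      print(f"peak response near {f[np.abs(H1).argmax()]:.0f} Hz")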

  11. Modeling process-structure-property relationships for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale multi-physics process modeling starts from the micro-scale to establish a mechanistic heat source model, proceeds to meso-scale models of individual powder particle evolution, and finally reaches the macro-scale model to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for rapid design and optimization, motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  12. Bayesian structured additive regression modeling of epidemic data: application to cholera

    PubMed Central

    2012-01-01

    Background A significant interest in spatial epidemiology lies in identifying risk factors that enhance the risk of infection. Most studies, however, make no, or only limited, use of the spatial structure of the data, as well as possible nonlinear effects of the risk factors. Methods We develop a Bayesian structured additive regression model for cholera epidemic data. Model estimation and inference are based on a fully Bayesian approach via Markov Chain Monte Carlo (MCMC) simulations. The model is applied to cholera epidemic data in the Kumasi Metropolis, Ghana. Proximity to refuse dumps, density of refuse dumps, and proximity to potential cholera reservoirs were modeled as continuous functions; presence of slum settlers and population density were modeled as fixed effects, whereas spatial references to the communities were modeled as structured and unstructured spatial effects. Results We observe that the risk of cholera is associated with slum settlements and high population density. The risk of cholera is uniform and lower for communities with fewer refuse dumps, but variable and higher for communities with more refuse dumps. The risk is also lower for communities distant from refuse dumps and potential cholera reservoirs. The results also indicate distinct spatial variation in the risk of cholera infection. Conclusion The study highlights the usefulness of Bayesian semi-parametric regression models for analyzing public health data. These findings could serve as novel information to help health planners and policy makers make effective decisions to control or prevent cholera epidemics. PMID:22866662

  13. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, and standard reduction potential of the redox couple, and to the interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to a numerical representation of the potential transients, and to an estimate of the influence of the diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
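
    For steady-state finite linear diffusion, the maximum sustainable shuttle current density follows i_max = n*F*D*C/d, so the permissible overcharge rate can be estimated directly from the couple's concentration and diffusion coefficient and the interelectrode distance. The numbers below are illustrative assumptions, not the paper's measured values.

      # Maximum overcharge current density for a redox shuttle, i = n*F*D*C/d.
      F = 96485          # Faraday constant, C/mol
      n = 1              # electrons transferred per shuttle molecule
      D = 2.0e-6         # diffusion coefficient, cm^2/s (assumed)
      C = 1.0e-4         # additive concentration, mol/cm^3 (i.e., 0.1 M, assumed)
      d = 0.05           # interelectrode distance, cm (assumed)

      i_max = n * F * D * C / d          # A/cm^2, diffusion-limited shuttle current
      print(f"max sustainable overcharge current density: {i_max*1e3:.2f} mA/cm^2")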

  14. Evaluating cardiovascular mortality in type 2 diabetes patients: an analysis based on competing risks Markov chains and additive regression models.

    PubMed

    Rosato, Rosalba; Ciccone, G; Bo, S; Pagano, G F; Merletti, F; Gregori, D

    2007-06-01

    Type 2 diabetes represents a condition significantly associated with increased cardiovascular mortality. The aims of the study are: (i) to estimate the cumulative incidence function for cause-specific mortality using the Cox and Aalen models; (ii) to describe how the predicted cardiovascular or other-cause mortality changes for patients with different covariate patterns; and (iii) to show whether different statistical methods may give different results. Cox and Aalen additive regression models, through a Markov chain approach, are used to estimate the cause-specific hazards of cardiovascular and other-cause mortality in a cohort of 2865 type 2 diabetic patients without insulin treatment. The models are compared in the estimation of the risk of death for patients of different severity. For younger patients with a better covariate profile, the cumulative incidence functions estimated by the Cox and Aalen models were almost the same; for patients with the worst covariate profile, the models gave different results: at the end of follow-up, the cardiovascular mortality rates estimated by the Cox and Aalen models were 0.26 [95% confidence interval (CI) = 0.21-0.31] and 0.14 (95% CI = 0.09-0.18), respectively. The standard Cox and Aalen models capture the risk process equally well for patients with average co-morbidity profiles. The Aalen model, in addition, is shown to be better at identifying the cause-specific risk of death for patients with more severe clinical profiles. This result is relevant to the development of analytic tools for research and resource management within diabetes care.
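
    Both model families are available off the shelf. A minimal sketch with the lifelines Python package is shown below, on synthetic single-cause survival data; the paper's competing-risks and Markov chain machinery is omitted, and all variables and effect sizes are illustrative.

      # Fitting Cox and Aalen additive hazards models to the same (synthetic) data.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter, AalenAdditiveFitter

      rng = np.random.default_rng(7)
      n = 500
      age = rng.normal(65, 8, n)
      hba1c = rng.normal(7.5, 1.2, n)
      rate = 0.02 * np.exp(0.03 * (age - 65) + 0.2 * (hba1c - 7.5))
      time = rng.exponential(1 / rate)
      event = time < 10                          # administrative censoring at 10 y
      df = pd.DataFrame({"time": np.minimum(time, 10),
                         "event": event.astype(int), "age": age, "hba1c": hba1c})

      cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      aalen = AalenAdditiveFitter().fit(df, duration_col="time", event_col="event")
      cox.print_summary()
      print(aalen.cumulative_hazards_.head())    # time-varying additive effects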

  15. Modeling of Ti-W Solidification Microstructures Under Additive Manufacturing Conditions

    NASA Astrophysics Data System (ADS)

    Rolchigo, Matthew R.; Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian; Collins, Peter C.; LeSar, Richard

    2017-07-01

    Additive manufacturing (AM) processes have many benefits for the fabrication of alloy parts, including the potential for greater microstructural control and targeted properties than traditional metallurgy processes. To accelerate utilization of this process to produce such parts, an effective computational modeling approach to identify the relationships between material and process parameters, microstructure, and part properties is essential. Development of such a model requires accounting for the many factors in play during this process, including laser absorption, material addition and melting, fluid flow, various modes of heat transport, and solidification. In this paper, we start with a more modest goal, to create a multiscale model for a specific AM process, Laser Engineered Net Shaping (LENS™), which couples a continuum-level description of a simplified beam melting problem (coupling heat absorption, heat transport, and fluid flow) with a Lattice Boltzmann-cellular automata (LB-CA) microscale model of combined fluid flow, solute transport, and solidification. We apply this model to a binary Ti-5.5 wt pct W alloy and compare calculated quantities, such as dendrite arm spacing, with experimental results reported in a companion paper.

  16. Statistical virtual eye model based on wavefront aberration

    PubMed Central

    Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie

    2012-01-01

    Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112

  17. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    PubMed Central

    Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.

    2012-01-01

    Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may contribute importantly to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relation matrices were constructed from information on genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct a dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variations, and assessed the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912

  18. Rule-based modeling with Virtual Cell

    PubMed Central

    Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.

    2016-01-01

    Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model becomes too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network-free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user's computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444

  19. Enhanced performance of ultracapacitors using redox additive-based electrolytes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmendra; Kanungo, Jitendra; Tripathi, S. K.

    2018-05-01

    Different concentrations of potassium iodide (KI) as a redox additive were added to a 1 M sulfuric acid (H2SO4) electrolyte with the aim of enhancing the capacitance and energy density of ultracapacitors via redox reactions at the electrode-electrolyte interfaces. Ultracapacitors were fabricated using chemically treated activated carbon as the electrode with H2SO4 and H2SO4-KI as electrolytes. The electrochemical performance of the fabricated supercapacitors was investigated by impedance spectroscopy, cyclic voltammetry and charge-discharge techniques. The maximum capacitance C was observed for the redox additive-based electrolyte system comprising 1 M H2SO4-0.3 M KI (1072 F g⁻¹), much higher than that of ultracapacitors based on the conventional 1 M H2SO4 aqueous electrolyte (61.3 F g⁻¹). This corresponds to an energy density of 20.49 Wh kg⁻¹ at 2.1 A g⁻¹ for the redox additive-based electrolyte, six times higher than that of the pristine electrolyte (1 M H2SO4), whose energy density is only 3.36 Wh kg⁻¹. The temperature dependence of the fabricated cell was also analyzed; its capacitance increased over the temperature range of 5-70 °C. Under cyclic stability testing, the redox electrolyte-based system showed almost 100% capacitance retention up to 5000 cycles and beyond. For comparison, ultracapacitors based on a polymer gel electrolyte, polyvinyl alcohol (PVA) (10 wt%)-{H2SO4 (1 M)-KI (0.3 M)} (90 wt%), were fabricated and characterized with the same electrode materials.
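
    From galvanostatic charge-discharge data, specific capacitance and energy density follow C = I*dt/dV (with I per unit mass) and E = 0.5*C*V^2 converted to Wh/kg. The worked numbers below are illustrative only; mass conventions (per electrode versus per cell) differ between studies and affect the reported values.

      # Specific capacitance and energy density from a galvanostatic discharge.
      I_g = 1.0     # discharge current density, A/g (assumed)
      dt = 100.0    # discharge time, s (assumed)
      dV = 1.0      # voltage window net of IR drop, V (assumed)

      C = I_g * dt / dV                 # specific capacitance, F/g
      E = 0.5 * C * dV**2 / 3.6         # energy density, Wh/kg (J/g -> Wh/kg)
      print(f"C = {C:.0f} F/g, E = {E:.1f} Wh/kg")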

  20. Materials Testing and Cost Modeling for Composite Parts Through Additive Manufacturing

    DTIC Science & Technology

    2016-04-30

    Alternative names for fused deposition modeling (FDM, a process trademarked by Stratasys) include plastic jet printing (PJP), fused filament modeling (FFM), and fused filament fabrication (FFF); the term FFF was coined by the RepRap project.

  1. Flood loss model transfer: on the value of additional data

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno

    2017-04-01

    The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage-influencing factors is expensive, and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings, available from computer-aided telephone interviews carried out after the floods of 2002, 2005, 2006, 2010, 2011 and 2013, mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data, differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model using Random Forests and (ii) a uni-variable stage-damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. The predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision; and hit rate is an indicator of model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data for improving model predictive performance and increasing model reliability, particularly in a spatial-temporal transfer setting.
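
    The two model families can be contrasted on synthetic data as in the sketch below: a uni-variable stage-damage polynomial in water depth versus a multi-variable Random Forest, with a bootstrap over the training set indicating prediction uncertainty. Variables, data, and the damage function form are illustrative assumptions.

      # Stage-damage function vs. Random Forest with bootstrap uncertainty.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(8)
      n = 400
      depth = rng.uniform(0, 3, n)               # water depth, m
      area = rng.uniform(80, 250, n)             # building footprint, m^2
      loss = 0.15 * np.sqrt(depth) * area * rng.lognormal(0, 0.3, n)

      X = np.column_stack([depth, area])
      x_new = np.array([[1.5, 120.0]])           # a building to predict for

      # (i) uni-variable stage-damage function: loss as a polynomial in depth
      sd_coef = np.polyfit(depth, loss, deg=2)
      print("stage-damage estimate:", np.polyval(sd_coef, x_new[0, 0]))

      # (ii) multi-variable Random Forest, bootstrapped over the training data
      preds = []
      for _ in range(50):
          idx = rng.integers(0, n, n)            # bootstrap resample
          rf = RandomForestRegressor(n_estimators=100, random_state=0)
          preds.append(rf.fit(X[idx], loss[idx]).predict(x_new)[0])
      print("RF estimate:", np.mean(preds), "+/-", np.std(preds))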

  2. Business model for sensor-based fall recognition systems.

    PubMed

    Fachinger, Uwe; Schöpke, Birte

    2014-01-01

    AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed in parallel with the R&D process, with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given, and the mutually exclusive alternatives for each partial model are presented. The partial models are interconnected, and the combinations of compatible alternatives lead to consistent alternative business models. However, at the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.

  3. A habitat suitability model for Chinese sturgeon determined using the generalized additive method

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong

    2016-03-01

    The Chinese sturgeon is a large anadromous fish that migrates between the ocean and rivers. Because of dam construction, its migration path has been cut off, and the species is currently on the verge of extinction. Simulating suitable environmental conditions for spawning, followed by repairing or rebuilding its spawning grounds, is an effective way to protect this species. Various habitat suitability models based on expert knowledge have been used to evaluate the suitability of spawning habitat. In this study, a two-dimensional hydraulic simulation is used to inform a habitat suitability model based on the generalized additive method (GAM). The GAM is based on real data. The values of water depth and velocity are calculated first via the hydrodynamic model and later applied in the GAM. The final habitat suitability model is validated using the catch per unit effort (CPUE) data of 1999 and 2003. The model results show that a velocity of 1.06-1.56 m/s and a depth of 13.33-20.33 m are highly suitable ranges for the Chinese sturgeon to spawn. The hydraulic habitat suitability indexes (HHSI) for seven discharges (4000; 9000; 12,000; 16,000; 20,000; 30,000; and 40,000 m3/s) are calculated to evaluate integrated habitat suitability. The results show that the integrated habitat suitability reaches its highest value at a discharge of 16,000 m3/s. This study is the first to apply a GAM to evaluate the suitability of spawning grounds for the Chinese sturgeon. The study provides a reference for the identification of potential spawning grounds in the entire basin.
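
    A toy version of the final index computation is sketched below: per-cell suitabilities for depth and velocity, centered on the reported highly suitable ranges, are combined into an HHSI. The tapering suitability curves and the geometric-mean combination rule are assumptions for illustration, not the paper's exact formulation.

      # Toy hydraulic habitat suitability index over a simulated reach.
      import numpy as np

      rng = np.random.default_rng(9)
      depth = rng.uniform(5, 30, (50, 50))       # cell depths from hydraulics, m
      vel = rng.uniform(0.2, 2.5, (50, 50))      # cell velocities, m/s

      def suitability(x, lo, hi):
          # 1 inside [lo, hi], tapering linearly to 0 at lo/2 and 1.5*hi
          return np.clip(np.minimum((x - 0.5 * lo) / (0.5 * lo),
                                    (1.5 * hi - x) / (0.5 * hi)), 0, 1)

      s_depth = suitability(depth, 13.33, 20.33)   # reported depth range, m
      s_vel = suitability(vel, 1.06, 1.56)         # reported velocity range, m/s
      hhsi = np.sqrt(s_depth * s_vel)              # geometric-mean combination
      print("mean HHSI over the reach:", round(hhsi.mean(), 3))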

  4. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how set model can help students' understanding of addition of integers in fourth grade. The study has been carried out to 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  5. Modeling the flux of metabolites in the juvenile hormone biosynthesis pathway using generalized additive models and ordinary differential equations.

    PubMed

    Martínez-Rincón, Raúl O; Rivera-Pérez, Crisalejandra; Diambra, Luis; Noriega, Fernando G

    2017-01-01

    Juvenile hormone (JH) regulates development and reproductive maturation in insects. The corpora allata (CA) of female adult mosquitoes synthesize fluctuating levels of JH, which have been linked to ovarian development and are influenced by nutritional signals. The rate of JH biosynthesis is controlled by the rate of flux of isoprenoids in the pathway, which is the outcome of a complex interplay of changes in precursor pools and enzyme levels. A comprehensive study of the changes in enzymatic activities and precursor pool sizes has been previously reported for the JH biosynthesis pathway of the mosquito Aedes aegypti. In the present study, we used two different quantitative approaches to describe and predict how changes in the individual metabolic reactions in the pathway affect JH synthesis. First, we constructed generalized additive models (GAMs) that described the association between changes in specific metabolite concentrations and changes in enzymatic activities and substrate concentrations. Changes in substrate concentrations explained 50% or more of the model deviances in 7 of the 13 metabolic steps analyzed. Adding information on enzymatic activities almost always improved the fit of GAMs built solely on substrate concentrations. The GAMs were validated using experimental data that were not included when the models were built. In addition, a system of ordinary differential equations (ODEs) was developed to describe the instantaneous changes in metabolites as a function of the levels of enzymatic catalytic activities. The results demonstrate the ability of the models to predict changes in the flux of metabolites in the JH pathway, and they can be used in the future to design and validate experimental manipulations of JH synthesis.
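
    The ODE part of such a model can be as small as a two-step chain. The sketch below integrates hypothetical precursor and JH pools with scipy; the rate constants, supply term, and species are illustrative stand-ins, not the measured pathway.

      # Toy two-step pathway ODE: precursor -> JH, with first-order kinetics.
      import numpy as np
      from scipy.integrate import solve_ivp

      def pathway(t, y, k1, k2, supply):
          precursor, jh = y
          dprecursor = supply - k1 * precursor     # synthesis minus consumption
          djh = k1 * precursor - k2 * jh           # production minus turnover
          return [dprecursor, djh]

      sol = solve_ivp(pathway, (0, 24), [1.0, 0.0],
                      args=(0.8, 0.3, 0.5), dense_output=True)
      t = np.linspace(0, 24, 5)
      print(np.round(sol.sol(t)[1], 3))            # JH titer over 24 h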

  6. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals, and an infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge on single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First, we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then, the single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. The prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for the effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that no single chemical alone was responsible for the mixture effects. In conclusion, the GCA model seemed superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be predicted.
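
    The CA and IA predictions can be computed directly from single-chemical Hill curves: CA solves sum_i c_i / ECx_i = 1 for the mixture effect x, while IA multiplies the single-chemical non-effects. A sketch with illustrative parameters follows; GCA, which extends CA to partial agonists, is omitted here.

      # CA and IA mixture predictions for two Hill-type chemicals (toy parameters).
      import numpy as np
      from scipy.optimize import brentq

      ec50 = np.array([1.0, 5.0])     # single-chemical EC50s (uM, assumed)
      hill = np.array([1.5, 1.0])     # Hill slopes (assumed)
      frac = np.array([0.5, 0.5])     # mixture ratio

      def effect(c, ec, h):
          return c**h / (c**h + ec**h)             # fractional effect in (0, 1)

      def ca_effect(c_total):
          # CA: find x with sum(c_i / ECx_i) = 1, ECx_i = ec50_i*(x/(1-x))**(1/h_i)
          c = frac * c_total
          def g(x):
              ecx = ec50 * (x / (1 - x)) ** (1 / hill)
              return np.sum(c / ecx) - 1
          return brentq(g, 1e-9, 1 - 1e-9)

      def ia_effect(c_total):
          c = frac * c_total
          return 1 - np.prod(1 - effect(c, ec50, hill))

      for ct in (0.5, 2.0, 8.0):                   # total mixture concentrations
          print(ct, round(ca_effect(ct), 3), round(ia_effect(ct), 3))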

  7. Modelling of additive manufacturing processes: a review and classification

    NASA Astrophysics Data System (ADS)

    Stavropoulos, Panagiotis; Foteinopoulos, Panagis

    2018-03-01

    Additive manufacturing (AM) is a very promising technology; however, there are a number of open issues related to the different AM processes. The literature on modelling the existing AM processes is reviewed and classified. A categorization of the different AM processes into process groups, according to the process mechanism, has been conducted and the most important issues are stated. Suggestions are made as to which approach is more appropriate according to the key performance indicator to be modelled, and a discussion is included of how future modelling work can better contribute to improving today's understanding of AM processes.

  8. Life prediction modeling based on cyclic damage accumulation

    NASA Technical Reports Server (NTRS)

    Nelson, Richard S.

    1988-01-01

    A high temperature, low cycle fatigue life prediction method was developed. This method, Cyclic Damage Accumulation (CDA), was developed for use in predicting the crack initiation lifetime of gas turbine engine materials, where initiation was defined as a 0.030 inch surface length crack. A principal engineering feature of the CDA method is the minimal database required for implementation. Model constants can be evaluated through a few simple specimen tests such as monotonic loading and rapid cycle fatigue. The method was expanded to account for the effects on creep-fatigue life of complex loadings such as thermomechanical fatigue, hold periods, waveshapes, mean stresses, multiaxiality, cumulative damage, coatings, and environmental attack. A significant database was generated on the behavior of the cast nickel-base superalloy B1900+Hf, including hundreds of specimen tests under such loading conditions. This information is being used to refine and extend the CDA life prediction model, which is now nearing completion. The model is also being verified using additional specimen tests on wrought INCO 718, and the final version of the model is expected to be adaptable to most any high-temperature alloy. The model is currently available in the form of equations and related constants. A proposed contract addition will make the model available in the near future in the form of a computer code to potential users.

  9. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houck, F.; Rosenthal, M.; Wulf, N.

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  10. Base Stabilization Guidance and Additive Selection for Pavement Design and Rehabilitation

    DOT National Transportation Integrated Search

    2017-12-01

    Significant improvements have been made in base stabilization practice that include design specifications and methodology, experience with the selection of stabilizing additives, and equipment for distribution and uniform blending of additives. For t...

  11. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in a concrete classroom context. Third, the approach is applied in comparison with other analytical models to demonstrate the explanatory power and depth of the model-based perspective. Primarily, Toulmin's framework for the structural analysis of arguments is contrasted with the approach presented here, and it is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence is analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.

  12. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  13. A demonstrative model of a lunar base simulation on a personal computer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. The demonstration model was built with Lotus Symphony Version 1.1 software on a personal computer running MS-DOS. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.

  14. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
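
    One common construction of a dominance relationship matrix from SNPs is sketched below using a Vitezica-style coding, in which genotypes 2/1/0 are recoded as -2q^2, 2pq and -2p^2 per marker and scaled by the sum of (2pq)^2; the paper develops its own detailed derivation, so this is only a representative variant with simulated data.

      # Dominance genomic relationship matrix from 0/1/2 genotypes (one variant).
      import numpy as np

      rng = np.random.default_rng(10)
      n_ind, n_snp = 80, 500
      freq = rng.uniform(0.1, 0.9, n_snp)            # true allele frequencies
      M = rng.binomial(2, freq, size=(n_ind, n_snp)) # genotypes coded 0/1/2

      p = M.mean(axis=0) / 2                         # observed allele frequencies
      q = 1 - p

      # Dominance deviation coding: M=2 -> -2q^2, M=1 -> 2pq, M=0 -> -2p^2
      W = np.where(M == 2, -2 * q**2,
                   np.where(M == 1, 2 * p * q, -2 * p**2))

      D = W @ W.T / np.sum((2 * p * q) ** 2)         # dominance relationships
      print(D.shape, D.diagonal().mean())            # diagonal averages near 1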

  15. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.

  16. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy

    PubMed Central

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-01-01

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework. PMID:28772504
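
    A hedged sketch of the overlapping-spheres idea on a voxelized toy pore (a two-lobed shape invented for illustration): spheres are placed greedily at interior voxels, with radii set by the distance to the pore boundary, which is one simple way to realize the strategy the abstract describes.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # voxelized toy pore: union of two overlapping ellipsoids ("dumbbell" pore)
    n = 64
    x, y, z = np.meshgrid(*(np.linspace(-1, 1, n),) * 3, indexing="ij")
    pore = (((x + 0.3) / 0.5) ** 2 + (y / 0.3) ** 2 + (z / 0.3) ** 2 <= 1) | \
           (((x - 0.3) / 0.5) ** 2 + (y / 0.25) ** 2 + (z / 0.25) ** 2 <= 1)

    dist = distance_transform_edt(pore)      # distance to pore boundary (voxels)
    covered = np.zeros_like(pore, bool)
    spheres = []
    for _ in range(40):                      # greedy: largest uncovered inscribed sphere
        gain = np.where(covered, 0.0, dist)
        i, j, k = np.unravel_index(np.argmax(gain), gain.shape)
        r = dist[i, j, k]
        if r < 1:                            # remaining features below voxel scale
            break
        spheres.append(((i, j, k), r))
        ii, jj, kk = np.ogrid[:n, :n, :n]
        covered |= (ii - i) ** 2 + (jj - j) ** 2 + (kk - k) ** 2 <= r ** 2

    print(f"{len(spheres)} spheres cover {covered[pore].mean():.1%} of the pore volume")
    ```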

  17. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy.

    PubMed

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-02-08

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework.

  18. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
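
    A compressed illustration of the two-stage idea (smooth each trajectory, differentiate the smoother, then screen candidate regulators by a marginal fit), not the authors' full four-step pipeline; the toy network, noise level, and the cubic polynomial standing in for the nonparametric marginal fit are all assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 60)
    dt = t[1] - t[0]
    p = 30                                       # number of genes
    X = np.empty((p, t.size))
    X[0] = np.sin(t)
    for j in range(1, p):                        # gene j driven nonlinearly by gene j-1
        X[j] = np.tanh(np.cumsum(np.sin(X[j - 1])) * dt)
    X += 0.05 * rng.standard_normal(X.shape)     # measurement noise

    # stage 1: smooth each trajectory and differentiate the smoother
    splines = [UnivariateSpline(t, X[j], s=t.size * 0.05**2) for j in range(p)]
    Xs = np.array([s(t) for s in splines])
    dX = np.array([s.derivative()(t) for s in splines])

    # stage 2: marginal screening for one target gene; a low-order polynomial
    # stands in for the 1-D nonparametric fit
    target = 2
    scores = []
    for k in range(p):
        coef = np.polyfit(Xs[k], dX[target], 3)
        resid = dX[target] - np.polyval(coef, Xs[k])
        scores.append(1.0 - resid.var() / dX[target].var())
    top = np.argsort(scores)[::-1][:5]
    print("top-ranked candidate regulators of gene 2:", top)  # gene 1 is the true driver
    ```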

  19. A technology path to tactical agent-based modeling

    NASA Astrophysics Data System (ADS)

    James, Alex; Hanratty, Timothy P.

    2017-05-01

    Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its additional supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.
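
    A minimal agent-based skirmish sketch, invented for illustration: each agent senses its own situation and decides independently (close the distance or engage), and force-level attrition emerges from the interactions rather than from any system-level equation.

    ```python
    import random

    random.seed(0)
    blue = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(30)]
    red  = [[random.uniform(40, 50), random.uniform(0, 10)] for _ in range(30)]

    def step(attackers, defenders, speed=1.0, rng_range=3.0, p_kill=0.1):
        for a in attackers:                   # decentralized decisions:
            tx, ty = min(defenders,           # each agent picks its own target
                         key=lambda d: (d[0]-a[0])**2 + (d[1]-a[1])**2)
            dx, dy = tx - a[0], ty - a[1]
            dist = (dx*dx + dy*dy) ** 0.5
            if dist > rng_range:              # close the distance...
                a[0] += speed * dx / dist
                a[1] += speed * dy / dist
            elif random.random() < p_kill:    # ...or engage
                defenders.remove([tx, ty])
                if not defenders:
                    return

    t = 0
    while blue and red and t < 200:
        step(blue, red)
        if red:
            step(red, blue)
        t += 1
        if t % 25 == 0:
            print(f"t={t:3d}  blue={len(blue):2d}  red={len(red):2d}")
    ```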

  20. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
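
    A classical Rosenthal moving point-source solution is often the first, fast model of an SLM melt pool before committing to full simulations. The sketch below uses rough Ti-6Al-4V-like values chosen for illustration (not this project's parameters); the point-source idealization is known to exaggerate the trailing tail of the pool.

    ```python
    import numpy as np

    T0, T_melt = 353.0, 1923.0   # preheat and approximate liquidus, K
    Q = 50.0                     # absorbed laser power, W (illustrative)
    v = 1.0                      # scan speed, m/s
    k = 7.0                      # thermal conductivity, W/(m K)
    alpha = 3.0e-6               # thermal diffusivity, m^2/s

    def rosenthal(xi, y, z):
        """Quasi-steady temperature; xi > 0 is ahead of the moving source."""
        R = np.maximum(np.sqrt(xi**2 + y**2 + z**2), 1e-7)
        return T0 + Q / (2 * np.pi * k * R) * np.exp(-v * (R + xi) / (2 * alpha))

    # melt-pool length along the scan line on the surface
    xi = np.linspace(-2e-3, 2e-4, 4000)
    molten = rosenthal(xi, 0.0, 0.0) >= T_melt
    print(f"estimated melt pool length: "
          f"{1e6 * (xi[molten].max() - xi[molten].min()):.0f} um")
    ```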

  1. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    USGS Publications Warehouse

    Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success.
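
    A hedged sketch of the kind of comparison described: stochastic gradient boosting versus an additive logistic model (a simple spline-basis stand-in for a full GAM), both scored by AUC on held-out data. The data are synthetic, and See5 itself is proprietary and not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    elev, precip = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    logit = 3 * np.sin(2 * np.pi * elev) + 2 * (precip - 0.5)   # smooth, nonlinear
    y = rng.random(n) < 1 / (1 + np.exp(-logit))                # species presence
    X = np.column_stack([elev, precip])

    def tp_basis(x, knots):                  # truncated power-spline basis
        return np.column_stack([x] + [np.clip(x - k, 0, None)**3 for k in knots])

    knots = np.linspace(0.1, 0.9, 7)
    Xg = np.column_stack([tp_basis(X[:, j], knots) for j in range(2)])

    Xtr, Xte, Xgtr, Xgte, ytr, yte = train_test_split(X, Xg, y, random_state=1)

    sgb = GradientBoostingClassifier(subsample=0.5, random_state=1).fit(Xtr, ytr)
    gam = LogisticRegression(max_iter=2000).fit(Xgtr, ytr)       # GAM stand-in

    print("SGB  AUC:", round(roc_auc_score(yte, sgb.predict_proba(Xte)[:, 1]), 3))
    print("GAM* AUC:", round(roc_auc_score(yte, gam.predict_proba(Xgte)[:, 1]), 3))
    ```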

  2. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  3. Models for Delivering School-Based Dental Care.

    ERIC Educational Resources Information Center

    Albert, David A.; McManus, Joseph M.; Mitchell, Dennis A.

    2005-01-01

    School-based health centers (SBHCs) often are located in high-need schools and communities. Dental service is frequently an addition to existing comprehensive services, functioning in a variety of models, configurations, and locations. SBHCs are indicated when parents have limited financial resources or inadequate health insurance, limiting…

  4. Absence of dynamic strain aging in an additively manufactured nickel-base superalloy.

    PubMed

    Beese, Allison M; Wang, Zhuqing; Stoica, Alexandru D; Ma, Dong

    2018-05-25

    Dynamic strain aging (DSA), observed macroscopically as serrated plastic flow, has long been seen in nickel-base superalloys when plastically deformed at elevated temperatures. Here we report the absence of DSA in Inconel 625 made by additive manufacturing (AM) at temperatures and strain rates where DSA is present in its conventionally processed counterpart. This absence is attributed to the unique AM microstructure of finely dispersed secondary phases (carbides, N-rich phases, and Laves phase) and textured grains. Based on experimental observations, we propose a dislocation-arrest model to elucidate the criterion for DSA to occur or to be absent as a competition between dislocation pipe diffusion and carbide-carbon reactions. With in situ neutron diffraction studies of lattice strain evolution, our findings provide a new perspective for mesoscale understanding of dislocation-solute interactions and their impact on work-hardening behaviors in high-temperature alloys, and have important implications for tailoring thermomechanical properties by microstructure control via AM.

  5. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    PubMed

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We examine the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions.
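
    A hedged sketch of the guided-estimation idea for a single covariate with a Gaussian outcome: fit a parametric guide, smooth the de-trended residuals nonparametrically, then add the guide back. The quadratic guide family and the kernel smoother are illustrative choices, not the authors' exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    x = np.sort(rng.uniform(-2, 2, n))
    f_true = x**2 + 0.4 * np.sin(3 * x)              # mostly quadratic + wiggle
    y = f_true + 0.3 * rng.standard_normal(n)

    # step 1: parametric guide from prior information (here: quadratic shape)
    theta = np.polyfit(x, y, 2)
    guide = np.polyval(theta, x)

    # step 2: kernel-smooth the de-trended residuals (Nadaraya-Watson)
    def nw_smooth(x0, x, r, h=0.25):
        w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h)**2)
        return (w @ r) / w.sum(axis=1)

    remainder = nw_smooth(x, x, y - guide)

    # step 3: final guided estimate = guide + smoothed remainder
    f_hat = guide + remainder
    unguided = nw_smooth(x, x, y)
    for est, name in [(f_hat, "guided"), (unguided, "unguided")]:
        print(f"{name:8s} MSE vs truth: {np.mean((est - f_true)**2):.4f}")
    ```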

  6. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We examine the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions. PMID:23645976

  7. Colloidal-based additive manufacturing of bio-inspired composites

    NASA Astrophysics Data System (ADS)

    Studart, Andre R.

    Composite materials in nature exhibit heterogeneous architectures that are tuned to fulfill the functional demands of the surrounding environment. Examples range from the cellulose-based organic structure of plants to highly mineralized collagen-based skeletal parts like bone and teeth. Because they are often utilized to combine opposing properties such as strength and low-density or stiffness and wear resistance, the heterogeneous architecture of natural materials can potentially address several of the technical limitations of artificial homogeneous composites. However, current man-made manufacturing technologies do not allow for the level of composition and fiber orientation control found in natural heterogeneous systems. In this talk, I will present two additive manufacturing technologies recently developed in our group to build composites with exquisite architectures only rivaled by structures made by living organisms in nature. Since the proposed techniques utilize colloidal suspensions as feedstock, understanding the physics underlying the stability, assembly and rheology of the printing inks is key to predict and control the architecture of manufactured parts. Our results will show that additive manufacturing routes offer a new exciting pathway for the fabrication of biologically-inspired composite materials with unprecedented architectures and functionalities.

  8. PMMA denture base material enhancement: a review of fiber, filler, and nanofiller addition

    PubMed Central

    Gad, Mohammed M; Fouda, Shaimaa M; Al-Harbi, Fahad A; Näpänkangas, Ritva; Raustia, Aune

    2017-01-01

    This paper reviews acrylic denture base resin enhancement during the past few decades. Specific attention is given to the effect of fiber, filler, and nanofiller addition on poly(methyl methacrylate) (PMMA) properties. The review is based on scientific reviews, papers, and abstracts, as well as studies concerning the effect of additives, fibers, fillers, and reinforcement materials on PMMA, published between 1974 and 2016. Many studies have reported improvement of PMMA denture base material with the addition of fillers, fibers, nanofiller, and hybrid reinforcement. However, most of the studies were limited to in vitro investigations without bioactivity and clinical implications. Considering the findings of the review, there is no ideal denture base material, but the properties of PMMA could be improved with some modifications, especially with silanized nanoparticle addition and a hybrid reinforcement system. PMID:28553115

  9. A critical issue in model-based inference for studying trait-based community assembly and a solution.

    PubMed

    Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane

    2017-01-01

    Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set as 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait

  10. Predicting the occurrence of wildfires with binary structured additive regression models.

    PubMed

    Ríos-Pena, Laura; Kneib, Thomas; Cadarso-Suárez, Carmen; Marey-Pérez, Manuel

    2017-02-01

    Wildfires are one of the main environmental problems facing societies today, and in the case of Galicia (north-west Spain), they are the main cause of forest destruction. This paper used binary structured additive regression (STAR) for modelling the occurrence of wildfires in Galicia. Binary STAR models are a recent extension of classical logistic regression and binary generalized additive models. Their main advantage lies in their flexibility for modelling non-linear effects, while simultaneously incorporating spatial and temporal variables directly, thereby making it possible to reveal possible relationships among the variables considered. The results showed that the occurrence of wildfires depends on many covariates which display variable behaviour across space and time, and which largely determine the likelihood of ignition of a fire. The possibility of working at a spatial resolution of 1 × 1 km cells and of mapping predictions on a colour scale makes STAR models a useful tool for plotting and predicting wildfire occurrence. Lastly, it will facilitate the development of fire behaviour models, which can be invaluable when it comes to drawing up fire-prevention and firefighting plans. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.

    PubMed

    Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y

    2018-06-07

    The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases is getting more attention during the past decade, but related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but suffers from a large computation burden. For large-scale studies, it is necessary to use a powerful and computation-efficient method. The objective of this study is to develop an evidence-based mini-version of SIPI as a screening tool or for solitary use and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant and recessive). The AA9int approach is composed of nine interaction models by considering non-hierarchical model structure and the additive mode. Our simulation results show that AA9int has similar statistical power compared to SIPI and is superior to the Five-Full approach, and the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, AA9int is recommended as a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing versions) have been added to the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.
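
    A hedged sketch of the underlying idea (not the SIPI/AA9int package itself): code both SNPs additively (0/1/2) and enumerate candidate models, including non-hierarchical ones that keep an interaction term without its main effects, comparing fits by BIC. The enumeration below covers the seven non-empty subsets of {g1, g2, g1xg2}, an illustrative stand-in for the paper's nine-model set.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n = 1500
    g1, g2 = rng.integers(0, 3, n), rng.integers(0, 3, n)
    eta = -1.0 + 0.6 * (g1 * g2)              # pure interaction, no main effects
    y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

    terms = {"g1": g1, "g2": g2, "g1xg2": g1 * g2}

    def max_loglik(X, y):
        # logistic negative log-likelihood, numerically stable
        nll = lambda b: np.sum(np.logaddexp(0, X @ b) - y * (X @ b))
        res = minimize(nll, np.zeros(X.shape[1]), method="BFGS")
        return -res.fun

    results = []
    for k in (1, 2, 3):
        for subset in combinations(terms, k):
            X = np.column_stack([np.ones(n)] + [terms[t] for t in subset])
            llf = max_loglik(X, y)
            bic = -2 * llf + X.shape[1] * np.log(n)
            results.append((bic, "+".join(subset)))

    for bic, model in sorted(results)[:3]:    # best (lowest-BIC) models first
        print(f"BIC {bic:8.1f}  model: {model}")
    ```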

  12. Prevalence of Phosphorus-Based Additives in the Australian Food Supply: A Challenge for Dietary Education?

    PubMed

    McCutcheon, Jemma; Campbell, Katrina; Ferguson, Maree; Day, Sarah; Rossi, Megan

    2015-09-01

    Phosphorus-based food additives may pose a significant risk in chronic kidney disease given the link between hyperphosphatemia and cardiovascular disease. The objective of the study was to determine the prevalence of phosphorus-based food additives in best-selling processed grocery products and to establish how they were reported on food labels. A data set of 3000 best-selling grocery items in Australia across 15 food and beverage categories, produced by the Nielsen Company's Homescan database, was obtained for the 12 months ending December 2013. The nutrition labels of the products were reviewed in store for phosphorus additives. The type of additive, total number of additives, and method of reporting (written out in words or as an E number) were recorded. The main outcome measures were the presence of phosphorus-based food additives, the number of phosphorus-based food additives per product, and the reporting method of additives on product ingredient lists. Phosphorus-based additives were identified in 44% of food and beverages reviewed. Additives were particularly common in the categories of small goods (96%), bakery goods (93%), frozen meals (75%), prepared foods (70%), and biscuits (65%). A total of 19 different phosphorus additives were identified across the reviewed products. Among the items containing phosphorus additives, there was a median (minimum-maximum) of 2 (1-7) additives per product. Reporting additives by E number (81%) was the most common method. Phosphorus-based food additives are common in the Australian food supply. This suggests that prioritizing phosphorus additive education may be an important strategy in the dietary management of hyperphosphatemia. Further research to establish a database of food items containing phosphorus-based additives is warranted. Copyright © 2015 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  13. Formation and reduction of carcinogenic furan in various model systems containing food additives.

    PubMed

    Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun

    2015-12-15

    The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5M glucose and ribose), amino acids (0.5M alanine and serine) and/or 1.0M ascorbic acid were heated at 121°C for 25 min. The effects of food additives (each 0.1M) such as metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA), and sodium sulphite on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the model systems of glucose/serine and glucose/alanine increased 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives except zinc sulphate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  15. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex

    PubMed Central

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  16. Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.

    PubMed

    Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard

    2014-01-01

    To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate since graphs representing binding sites on a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used building upon so-called pseudocenters. Consistently, a loss of structural data is caused since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which lead to a gain of information and allow for much faster but still very accurate comparisons between different structures.
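
    A hedged sketch of the modeling idea: binding sites as small graphs whose nodes are pseudocenters, where enriching node labels with surface descriptors prunes spurious matches without growing the graph. The sketch uses networkx subgraph/graph matching with a node-compatibility function; the two toy "sites" and their labels are invented.

    ```python
    import networkx as nx
    from networkx.algorithms import isomorphism as iso

    def site(edges, labels):
        g = nx.Graph(edges)
        nx.set_node_attributes(g, labels, "props")
        return g

    # same pseudocenter types; only a surface descriptor ("convex"/"flat") differs
    a = site([(0, 1), (1, 2), (2, 0)],
             {0: ("donor", "convex"), 1: ("acceptor", "flat"), 2: ("pi", "flat")})
    b = site([(0, 1), (1, 2), (2, 0)],
             {0: ("donor", "flat"), 1: ("acceptor", "flat"), 2: ("pi", "flat")})

    coarse = lambda n1, n2: n1["props"][0] == n2["props"][0]   # type only
    rich   = lambda n1, n2: n1["props"] == n2["props"]         # type + surface

    for name, match in [("type-only labels", coarse), ("enriched labels", rich)]:
        gm = iso.GraphMatcher(a, b, node_match=match)
        print(f"{name}: match = {gm.is_isomorphic()}")
    ```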

  17. Additive manufacturing of three-dimensional (3D) microfluidic-based microelectromechanical systems (MEMS) for acoustofluidic applications.

    PubMed

    Cesewski, Ellen; Haring, Alexander P; Tong, Yuxin; Singh, Manjot; Thakur, Rajan; Laheri, Sahil; Read, Kaitlin A; Powell, Michael D; Oestreich, Kenneth J; Johnson, Blake N

    2018-06-13

    Three-dimensional (3D) printing now enables the fabrication of 3D structural electronics and microfluidics. Further, conventional subtractive manufacturing processes for microelectromechanical systems (MEMS) largely limit device structures to two dimensions and require post-processing steps for interfacing with microfluidics. Thus, the objective of this work is to create an additive manufacturing approach for fabrication of 3D microfluidic-based MEMS devices that enables 3D configurations of electromechanical systems and simultaneous integration of microfluidics. Here, we demonstrate the ability to fabricate microfluidic-based acoustofluidic devices that contain orthogonal out-of-plane piezoelectric sensors and actuators using additive manufacturing. The devices were fabricated using a microextrusion 3D printing system that contained integrated pick-and-place functionality. Additively assembled materials and components included 3D printed epoxy, polydimethylsiloxane (PDMS), silver nanoparticles, and eutectic gallium-indium as well as robotically embedded piezoelectric chips (lead zirconate titanate (PZT)). Electrical impedance spectroscopy and finite element modeling studies showed the embedded PZT chips exhibited multiple resonant modes of varying mode shape over the 0-20 MHz frequency range. Flow visualization studies using neutrally buoyant particles (diameter = 0.8-70 μm) confirmed the 3D printed devices generated bulk acoustic waves (BAWs) capable of size-selective manipulation, trapping, and separation of suspended particles in droplets and microchannels. Flow visualization studies in a continuous flow format showed suspended particles could be moved toward or away from the walls of microfluidic channels based on selective actuation of in-plane or out-of-plane PZT chips. This work suggests additive manufacturing potentially provides new opportunities for the design and fabrication of acoustofluidic and microfluidic devices.

  18. Doubly Robust Additive Hazards Models to Estimate Effects of a Continuous Exposure on Survival.

    PubMed

    Wang, Yan; Lee, Mihye; Liu, Pengfei; Shi, Liuhua; Yu, Zhi; Abu Awad, Yara; Zanobetti, Antonella; Schwartz, Joel D

    2017-11-01

    The effect of an exposure on survival can be biased when the regression model is misspecified. Hazard difference is easier to use in risk assessment than hazard ratio and has a clearer interpretation in the assessment of effect modifications. We proposed two doubly robust additive hazards models to estimate the causal hazard difference of a continuous exposure on survival. The first model is an inverse probability-weighted additive hazards regression. The second model is an extension of the doubly robust estimator for binary exposures by categorizing the continuous exposure. We compared these with the marginal structural model and outcome regression with correct and incorrect model specifications using simulations. We applied doubly robust additive hazard models to the estimation of hazard difference of long-term exposure to PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 microns) on survival using a large cohort of 13 million older adults residing in seven states of the Southeastern United States. We showed that the proposed approaches are doubly robust. We found that each 1 μg/m³ increase in annual PM2.5 exposure was associated with a causal hazard difference in mortality of 8.0 × 10 (95% confidence interval 7.4 × 10, 8.7 × 10), which was modified by age, medical history, socioeconomic status, and urbanicity. The overall hazard difference translates to approximately 5.5 (5.1, 6.0) thousand deaths per year in the study population. The proposed approaches improve the robustness of the additive hazards model and produce a novel additive causal estimate of PM2.5 on survival and several additive effect modifications, including social inequality.
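
    A bare-bones sketch of one ingredient described above: a Lin-Ying-type additive hazards fit for a continuous exposure with stabilized inverse-probability weights (weights from normal density models of the exposure). This is an illustration under simulated data, not the authors' doubly robust estimator; the data-generating values are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    n = 4000
    L = rng.standard_normal(n)                    # confounder
    A = 0.8 * L + rng.standard_normal(n)          # continuous exposure
    haz = np.clip(0.05 + 0.02 * A + 0.03 * L, 1e-4, None)   # true additive hazard
    T = rng.exponential(1 / haz)
    C = rng.exponential(20, n)                    # censoring
    time, event = np.minimum(T, C), (T <= C).astype(float)

    # stabilized weights: f(A) / f(A | L), both modeled as normal densities
    resid = A - np.polyval(np.polyfit(L, A, 1), L)
    w = norm.pdf(A, A.mean(), A.std()) / norm.pdf(resid, 0, resid.std())
    w = np.clip(w, None, np.quantile(w, 0.99))    # trim extreme weights

    # weighted Lin-Ying estimator for constant beta in lambda(t) = b0(t) + beta*A
    order = np.argsort(time)
    time, event, A, w = time[order], event[order], A[order], w[order]
    num = den = 0.0
    prev_t = 0.0
    for j in range(n):                            # risk set = indices j..n-1
        rs = slice(j, n)
        Abar = np.average(A[rs], weights=w[rs])   # weighted risk-set mean
        dt = time[j] - prev_t
        den += np.sum(w[rs] * (A[rs] - Abar) * A[rs]) * dt
        if event[j]:
            num += w[j] * (A[j] - Abar)
        prev_t = time[j]
    print("estimated hazard difference per unit exposure:", num / den)  # ~0.02
    ```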

  19. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. In addition

  20. Comparison of GWAS models to identify non-additive genetic control of flowering time in sunflower hybrids.

    PubMed

    Bonnafous, Fanny; Fievet, Ghislain; Blanchet, Nicolas; Boniface, Marie-Claude; Carrère, Sébastien; Gouzy, Jérôme; Legrand, Ludovic; Marage, Gwenola; Bret-Mestries, Emmanuelle; Munos, Stéphane; Pouilly, Nicolas; Vincourt, Patrick; Langlade, Nicolas; Mangin, Brigitte

    2018-02-01

    This study compares five models of GWAS to show the added value of non-additive modeling of allelic effects to identify genomic regions controlling flowering time of sunflower hybrids. Genome-wide association studies are a powerful and widely used tool to decipher the genetic control of complex traits. One of the main challenges for hybrid crops, such as maize or sunflower, is to model the hybrid vigor in the linear mixed models, considering the relatedness between individuals. Here, we compared two additive and three non-additive association models for their ability to identify genomic regions associated with flowering time in sunflower hybrids. A panel of 452 sunflower hybrids, corresponding to incomplete crossing between 36 male lines and 36 female lines, was phenotyped in five environments and genotyped for 2,204,423 SNPs. Intra-locus effects were estimated in multi-locus models to detect genomic regions associated with flowering time using the different models. Thirteen quantitative trait loci were identified in total, two with both model categories and one with only non-additive models. A quantitative trait locus on LG09, detected by both the additive and non-additive models, is located near a GAI homolog and is presented in detail. Overall, this study shows the added value of non-additive modeling of allelic effects for identifying genomic regions that control traits of interest and that could participate in the heterosis observed in hybrids.
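
    A hedged sketch of why non-additive modeling can detect loci an additive scan misses: simulate a locus acting through heterozygote advantage, then test it with (a) the usual additive 0/1/2 coding (1 df) and (b) a genotypic coding (2 df). The kinship and hybrid structure of the real study are omitted here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n = 800
    g = rng.integers(0, 3, n)                      # genotype 0/1/2
    y = 0.5 * (g == 1) + rng.standard_normal(n)    # heterozygote advantage only

    def ols_ssr(X, y):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ b) ** 2)

    ss0 = ols_ssr(np.ones((n, 1)), y)                                   # null
    ss_add = ols_ssr(np.column_stack([np.ones(n), g]), y)               # 1 df
    ss_gen = ols_ssr(np.column_stack([np.ones(n), g == 1, g == 2]), y)  # 2 df

    for name, ss, df in [("additive", ss_add, 1), ("genotypic", ss_gen, 2)]:
        F = ((ss0 - ss) / df) / (ss / (n - df - 1))
        pval = stats.f.sf(F, df, n - df - 1)
        print(f"{name:9s} model: F = {F:6.2f}, p = {pval:.2e}")
    ```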

  1. A standard protocol for describing individual-based and agent-based models

    USGS Publications Warehouse

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual-based models, IBMs) or agents (agent-based models, ABMs) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.

  2. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
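
    A toy illustration of the cached versus model-based distinction the review draws, invented for illustration: after the reward in state B is devalued, a model-based planner switches immediately, while a model-free (cached-value) learner keeps choosing the stale action until it re-experiences the outcome.

    ```python
    import numpy as np

    actions = ["go-to-A", "go-to-B"]          # action i leads to state "A" or "B"
    reward = {"A": 1.0, "B": 2.0}

    # model-free learner: caches action values from experienced rewards
    Q = np.zeros(2)
    rng = np.random.default_rng(6)
    for _ in range(200):
        a = rng.integers(2)
        r = reward["A"] if a == 0 else reward["B"]
        Q[a] += 0.1 * (r - Q[a])              # delta-rule update of cached value

    # model-based learner: keeps an internal model of outcomes and re-plans
    world_model = dict(reward)

    reward["B"] = 0.0                         # outcome at B is devalued
    world_model["B"] = 0.0                    # the model-based agent registers this

    mb_choice = int(np.argmax([world_model["A"], world_model["B"]]))
    print("model-free choice after devaluation: ", actions[int(np.argmax(Q))])
    print("model-based choice after devaluation:", actions[mb_choice])
    ```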

  3. An Additive to Improve the Wear Characteristics of Perfluoropolyether Based Greases

    NASA Technical Reports Server (NTRS)

    Jones, David G. V.; Fowzy, Mahmoud A.; Landry, James F.; Jones, William R., Jr.; Shogrin, Bradley A.; Nguyen, QuynhGiao

    1999-01-01

    The friction and wear characteristics of two formulated perfluoropolyether based greases were compared to their non-additive base greases. One grease was developed for the electronics industry (designated as GXL-296A) while the other is for space applications (designated as GXL-320A). The formulated greases (GXL-296B and GXL-320B) contained a proprietary antiwear additive at an optimized concentration. Tests were conducted using a vacuum four-ball tribometer. AISI 52100 steel specimens were used for all GXL-296 tests. Both AISI 52100 steel and 440C stainless steel were tested with the GXL-320 greases. Test conditions included: a pressure less than 6.7 × 10⁻⁴ Pa, a 200 N load, a sliding velocity of 28.8 mm/sec (100 rpm) and room temperature (approximately 23 °C). Wear rates for each grease were determined from the slope of the wear volume as a function of sliding distance. Both non-additive base greases yielded relatively high wear rates on the order of 10⁻⁸ mm³ using AISI 52100 steel specimens. Formulated grease GXL-296B yielded a reduction in wear rate by a factor of approximately 21, while grease GXL-320B showed a reduction by a factor of approximately 12. Lower wear rates (about 50% lower) were observed with both GXL-320 greases using 440C stainless steel. Mean friction coefficients were slightly higher for both formulated greases compared to their base greases. The GXL-296 series (higher base oil viscosity) yielded much higher friction coefficients compared to their GXL-320 series (lower base oil viscosity) counterparts.

  4. Identifying Multiple Levels of Discussion-Based Teaching Strategies for Constructing Scientific Models

    ERIC Educational Resources Information Center

    Williams, Grant; Clement, John

    2015-01-01

    This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…

  5. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    NASA Technical Reports Server (NTRS)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. With the developed models, numerical simulations yielded predictions of diameter and height of
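
    A hedged sketch of the kind of model the abstract describes: an autocatalytic (Kamal-type) cure-kinetics ODE combined with a Castro-Macosko chemorheology law. All constants below are invented placeholders; in the project they would be fit to the DSC and rheology measurements.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314
    def k_arr(A, E, T):                       # Arrhenius rate constant
        return A * np.exp(-E / (R * T))

    def kamal(t, y, T):
        # d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n
        alpha = min(max(y[0], 0.0), 0.999)
        k1, k2 = k_arr(1e5, 6.0e4, T), k_arr(5e6, 5.5e4, T)
        m, n = 0.5, 1.5
        return [(k1 + k2 * alpha**m) * (1 - alpha)**n]

    T = 353.0                                 # isothermal cure at 80 C
    sol = solve_ivp(kamal, (0, 600), [1e-6], args=(T,), max_step=1.0)
    alpha = sol.y[0]

    # Castro-Macosko viscosity rise toward gelation
    alpha_gel, C1, C2, eta0 = 0.7, 1.5, 1.0, 10.0
    ok = alpha < alpha_gel
    eta = eta0 * (alpha_gel / (alpha_gel - alpha[ok]))**(C1 + C2 * alpha[ok])

    t_gel = sol.t[np.argmax(alpha >= alpha_gel)] if alpha.max() >= alpha_gel else None
    print("degree of cure at 10 min:", round(float(alpha[-1]), 3))
    print("gel time (s):", t_gel)
    print("viscosity rise factor just before gel:", f"{eta[-1] / eta0:.0f}x")
    ```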

  6. High-Efficiency Small Molecule-Based Bulk-Heterojunction Solar Cells Enhanced by Additive Annealing.

    PubMed

    Li, Lisheng; Xiao, Liangang; Qin, Hongmei; Gao, Ke; Peng, Junbiao; Cao, Yong; Liu, Feng; Russell, Thomas P; Peng, Xiaobin

    2015-09-30

    Solvent additive processing is important in optimizing an active layer's morphology and thus improving the performance of organic solar cells (OSCs). In this study, we find that how the 1,8-diiodooctane (DIO) additive is removed plays a critical role in determining the film morphology of the bulk heterojunction OSCs in inverted structure based on a porphyrin small molecule. Different from the cases reported for polymer-based OSCs in conventional structures, the inverted OSCs upon the quick removal of the additive either by quick vacuuming or methanol washing exhibit poorer performance. In contrast, the devices after keeping the active layers in ambient pressure with additive dwelling for about 1 h (namely, additive annealing) show an enhanced power conversion efficiency up to 7.78% with a large short circuit current of 19.25 mA/cm², which are among the best in small molecule-based solar cells. The detailed morphology analyses using UV-vis absorption spectroscopy, grazing incidence X-ray diffraction, resonant soft X-ray scattering, and atomic force microscopy demonstrate that the active layer shows smaller-sized phase separation but improved structure order upon additive annealing. On the contrary, the quick removal of the additive either by quick vacuuming or methanol washing keeps the active layers in an earlier stage of large scaled phase separation.

  7. Thermal Stability of Nanocrystalline Alloys by Solute Additions and A Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Saber, Mostafa

    and alpha → gamma phase transformation in Fe-Ni-Zr alloys. In addition to the experimental study of thermal stabilization of nanocrystalline Fe-Cr-Zr or Fe-Ni-Zr alloys, the thesis presented here developed a new predictive model, applicable to strongly segregating solutes, for thermodynamic stabilization of binary alloys. This model can serve as a benchmark for selecting solutes and evaluating the possible contribution of stabilization. Following a regular solution model, both the chemical and elastic strain energy contributions are combined to obtain the mixing enthalpy. The total Gibbs free energy of mixing is then minimized with respect to simultaneous variations in the grain boundary volume fraction and the solute concentration in the grain boundary and the grain interior. The Lagrange multiplier method was used to obtain numerical solutions. Applications are given for the temperature dependence of the grain size and the grain boundary solute excess for selected binary systems where experimental results imply that thermodynamic stabilization could be operative. This thesis also extends the binary model to a new model for thermodynamic stabilization of ternary nanocrystalline alloys. It is applicable to strongly segregating size-misfit solutes and uses input data available in the literature. In the same manner as the binary model, this model is based on a regular solution approach such that the chemical and elastic strain energy contributions are incorporated into the mixing enthalpy ΔHmix, and the mixing entropy ΔSmix is obtained using the ideal solution approximation. The Gibbs mixing free energy ΔGmix is then minimized with respect to simultaneous variations in grain growth and solute segregation parameters. The Lagrange multiplier method is similarly used to obtain numerical solutions for the minimum ΔGmix. The temperature dependence of the nanocrystalline grain size and interfacial solute excess can be obtained for selected ternary systems. As
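
    A toy version of the minimization described: a regular-solution mixing free energy with a grain-boundary (GB) term, minimized over grain size and GB solute fraction. All energies and parameters are invented for illustration, and the solute mass balance is substituted directly rather than enforced via a Lagrange multiplier (equivalent at the optimum).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 800.0              # gas constant (J/(mol K)), temperature (K)
    X_tot = 0.04                     # overall solute fraction
    t_gb = 1.0e-9                    # grain-boundary thickness (m)
    Om_in, Om_gb = 15e3, 5e3         # regular-solution interaction params (J/mol)
    E_gb = 1.0e3                     # molar free-energy penalty of GB material
    dG_seg = -30e3                   # solute segregation energy to the GB (J/mol)

    def s_mix(X):                    # ideal mixing entropy contribution, J/mol
        X = np.clip(X, 1e-12, 1 - 1e-12)
        return R * T * (X * np.log(X) + (1 - X) * np.log(1 - X))

    def gibbs(params):
        d_nm, X_gb = params
        f = min(3 * t_gb / (d_nm * 1e-9), 0.95)   # GB volume fraction ~ 3t/d
        X_in = (X_tot - f * X_gb) / (1 - f)       # solute mass balance
        if not 0.0 < X_in < 1.0:
            return 1e9                            # infeasible solute split
        g_in = Om_in * X_in * (1 - X_in) + s_mix(X_in)
        g_gb = Om_gb * X_gb * (1 - X_gb) + s_mix(X_gb) + dG_seg * X_gb + E_gb
        return (1 - f) * g_in + f * g_gb

    res = minimize(gibbs, x0=[20.0, 0.15],
                   bounds=[(3.0, 1000.0), (1e-3, 0.999)])
    d_opt, Xgb_opt = res.x
    print(f"grain size at the free-energy minimum: {d_opt:.1f} nm")
    print(f"grain-boundary solute fraction:        {Xgb_opt:.3f}")
    ```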

  8. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

    A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well known energy loss processes to develop a stochastic Monte-Carlo based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times including the ATM, TGF -Smad, and WNT signaling pathways. We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how

  9. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
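    The variability estimate reported here is a standard deviation taken in natural-log units across the ensemble of synthetics. A minimal sketch of that computation, using random stand-in amplitudes rather than the paper's synthetics:

```python
# Hedged sketch: natural-log-unit variability across an ensemble of synthetic
# response-spectral amplitudes (4 codes x 3 earthquakes = 12 sets). Values
# are random stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(1)
sa = rng.lognormal(mean=0.0, sigma=0.6, size=(12, 50))   # 12 sets x 50 sites

ln_sa = np.log(sa)
sigma_total = ln_sa.std(axis=0, ddof=1)   # combined model + random variability
print(round(sigma_total.mean(), 3))       # compare with the 0.5-0.7 range above
```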

  10. A Single-Boundary Accumulator Model of Response Times in an Addition Verification Task

    PubMed Central

    Faulkenberry, Thomas J.

    2017-01-01

    Current theories of mathematical cognition offer competing accounts of the interplay between encoding and calculation in mental arithmetic. Additive models propose that manipulations of problem format do not interact with the cognitive processes used in calculation. Alternatively, interactive models suppose that format manipulations have a direct effect on calculation processes. In the present study, we tested these competing models by fitting participants' RT distributions in an arithmetic verification task with a single-boundary accumulator model (the shifted Wald distribution). We found that in addition to providing a more complete description of RT distributions, the accumulator model afforded a potentially more sensitive test of format effects. Specifically, we found that format affected drift rate, which implies that problem format has a direct impact on calculation processes. These data give further support for an interactive model of mental arithmetic. PMID:28769853
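    The shifted Wald is a three-parameter inverse Gaussian, so a fit of the kind used in the paper can be sketched with scipy's invgauss distribution, whose loc parameter plays the role of the shift (non-decision time). The simulated RTs and parameter values below are illustrative, not the study's data.

```python
# Hedged sketch: maximum-likelihood fit of a shifted Wald (three-parameter
# inverse Gaussian) to simulated response times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shift = 0.30   # non-decision time in seconds (assumed)
rts = shift + stats.invgauss.rvs(mu=0.8, scale=0.5, size=500, random_state=rng)

mu_hat, loc_hat, scale_hat = stats.invgauss.fit(rts)
print(f"shift ~ {loc_hat:.3f} s, mu ~ {mu_hat:.3f}, scale ~ {scale_hat:.3f}")
```

    Comparing fitted parameters across problem-format conditions is then what licenses conclusions such as "format affected drift rate."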

  11. Spontaneous collective synchronization in the Kuramoto model with additional non-local interactions

    NASA Astrophysics Data System (ADS)

    Gupta, Shamik

    2017-10-01

    In the context of the celebrated Kuramoto model of globally-coupled phase oscillators of distributed natural frequencies, which serves as a paradigm to investigate spontaneous collective synchronization in many-body interacting systems, we report on a very rich phase diagram in the presence of thermal noise and an additional non-local interaction on a one-dimensional periodic lattice. Remarkably, the phase diagram involves both equilibrium and non-equilibrium phase transitions. In two contrasting limits of the dynamics, we obtain exact analytical results for the phase transitions. These two limits correspond to (i) the absence of thermal noise, when the dynamics reduces to that of a non-linear dynamical system, and (ii) the oscillators having the same natural frequency, when the dynamics becomes that of a statistical system in contact with a heat bath and relaxing to a statistical equilibrium state. In the former case, our exact analysis is based on the use of the so-called Ott-Antonsen ansatz to derive a reduced set of nonlinear partial differential equations for the macroscopic evolution of the system. Our results for the case of statistical equilibrium are, on the other hand, obtained by extending the well-known transfer matrix approach for the nearest-neighbor Ising model to non-local interactions. The work offers a case study of exact analysis in many-body interacting systems. The results obtained underline the crucial role of additional non-local interactions in either destroying or enhancing the possibility of observing synchrony in mean-field systems exhibiting spontaneous synchronization.
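    A hedged numerical sketch of the model studied here: Euler-Maruyama integration of noisy Kuramoto oscillators on a ring with an additional non-local coupling that decays with lattice distance. The coupling strengths, decay exponent alpha, and noise strength D are illustrative choices, not the paper's parameter values.

```python
# Hedged sketch: noisy Kuramoto dynamics with mean-field plus non-local
# coupling on a 1D periodic lattice, integrated by Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(3)
N, K_mf, K_nl, alpha, D, dt, steps = 200, 2.0, 1.0, 1.5, 0.1, 0.01, 5000
omega = rng.standard_cauchy(N) * 0.5          # distributed natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                      # periodic lattice distance
J = np.where(d > 0, 1.0 / np.maximum(d, 1) ** alpha, 0.0)  # power-law kernel
J /= J.sum(axis=1, keepdims=True)

for _ in range(steps):
    z = np.exp(1j * theta)
    mean_field = K_mf * np.imag(np.conj(z) * z.mean())     # global coupling
    nonlocal_term = K_nl * np.imag(np.conj(z) * (J @ z))   # non-local coupling
    theta += dt * (omega + mean_field + nonlocal_term)
    theta += np.sqrt(2 * D * dt) * rng.standard_normal(N)  # thermal noise

r = np.abs(np.exp(1j * theta).mean())         # synchronization order parameter
print(f"order parameter r = {r:.3f}")
```

    Sweeping K_mf, K_nl, and D in such a simulation is the numerical counterpart of mapping out the phase diagram discussed above.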

  12. Characterization of Ti and Co based biomaterials processed via laser based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Sahasrabudhe, Himanshu

    Titanium- and cobalt-based metallic materials are currently the most suitable materials for load-bearing metallic biomedical applications. However, the long-term tribological degradation of these materials remains an unsolved problem. To improve the tribological performance of these two metallic systems, three different research approaches were adopted, giving rise to four research projects. First, the simplicity of laser gas nitriding was combined with modern LENS(TM) technology to form an in situ nitride-rich layer on a titanium substrate. This nitride-rich composite coating improved the hardness by as much as fifteen times and reduced the wear rate by more than an order of magnitude. The leaching of metallic ions during wear was also reduced fourfold. In the second research project, a mixture of titanium and silicon was processed on a titanium substrate in a nitrogen-rich environment. The result of this reactive, in situ additive manufacturing process was Ti-Si-Nitride coatings that were harder than the titanium substrate by more than twenty times. These coatings also reduced the wear rate by more than two orders of magnitude. In the third research approach, composites of CoCrMo alloy and calcium phosphate (CaP) bioceramic were processed using LENS(TM)-based additive manufacturing. These composites were effective in reducing the wear of the CoCrMo alloy by more than three times as well as reducing the leaching of cobalt and chromium ions during wear. The novel composite materials were found to develop a tribofilm during wear. In the final project, a combination of a hard nitride coating and the addition of CaP bioceramic was investigated by processing a mixture of Ti6Al4V alloy and CaP in a nitrogen-rich environment using the LENS(TM) technology. The resultant Ti64-CaP-Nitride coatings significantly reduced the wear damage on the substrate. There was also a drastic reduction in the metal ions leached during wear. The results indicate that the three

  13. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in the garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. It therefore requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with fluctuating demand, additional capacity can be provided by investing in spare sewing machines and paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand. Capacity is redesigned when fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of future idle capacity. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity-addition costs, losses due to idle capacity, and outsourcing costs. The model developed is an integer programming model. It is tested on one year of demand data with an existing line of 41 sewing machines. The results show that a maximum additional capacity of up to 76 machines is required when demand increases by 60% over the average, given equal cost parameters.
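    A toy version of such an integer program can be written down directly. The sketch below uses made-up demands, capacities, and costs, and a much simpler cost structure than the paper's model: it chooses an integer number of added machines and per-period outsourced units to cover demand at minimum cost.

```python
# Hedged sketch: a tiny capacity/outsourcing integer program in the spirit of
# the re-balancing model. All numbers are illustrative.
import numpy as np
from scipy.optimize import milp, LinearConstraint

demand = np.array([900, 1100, 1400, 1000, 1600, 1200,
                   950, 1300, 1700, 1050, 1250, 1500])   # units per month
existing, cap = 41, 25          # machines on the line, units per machine-month
c_machine, c_out = 400.0, 3.0   # amortized machine cost, outsourcing cost/unit

T = len(demand)
# decision variables: x = [m_add, o_1, ..., o_T]
c = np.concatenate(([c_machine], np.full(T, c_out)))
# coverage each month: cap*m_add + o_t >= demand_t - existing*cap
A = np.hstack((np.full((T, 1), cap), np.eye(T)))
cons = LinearConstraint(A, lb=demand - existing * cap, ub=np.inf)
integrality = np.concatenate(([1], np.zeros(T)))         # only m_add is integer

res = milp(c=c, constraints=cons, integrality=integrality)
print(f"add {res.x[0]:.0f} machines; outsourced units per month:",
      res.x[1:].round())
```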

  14. Individualized Additional Instruction for Calculus

    ERIC Educational Resources Information Center

    Takata, Ken

    2010-01-01

    College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the…

  15. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.

    PubMed

    McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey

    2012-04-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  16. A MIXTURE OF SEVEN ANTIANDROGENIC COMPOUNDS ELICITS ADDITIVE EFFECTS ON THE MALE RAT REPRODUCTIVE TRACT THAT CORRESPOND TO MODELED PREDICTIONS

    EPA Science Inventory

    The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...

  17. Evolution of solidification texture during additive manufacturing

    PubMed Central

    Wei, H. L.; Mazumder, J.; DebRoy, T.

    2015-01-01

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Therefore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during the additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components. PMID:26553246

  18. Evolution of solidification texture during additive manufacturing.

    PubMed

    Wei, H L; Mazumder, J; DebRoy, T

    2015-11-10

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Therefore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during the additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components.

  19. Evolution of solidification texture during additive manufacturing

    DOE PAGES

    Wei, H. L.; Mazumder, J.; DebRoy, T.

    2015-11-10

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six <100> preferred growth directions in face centered cubic alloys. Furthermore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during the additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components.

  20. Mechanistic Kinetic Modeling of Thiol-Michael Addition Photopolymerizations via Photocaged "Superbase" Generators: An Analytical Approach.

    PubMed

    Claudino, Mauro; Zhang, Xinpeng; Alim, Marvin D; Podgórski, Maciej; Bowman, Christopher N

    2016-11-08

    A kinetic mechanism and the accompanying mathematical framework are presented for base-mediated thiol-Michael photopolymerization kinetics involving a photobase generator. Here, model kinetic predictions demonstrate excellent agreement with a representative experimental system composed of 2-(2-nitrophenyl)propyloxycarbonyl-1,1,3,3-tetramethylguanidine (NPPOC-TMG) as a photobase generator that is used to initiate thiol-vinyl sulfone Michael addition reactions and polymerizations. Modeling equations derived from a basic mechanistic scheme indicate overall polymerization rates that follow a pseudo-first-order kinetic process in the base and coreactant concentrations, controlled by the ratio of the propagation to chain-transfer kinetic parameters (k_p/k_CT), which is dictated by the rate-limiting step and controls the time necessary to reach gelation. Gelation occurs earlier as the k_p/k_CT ratio approaches a critical value, beyond which gel times become nearly independent of k_p/k_CT. The theoretical approach allowed us to determine the effect of induction time on the reaction kinetics due to initial acid-base neutralization of the photogenerated base caused by the presence of protic contaminants. Such inhibition kinetics may be challenging for reaction systems that require high curing rates but are relevant for chemical systems that need to remain kinetically dormant until activated, although at the ultimate cost of lower polymerization rates. The pure step-growth character of this living polymerization and the exhibited kinetics provide unique potential for extended dark-cure reactions and uniform material properties. The general kinetic model is applicable to photobase initiators where photolysis follows a unimolecular cleavage process releasing a strong base catalyst without cogeneration of intermediate radical species.
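    A hedged sketch of the qualitative behavior described here: a minimal ODE system in which a photobase generator releases base, a protic contaminant neutralizes it (producing the induction period), and the vinyl coreactant is consumed at a rate pseudo-first-order in base and coreactant concentrations. The rate constants and concentrations are illustrative, not the NPPOC-TMG values.

```python
# Hedged sketch: induction followed by pseudo-first-order monomer consumption
# in a base-mediated photopolymerization.
import numpy as np
from scipy.integrate import solve_ivp

k_release = 0.05   # 1/s, photolytic base release (assumed constant illumination)
k_neut = 1e3       # L/(mol s), neutralization by protic contaminant (assumed)
k_app = 5.0        # L/(mol s), apparent rate set by the k_p/k_CT regime (assumed)

def rhs(t, y):
    pbg, base, acid, vinyl = y
    gen = k_release * pbg             # photobase generation
    neut = k_neut * base * acid       # induction: base consumed by contaminant
    poly = k_app * base * vinyl       # base is catalytic, vinyl is consumed
    return [-gen, gen - neut, -neut, -poly]

y0 = [0.02, 0.0, 0.005, 1.0]          # mol/L: PBG, base, contaminant, vinyl
sol = solve_ivp(rhs, (0, 600), y0, dense_output=True, max_step=1.0)
t = np.linspace(0, 600, 7)
print(np.round(1 - sol.sol(t)[3], 3))  # vinyl conversion vs time
```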

  1. The Effect of Copper Addition on the Activity and Stability of Iron-Based CO₂ Hydrogenation Catalysts.

    PubMed

    Bradley, Matthew J; Ananth, Ramagopal; Willauer, Heather D; Baldwin, Jeffrey W; Hardy, Dennis R; Williams, Frederick W

    2017-09-20

    Iron-based CO₂ catalysts have shown promise as a viable route to the production of olefins from CO₂ and H₂ gas. However, these catalysts can suffer from low conversion and high methane selectivity, as well as being particularly vulnerable to water produced during the reaction. In an effort to improve both the activity and durability of iron-based catalysts on an alumina support, copper (10-30%) has been added to the catalyst matrix. In this paper, the effects of copper addition on the catalyst activity and morphology are examined. The addition of 10% copper significantly increases the CO₂ conversion and decreases methane and carbon monoxide selectivity, without significantly altering the crystallinity and structure of the catalyst itself. The FeCu/K catalysts form an inverse spinel crystal phase that is independent of copper content and a metallic phase that increases in abundance with copper loading (>10% Cu). At higher loadings, copper separates from the iron oxide phase and produces metallic copper, as shown by SEM-EDS. The addition of copper appears to increase the rate of the Fischer-Tropsch reaction step, as shown by modeling of the chemical kinetics and the inter- and intra-particle transport of mass and energy.

  2. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences, namely the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
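    The eligibility adjustment idea can be sketched in a few lines for a Daw-style two-stage task: a temporal-difference learner in which an assumed weight w scales how strongly the second-stage prediction error is credited back to the first-stage action. This is a simplified illustration, not the authors' fitted model.

```python
# Hedged sketch: TD learning on a two-stage task with an eligibility weight w
# scaling the backup of the second-stage prediction error to stage one.
import numpy as np

rng = np.random.default_rng(4)
alpha, w = 0.3, 0.7          # learning rate; eligibility weight (assumed)
Q1 = np.zeros(2)             # first-stage action values
Q2 = np.zeros(2)             # second-stage state values
common = np.array([0, 1])    # action a commonly leads to state common[a]

for trial in range(1000):
    # epsilon-greedy first-stage choice
    a = int(rng.random() < 0.5) if rng.random() < 0.1 else int(np.argmax(Q1))
    s2 = common[a] if rng.random() < 0.7 else 1 - common[a]   # 70/30 transition
    r = float(rng.random() < (0.8 if s2 == 0 else 0.3))       # reward probabilities

    delta1 = Q2[s2] - Q1[a]           # first-stage TD error at the transition
    Q1[a] += alpha * delta1
    delta2 = r - Q2[s2]               # second-stage TD error at the outcome
    Q2[s2] += alpha * delta2
    Q1[a] += alpha * w * delta2       # eligibility-weighted backup to stage one

print(np.round(Q1, 3), np.round(Q2, 3))
```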

  3. Enlarged leukocyte referent libraries can explain additional variance in blood-based epigenome-wide association studies.

    PubMed

    Kim, Stephanie; Eliot, Melissa; Koestler, Devin C; Houseman, Eugene A; Wetmur, James G; Wiencke, John K; Kelsey, Karl T

    2016-09-01

    We examined whether variation in blood-based epigenome-wide association studies could be more completely explained by augmenting existing reference DNA methylation libraries. We compared existing and enhanced libraries in predicting variability in three publicly available 450K methylation datasets that collected whole-blood samples. Models were fit separately to each CpG site and used to estimate the additional variability explained when adjustments for cell composition were made with each library. Calculation of the mean difference in the CpG-specific residual sums of squares error between models for an arthritis, aging and metabolic syndrome dataset indicated that an enhanced library explained significantly more variation across all three datasets (p < 10^-3). Pathologically important immune cell subtypes can explain important variability in epigenome-wide association studies done in blood.

  4. Dynamic modeling for rigid rotor bearing systems with a localized defect considering additional deformations at the sharp edges

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Shao, Yimin

    2017-06-01

    Rotor bearing systems (RBSs) play a vital role in wind turbine gearboxes, aero-engines, high-speed spindles, and other rotating machinery. An in-depth understanding of the vibrations of RBSs is very useful for condition monitoring and diagnosis applications of these machines. A new twelve-degree-of-freedom dynamic model for rigid RBSs with a localized defect (LOD) is proposed. This model can formulate the housing support stiffness, interfacial frictional moments including load-dependent and load-independent components, time-varying displacement excitation caused by a LOD, additional deformations at the sharp edges of the LOD, and the lubricating oil film. The time-varying displacement is modeled by a half-sine function. A new method for calculating the additional deformations at the sharp edges of the LOD is analytically derived based on an elastic quarter-space method presented in the literature. The proposed dynamic model is utilized to analyze the influences of the housing support stiffness and LOD sizes on the vibration characteristics of the rigid RBS, which cannot be predicted by the previous dynamic models in the literature. The results show that the presented method provides a new dynamic modeling approach for formulating the vibrations of a rigid RBS with and without a LOD on the races.
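    The half-sine displacement excitation is simple to write down. A minimal sketch, with illustrative defect depth H_d, angular extent phi_d, and position phi_0 (not the paper's values):

```python
# Hedged sketch: half-sine time-varying displacement excitation produced when
# a rolling element passes over a localized defect on a race.
import numpy as np

H_d = 2e-6    # maximum additional displacement over the defect, m (assumed)
phi_d = 0.05  # angular extent of the defect, rad (assumed)
phi_0 = 1.2   # angular position of the defect on the race, rad (assumed)

def defect_displacement(phi_ball):
    """Extra deflection of a rolling element at angular position phi_ball."""
    rel = (phi_ball - phi_0) % (2 * np.pi)
    if rel < phi_d:
        return H_d * np.sin(np.pi * rel / phi_d)   # half-sine pulse over defect
    return 0.0

print([f"{defect_displacement(phi_0 + f * phi_d):.2e}"
       for f in (0.0, 0.25, 0.5, 0.75, 1.0)])
```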

  5. [Analysis of constituents of ester-type gum bases used as natural food additives].

    PubMed

    Tada, Atsuko; Masuda, Aino; Sugimoto, Naoki; Yamagata, Kazuo; Yamazaki, Takeshi; Tanamoto, Kenichi

    2007-12-01

    The differences in the constituents of ten ester-type gum bases used as natural food additives in Japan (urushi wax, carnauba wax, candelilla wax, rice bran wax, shellac wax, jojoba wax, beeswax, Japan wax, montan wax, and lanolin) were investigated. Several kinds of gum bases showed characteristic TLC patterns of lipids. In addition, compositions of fatty acid and alcohol moieties of esters in the gum bases were analyzed by GC/MS after methanolysis and hydrolysis, respectively. The results indicated that the varieties of fatty acids and alcohols and their compositions were characteristic for each gum base. These results will be useful for identification and discrimination of the ester-type gum bases.

  6. Model-based clustering for RNA-seq data.

    PubMed

    Si, Yaqing; Liu, Peng; Li, Pinghua; Brutnell, Thomas P

    2014-01-15

    RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org
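    As a hedged illustration of the kind of probability-model-based clustering described here (not the MBCluster.Seq implementation), the following sketch runs EM for a Poisson mixture over simulated gene-by-treatment counts, with the number of clusters fixed at K = 2.

```python
# Hedged sketch: EM for a K-component Poisson mixture, clustering genes by
# their expression profiles across treatments. Counts are simulated.
import numpy as np

rng = np.random.default_rng(5)
true_means = np.array([[5, 50, 20], [40, 5, 10]])     # 2 clusters x 3 treatments
genes = np.vstack([rng.poisson(true_means[0], (100, 3)),
                   rng.poisson(true_means[1], (100, 3))])

K = 2
pi = np.full(K, 1 / K)                                # mixing proportions
lam = rng.uniform(1, 60, (K, genes.shape[1]))         # cluster mean profiles

for _ in range(100):
    # E-step: responsibilities under the Poisson log-likelihood
    # (the log-factorial term is constant across clusters, so it is dropped)
    logp = (genes[:, None, :] * np.log(lam)[None] - lam[None]).sum(axis=2)
    logp += np.log(pi)[None]
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and cluster mean profiles
    pi = resp.mean(axis=0)
    lam = (resp.T @ genes) / resp.sum(axis=0)[:, None]

print(np.round(lam))                                  # recovered mean profiles
```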

  7. 7 CFR 985.153 - Issuance of additional allotment base to new and existing producers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Administrative Rules and Regulations, § 985.153: Issuance of additional allotment base to new and existing producers. Title 7, Agriculture, Vol. 8, 2010-01-01. AGRICULTURAL MARKETING SERVICE (Continued) (Marketing Agreements and Orders; Fruits, Vegetables, Nuts...

  8. Antimicrobial combinations: Bliss independence and Loewe additivity derived from mechanistic multi-hit models

    PubMed Central

    Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens

    2016-01-01

    Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of bacterial populations they target. It is relevant to understand if effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue ‘Evolutionary ecology of arthropod antimicrobial peptides’. PMID:27160596
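    The two reference models are easy to state concretely. The sketch below computes a Bliss-independence prediction by combining the two single-drug effects and a Loewe-additivity prediction by solving the dose-fraction equation for Hill-type dose-response curves; the Hill parameters and doses are illustrative.

```python
# Hedged sketch: Bliss independence vs Loewe additivity for two drugs with
# Hill-type dose-response curves. Parameters are illustrative.
import numpy as np
from scipy.optimize import brentq

def hill_effect(d, ec50, h):
    return d**h / (ec50**h + d**h)           # fraction affected at dose d

def inverse_hill(e, ec50, h):
    return ec50 * (e / (1 - e)) ** (1 / h)   # dose producing effect e

def bliss(e_a, e_b):
    return e_a + e_b - e_a * e_b             # independent-action prediction

def loewe(d_a, d_b, pa, pb):
    # solve d_a/D_A(E) + d_b/D_B(E) = 1 for the combined effect E
    f = lambda e: d_a / inverse_hill(e, *pa) + d_b / inverse_hill(e, *pb) - 1
    return brentq(f, 1e-9, 1 - 1e-9)

pa, pb = (1.0, 2.0), (2.0, 1.5)              # (EC50, Hill slope) for drugs A, B
d_a, d_b = 0.6, 0.8
print("Bliss:", round(bliss(hill_effect(d_a, *pa), hill_effect(d_b, *pb)), 3))
print("Loewe:", round(loewe(d_a, d_b, pa, pb), 3))
```

    Observed combination effects above these references indicate synergy, below them antagonism; which reference is appropriate is exactly what the multi-hit argument above addresses.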

  9. Group-Based Active Learning of Classification Models.

    PubMed

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.

  10. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.

  11. First- and Second-Line Bevacizumab in Addition to Chemotherapy for Metastatic Colorectal Cancer: A United States–Based Cost-Effectiveness Analysis

    PubMed Central

    Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.

    2015-01-01

    Purpose The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
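    The ICER arithmetic is simply incremental cost divided by incremental effectiveness. Recomputing it from the rounded values reported above illustrates the formula (the published figures of $571,240 and $364,083 per QALY were presumably computed from unrounded QALY gains, so these recomputed values differ slightly):

```python
# Hedged sketch: incremental cost-effectiveness ratio from the reported
# incremental costs and (rounded) QALY gains.
def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

print(f"first line:         ${icer(59_361, 0.10):,.0f} per QALY")
print(f"beyond progression: ${icer(39_209, 0.11):,.0f} per QALY")
```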

  12. Modeling of driver's collision avoidance maneuver based on controller switching model.

    PubMed

    Kim, Jong-Hae; Hayakawa, Soichiro; Suzuki, Tatsuya; Hayashi, Koji; Okuma, Shigeru; Tsuchida, Nuio; Shimizu, Masayuki; Kido, Shigeyuki

    2005-12-01

    This paper presents a modeling strategy for human driving behavior based on a controller switching model, focusing on the driver's collision avoidance maneuver. The driving data are collected using a three-dimensional (3-D) driving simulator based on the CAVE Automatic Virtual Environment (CAVE), which provides a stereoscopic immersive virtual environment. In our modeling, the control scenario of the human driver, that is, the mapping from the driver's sensory information to operations such as acceleration, braking, and steering, is expressed by a Piecewise Polynomial (PWP) model. Since the PWP model includes both continuous behaviors given by polynomials and discrete logical conditions, it can be regarded as a class of Hybrid Dynamical System (HDS). The identification problem for the PWP model is formulated as Mixed Integer Linear Programming (MILP) by transforming the switching conditions into binary variables. From the obtained results, it is found that the driver appropriately switches the "control law" according to the sensory information. In addition, the driving characteristics of the beginner driver and the expert driver are compared and discussed. These results enable us to capture not only the physical meaning of the driving skill but also the decision-making aspect (switching conditions) of the driver's collision avoidance maneuver.

  13. Geometric Modeling of Cellular Materials for Additive Manufacturing in Biomedical Field: A Review.

    PubMed

    Savio, Gianpaolo; Rosso, Stefano; Meneghello, Roberto; Concheri, Gianmaria

    2018-01-01

    Advances in additive manufacturing technologies facilitate the fabrication of cellular materials that have tailored functional characteristics. The application of solid freeform fabrication techniques is especially exploited in designing scaffolds for tissue engineering. In this review, firstly, a classification of cellular materials from a geometric point of view is proposed; then, the main approaches on geometric modeling of cellular materials are discussed. Finally, an investigation on porous scaffolds fabricated by additive manufacturing technologies is pointed out. Perspectives in geometric modeling of scaffolds for tissue engineering are also proposed.

  14. Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.

    2018-01-01

    The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in an ARIMA(p, d, q) model. Detection and correction use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). With this method we obtain an ARIMA model fit to the data containing AO; this model adds to the original ARIMA model the coefficients obtained from the iterative regression procedure. For the simulated data, the initial model for the data containing AO is ARIMA(2,0,0) with MSE = 36.780; after detection and correction of the data, the iteration yields the ARIMA(2,0,0) model with regression coefficients Z_t = 0.106 + 0.204 Z_{t-1} + 0.401 Z_{t-2} - 329 X_1(t) + 115 X_2(t) + 35.9 X_3(t) and MSE = 19.365. This shows an improvement in the forecasting error rate.
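    A hedged sketch of the iterative AO procedure on simulated AR(2) data: fit an ARIMA(2,0,0) model, flag large residuals as outlier candidates, add a pulse dummy regressor for each, and refit with the dummies as exogenous variables. This substitutes a simple residual threshold for the Box-Jenkins-Reinsel test statistic, and all values are illustrative.

```python
# Hedged sketch: detect additive outliers by residual thresholding, then refit
# the ARIMA model with pulse dummies X_j(t) as exogenous regressors.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
n = 200
z = np.zeros(n)
for t in range(2, n):                     # simulate an AR(2) process
    z[t] = 0.2 * z[t-1] + 0.4 * z[t-2] + rng.normal()
z[80] += 8.0                              # inject one additive outlier

fit0 = ARIMA(z, order=(2, 0, 0)).fit()
resid = fit0.resid
tau = 3.5 * resid.std()                   # simple AO detection threshold
ao_idx = np.flatnonzero(np.abs(resid) > tau)
print("AO candidates at:", ao_idx)

X = np.zeros((n, len(ao_idx)))            # pulse dummies: 1 at flagged times
X[ao_idx, np.arange(len(ao_idx))] = 1.0
fit1 = ARIMA(z, exog=X, order=(2, 0, 0)).fit()
print("MSE before:", round(np.mean(fit0.resid**2), 3),
      "after:", round(np.mean(fit1.resid**2), 3))
```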

  15. 78 FR 54758 - Listing of Color Additives Exempt From Certification; Mica-Based Pearlescent Pigments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    DEPARTMENT OF HEALTH AND HUMAN SERVICES, Food and Drug Administration, 21 CFR Part 73 [Docket No. FDA-2012-C-0224]: Listing of Color Additives Exempt From Certification; Mica-Based Pearlescent Pigments... The final rule amended the color additive regulations to provide for the safe use of mica-based...

  16. Modular Architecture for Integrated Model-Based Decision Support.

    PubMed

    Gaebel, Jan; Schreiber, Erik; Oeser, Alexander; Oeltze-Jafra, Steffen

    2018-01-01

    Model-based decision support systems promise to be a valuable addition to oncological treatments and the implementation of personalized therapies. For the integration and sharing of decision models, the involved systems must be able to communicate with each other. In this paper, we propose a modularized architecture of dedicated systems for the integration of probabilistic decision models into existing hospital environments. These systems interconnect via web services and provide model sharing and processing capabilities for clinical information systems. Along the lines of IHE integration profiles from other disciplines and the meaningful reuse of routinely recorded patient data, our approach aims for the seamless integration of decision models into hospital infrastructure and the physicians' daily work.

  17. ANNIE - INTERACTIVE PROCESSING OF DATA BASES FOR HYDROLOGIC MODELS.

    USGS Publications Warehouse

    Lumb, Alan M.; Kittle, John L.

    1985-01-01

    ANNIE is a data storage and retrieval system that was developed to reduce the time and effort required to calibrate, verify, and apply watershed models that continuously simulate water quantity and quality. Watershed models have three categories of input: parameters to describe segments of a drainage area, linkage of the segments, and time-series data. Additional goals for ANNIE include the development of software that is easily implemented on minicomputers and some microcomputers and software that has no special requirements for interactive display terminals. Another goal is for the user interaction to be based on the experience of the user so that ANNIE is helpful to the inexperienced user and yet efficient and brief for the experienced user. Finally, the code should be designed so that additional hydrologic models can easily be added to ANNIE.

  18. High efficiency iron electrode and additives for use in rechargeable iron-based batteries

    DOEpatents

    Narayan, Sri R.; Prakash, G. K. Surya; Aniszfeld, Robert; Manohar, Aswin; Malkhandi, Souradip; Yang, Bo

    2017-02-21

    An iron electrode and a method of manufacturing an iron electrode for use in an iron-based rechargeable battery are disclosed. In one embodiment, the iron electrode includes carbonyl iron powder and one of a metal sulfide additive or metal oxide additive selected from the group of metals consisting of bismuth, lead, mercury, indium, gallium, and tin for suppressing hydrogen evolution at the iron electrode during charging of the iron-based rechargeable battery. An iron-air rechargeable battery including an iron electrode comprising carbonyl iron is also disclosed, as is an iron-air battery wherein at least one of the iron electrode and the electrolyte includes an organosulfur additive.

  19. Geometric Modeling of Cellular Materials for Additive Manufacturing in Biomedical Field: A Review

    PubMed Central

    Rosso, Stefano; Meneghello, Roberto; Concheri, Gianmaria

    2018-01-01

    Advances in additive manufacturing technologies facilitate the fabrication of cellular materials that have tailored functional characteristics. The application of solid freeform fabrication techniques is especially exploited in designing scaffolds for tissue engineering. In this review, firstly, a classification of cellular materials from a geometric point of view is proposed; then, the main approaches on geometric modeling of cellular materials are discussed. Finally, an investigation on porous scaffolds fabricated by additive manufacturing technologies is pointed out. Perspectives in geometric modeling of scaffolds for tissue engineering are also proposed. PMID:29487626

  20. Knowledge representation to support reasoning based on multiple models

    NASA Technical Reports Server (NTRS)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation-solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  1. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    PubMed

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.

  2. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology

    PubMed Central

    Deodhar, Suruchi; Bisset, Keith R.; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V.

    2014-01-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity. PMID:25530914

  3. Study of abrasive resistance of foundries models obtained with use of additive technology

    NASA Astrophysics Data System (ADS)

    Ol'khovik, Evgeniy

    2017-10-01

    This study considers the resistance of foundry models and patterns made from ABS (PLA) plastic by FDM additive 3D printing to abrasive wear and to the environment of a foundry sand mould. The article describes the technique and equipment for wear testing of casting models and patterns, as well as the manufacturing technique for models produced with a 3D printer (additive technology). A vibration-load scheme was applied in the sample tests. For the most realistic study of the influence of the sand mix on the plastic, models were subjected to actual abrasive-wear conditions. The study also examined applying an acrylic paint coating and a two-component coating to the plastic models. Practical suggestions and recommendations are given for producing master models with FDM technology that achieve durability exceeding 2000 moulding cycles in foundry sand mix.

  4. SLS Model Based Design: A Navigation Perspective

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  5. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm to another. We conclude with a discussion of our experience and document future ideas for a model validation framework.

  6. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants.

    PubMed

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-11-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier-Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. © 2014 Wiley Periodicals, Inc.
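    A hedged sketch of the coupling described here, reduced to its final step: a rule that converts local wall shear stress (which in the paper comes from a Navier-Stokes flow solution) into a cell attachment/proliferation probability within a stimulatory window. The window limits and probabilities are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: shear-stress-dependent growth rule of the kind coupled to a
# flow solution in the cellular growth model. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
tau_lo, tau_hi = 1e-4, 5e-2     # Pa, assumed stimulatory shear-stress window

def growth_probability(tau):
    """Per-step probability that a cell proliferates at shear stress tau (Pa)."""
    return np.where((tau > tau_lo) & (tau < tau_hi), 0.2, 0.01)

tau_local = rng.lognormal(mean=np.log(1e-3), sigma=1.5, size=10)  # strut surface
grow = rng.random(10) < growth_probability(tau_local)
print(np.round(tau_local, 5), grow)
```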

  7. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants

    PubMed Central

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-01-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier–Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. PMID:24664988

  8. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  9. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    EPA Pesticide Factsheets

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optima. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  10. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  11. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  12. Wood lens design philosophy based on a binary additive manufacturing technique

    NASA Astrophysics Data System (ADS)

    Marasco, Peter L.; Bailey, Christopher

    2016-04-01

    Using additive manufacturing techniques in optical engineering to construct a gradient index (GRIN) optic may overcome a number of limitations of GRIN technology. Such techniques are maturing quickly, yielding additional design degrees of freedom for the engineer. How best to employ these degrees of freedom is not completely clear at this time. This paper describes a preliminary design philosophy, including assumptions, pertaining to a particular printing technique for GRIN optics. It includes an analysis based on simulation and initial component measurement.

  13. Using 3D Printing (Additive Manufacturing) to Produce Low-Cost Simulation Models for Medical Training.

    PubMed

    Lichtenberger, John P; Tatum, Peter S; Gada, Satyen; Wyn, Mark; Ho, Vincent B; Liacouras, Peter

    2018-03-01

    This work describes customized, task-specific simulation models derived from 3D printing in clinical settings and medical professional training programs. Simulation models/task trainers serve an array of purposes and desired achievements for the trainee; defining these is the first step in the production process. After this purpose is defined, computer-aided design and 3D printing (additive manufacturing) are used to create a customized anatomical model. Simulation models then undergo initial in-house testing by medical specialists, followed by larger-scale beta testing. Feedback is acquired via surveys to validate effectiveness and to determine whether any future modifications and/or improvements are necessary. Numerous custom simulation models have been successfully completed, with the resulting task trainers designed for procedures including removal of ocular foreign bodies, ultrasound-guided joint injections, nerve block injections, and various suturing and reconstruction procedures. These task trainers have been frequently utilized in the delivery of simulation-based training, with increasing demand. 3D printing has been integral to the production of limited-quantity, low-cost simulation models across a variety of medical specialties. In general, production cost is a small fraction of that of a commercial, generic simulation model, if one is available. These simulation and training models are customized to the educational need and serve an integral role in the education of our military health professionals.

  14. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

    Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
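
    For orientation, the additive (Lin-Ying) hazards model and the generic shape of a penalized selection criterion are sketched below; the paper's exact case-cohort weighted loss and penalty may differ from this schematic form.

    ```latex
    % Additive hazards model: covariates shift the baseline hazard additively.
    \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z(t)
    % Regularized variable selection minimizes a penalized criterion of the form
    \hat{\beta} = \arg\min_{\beta}\; L_n(\beta) + n \sum_{j=1}^{p} p_{\tau}\!\left(|\beta_j|\right)
    % where L_n is a (stratified, case-cohort weighted) least-squares-type loss
    % and p_tau is a sparsity-inducing penalty such as SCAD or the adaptive lasso.
    ```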

  15. The economics of a pharmacy-based central intravenous additive service for paediatric patients.

    PubMed

    Armour, D J; Cairns, C J; Costello, I; Riley, S J; Davies, E G

    1996-10-01

    This study was designed to compare the costs of a pharmacy-based Central Intravenous Additive Service (CIVAS) with those of traditional ward-based preparation of intravenous doses for a paediatric population. Labour costs were derived from timings of the preparation of individual doses in both the pharmacy and the ward by an independent observer. The use of disposables and diluents was recorded and their acquisition costs apportioned to the cost of each dose prepared. Data were collected from 20 CIVAS sessions (501 doses) and 26 ward-based sessions (30 doses). In addition, the costs avoided by the use of part vials in CIVAS were calculated, derived from a total of 50 CIVAS sessions. Labour, disposable and diluent costs were significantly lower for CIVAS compared with ward-based preparation (p < 0.001). The ratio of costs per dose [in 1994 pounds sterling] between ward and pharmacy was 2.35:1 (2.51 pounds:1.07 pounds). Sensitivity analysis of the best and worst staff mixes in both locations gave ratios ranging from 2.3:1 to 4.0:1, always in favour of CIVAS. There were considerable costs avoided in CIVAS from the multiple use of vials; the estimated annual sum derived from the study was 44,000 pounds. In addition, CIVAS was less vulnerable to unanticipated interruptions in work flow than ward-based preparation. CIVAS for children was more economical than traditional ward-based preparation, because of a cost-minimisation effect. Sensitivity analysis showed that these advantages were maintained over a full range of skill mixes. Additionally, significant savings accrued from the multiple use of vials in CIVAS.

  16. Boosting the Performance of Ionic-Liquid-Based Supercapacitors with Polar Additives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Kun; Wu, Jianzhong

    Recent years have witnessed growing interest in both the fundamentals and applications of electric double layer capacitors (EDLCs), also known as supercapacitors. A number of strategies have been explored to optimize the device performance in terms of both the energy and power densities. Because the properties of electric double layers (EDLs) are sensitive to ion distributions in the close vicinity of the electrode surfaces, the supercapacitor performance is sensitive to both the electrode pore structure and the electrolyte composition. In this paper, we study the effects of polar additives on EDLC capacitance using the classical density functional theory within the framework of a coarse-grained model for the microscopic structure of the porous electrodes and room-temperature ionic liquids. The theoretical results indicate that a highly polar, low-molecular-weight additive is able to drastically increase the EDLC capacitance at low bulk concentration. Additionally, the additive is able to dampen the oscillatory dependence of the capacitance on the pore size, thereby boosting the performance of amorphous electrode materials. Finally, the theoretical predictions are directly testable with experiments and provide new insights into the additive effects on EDL properties.

  17. A physiologically based model for tramadol pharmacokinetics in horses.

    PubMed

    Abbiati, Roberto Andrea; Cagnardi, Petra; Ravasio, Giuliano; Villa, Roberto; Manca, Davide

    2017-09-21

    This work proposes an application of a minimal-complexity physiologically based pharmacokinetic model to predict tramadol concentration vs. time profiles in horses. Tramadol is an opioid analgesic also used for veterinary treatments. Researchers and medical doctors can profit from the application of mathematical models as supporting tools to optimize the pharmacological treatment of animal species. The proposed model is based on physiology but adopts the minimal compartmental architecture necessary to describe the experimental data. The model features a system of ordinary differential equations, where most of the model parameters are either assigned or individualized for a given horse using literature data and correlations. The residual parameters, whose values are unknown, are regressed from experimental data. The model proved capable of simulating pharmacokinetic profiles with accuracy. In addition, it provides further insights into unobservable tramadol data, such as tramadol concentration in the liver or the extent of hepatic metabolism and renal excretion. Copyright © 2017 Elsevier Ltd. All rights reserved.
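
    A minimal sketch of the kind of flow-limited compartmental mass balance such a model builds on is shown below; the compartments, parameter names, and values are placeholders for illustration, not the authors' horse-specific model.

    ```python
    # Minimal flow-limited PBPK sketch (placeholder values, not the paper's):
    # blood, liver (with hepatic clearance), and rest-of-body compartments.
    from scipy.integrate import solve_ivp

    Q_liv, V_liv, P_liv = 30.0, 5.0, 1.2       # flow L/min, volume L, partition (assumed)
    Q_rest, V_rest, P_rest = 40.0, 200.0, 2.0  # (assumed)
    V_blood, CL_hep = 40.0, 10.0               # L, L/min (assumed)

    def rhs(t, y):
        c_b, c_liv, c_rest = y                 # concentrations
        dliv = Q_liv * (c_b - c_liv / P_liv) - CL_hep * c_liv / P_liv
        drest = Q_rest * (c_b - c_rest / P_rest)
        dblood = Q_liv * (c_liv / P_liv - c_b) + Q_rest * (c_rest / P_rest - c_b)
        return [dblood / V_blood, dliv / V_liv, drest / V_rest]

    # i.v. bolus giving an initial blood concentration of 2.0 mg/L (assumed)
    sol = solve_ivp(rhs, (0.0, 240.0), [2.0, 0.0, 0.0])
    print(sol.y[0, -1])                        # blood concentration at 240 min
    ```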

  18. Ordered Materials via Additive Driven Assembly and Reaction using Surfactant-Based Templates

    NASA Astrophysics Data System (ADS)

    Beaulieu, Michael R.; Daga, Vikram K.; Lesser, Alan J.; Watkins, James J.

    2011-03-01

    We recently reported (1) the ordering behavior of Pluronic surfactant melts through the addition of aromatic additives with hydrogen bond donating groups, which exhibit selective interactions with the polyethylene oxide (PEO) block. The ordered blends had domain sizes ranging from 12 to 16 nm at additive loadings up to 80%. The goal of this work is to utilize condensation chemistries based on the functionality of similar additives to yield ordered composite materials that could be used for applications involving membranes or dielectric materials. The structure of the blends and composites is determined by small-angle X-ray scattering, which indicates that the ordered structure is preserved following reaction of the additives. Differential scanning calorimetry indicates that an increase in additive loading causes a decrease in the melting temperature and enthalpy of melting of the PEO, which demonstrates that the interaction between the PEO segments and the additive is strong. (1) Daga, V. K., Watkins, J. J. Macromolecules, ASAP.

  19. Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame

    NASA Astrophysics Data System (ADS)

    Lee, Myoungkyu; Oliver, Todd; Moser, Robert

    2017-11-01

    Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.

  20. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease in age-structured populations. The proposed model is a merger of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is still preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders, for high-efficacy treatments.

  1. Compartmental and Spatial Rule-Based Modeling with Virtual Cell.

    PubMed

    Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M

    2017-10-03

    In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Improving Conceptual Understanding and Representation Skills Through Excel-Based Modeling

    NASA Astrophysics Data System (ADS)

    Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.

    2018-02-01

    The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental design study to test the effectiveness of a model-based curriculum focused on the concepts of natural selection and population ecology that makes use of Excel modeling tools (Modeling Instruction in Biology with Excel, MBI-E). The curriculum revolves around the bio-engineering practice of controlling an invasive species. The study took place in the Midwest within ten high schools teaching a regular-level introductory biology class. A post-test was designed that targeted a number of common misconceptions in both concept areas as well as representational usage. The post-test results demonstrate that the MBI-E students significantly outperformed the traditional classes on both natural selection and population ecology concepts, thus overcoming a number of misconceptions. In addition, the MBI-E students made use of more multiple representations and demonstrated greater fascination for science.

  3. The use of multiple models in case-based diagnosis

    NASA Technical Reports Server (NTRS)

    Karamouzis, Stamos T.; Feyock, Stefan

    1993-01-01

    The work described in this paper has as its goal the integration of a number of reasoning techniques into a unified intelligent information system that will aid flight crews with malfunction diagnosis and prognostication. One of these approaches involves using the extensive archive of information contained in aircraft accident reports, along with various models of the aircraft, as the basis for case-based reasoning about malfunctions. Case-based reasoning (CBR) draws conclusions on the basis of similarities between the present situation and prior experience. We maintain that the ability of a CBR program to reason about physical systems is significantly enhanced by the addition of various models to the CBR program. This paper describes the diagnostic concepts implemented in a prototypical case-based reasoner that operates in the domain of in-flight fault diagnosis, the various models used in conjunction with the reasoner's CBR component, and results from a preliminary evaluation.

  4. Simulation based optimized beam velocity in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François

    2017-08-01

    Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built from a beam path and a variable energy input. Beam paths are typically a mix of contour and hatching strategies filling the contours at each slice. Energy input depends on beam intensity and speed and is determined from simple thermal models to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy input is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine, and parts built with it are compared with parts built with an ordinary beam path.
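
    The speed-adjustment logic can be illustrated schematically as below; in the paper the melt pool temperature comes from the full build-chamber thermal simulation, whereas `melt_pool_temp` here is a toy stand-in and every number is hypothetical.

    ```python
    # Schematic stand-in for the beam-speed optimization loop; the toy
    # melt_pool_temp model replaces the paper's 3D thermal simulation.
    def melt_pool_temp(speed_mm_s, local_preheat_c):
        # Toy inverse relation: a slower beam deposits more energy per
        # unit length, so the pool runs hotter (assumption).
        return local_preheat_c + 4.0e4 / speed_mm_s

    def optimize_speed(speed, preheat, t_min=1800.0, t_max=2100.0, step=0.05):
        """Nudge the beam speed until the pool temperature is in range."""
        for _ in range(200):
            t = melt_pool_temp(speed, preheat)
            if t > t_max:
                speed *= 1.0 + step    # speed up to cool the pool
            elif t < t_min:
                speed *= 1.0 - step    # slow down to heat the pool
            else:
                break
        return speed

    print(round(optimize_speed(500.0, preheat=700.0), 1))
    ```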

  5. Model-based segmentation of hand radiographs

    NASA Astrophysics Data System (ADS)

    Weiler, Frank; Vogelsang, Frank

    1998-06-01

    An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity with an appropriate database of reference bones, similar to the atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.
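
    A generic form of such an extended snake energy is sketched below; E_scene stands in for the authors' additional shape-and-topology term, whose exact formulation is specific to their hierarchical image model.

    ```latex
    % Extended active-contour (snake) energy with an a priori knowledge term.
    % E_scene is a generic placeholder for the authors' shape/topology energy.
    E[\mathbf{v}] = \int_0^1 \Big(
        \tfrac{\alpha}{2}\,\lvert \mathbf{v}'(s) \rvert^2
      + \tfrac{\beta}{2}\,\lvert \mathbf{v}''(s) \rvert^2   % internal smoothness
      + E_{\mathrm{img}}\big(\mathbf{v}(s)\big)             % image (edge) term
      + E_{\mathrm{scene}}\big(\mathbf{v}(s)\big)           % a priori knowledge
    \Big)\, ds
    ```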

  6. ADPROCLUS: a graphical user interface for fitting additive profile clustering models to object by variable data matrices.

    PubMed

    Wilderjans, Tom F; Ceulemans, Eva; Van Mechelen, Iven; Depril, Dirk

    2011-03-01

    In many areas of psychology, one is interested in disclosing the underlying structural mechanisms that generated an object by variable data set. Often, based on theoretical or empirical arguments, it may be expected that these underlying mechanisms imply that the objects are grouped into clusters that are allowed to overlap (i.e., an object may belong to more than one cluster). In such cases, analyzing the data with Mirkin's additive profile clustering model may be appropriate. In this model: (1) each object may belong to no, one or several clusters, (2) there is a specific variable profile associated with each cluster, and (3) the scores of the objects on the variables can be reconstructed by adding the cluster-specific variable profiles of the clusters the object in question belongs to. Until now, however, no software program has been publicly available to perform an additive profile clustering analysis. For this purpose, in this article, the ADPROCLUS program, steered by a graphical user interface, is presented. We further illustrate its use by means of the analysis of a patient by symptom data matrix.
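
    The decomposition underlying the additive profile clustering model can be stated compactly as below (a standard rendering of Mirkin's model; the symbols are generic):

    ```latex
    % Additive profile clustering: the I x J data matrix X is reconstructed
    % by summing the profiles of the (possibly overlapping) clusters each
    % object belongs to, estimated in least squares:
    \mathbf{X} \approx \mathbf{A}\mathbf{P}, \qquad
    \mathbf{A} \in \{0,1\}^{I \times K}, \quad
    \mathbf{P} \in \mathbb{R}^{K \times J},
    \qquad
    \min_{\mathbf{A},\,\mathbf{P}} \; \lVert \mathbf{X} - \mathbf{A}\mathbf{P} \rVert_F^2
    % A holds overlapping binary memberships; row k of P is the profile of
    % cluster k.
    ```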

  7. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    NASA Astrophysics Data System (ADS)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models have been based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model, and hyperbolic-sine-function-based models. Although the hyperbolic-sine-function model can predict the memristor's electrical properties, it was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine-function model, the state variable equation was modified. The addition of a window function alone did not improve the fit; multiplying Yakopcic's state variable model into Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.
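
    For reference, one widely used hyperbolic-sine I-V form (Chang-type) and the generic shape of a state equation are sketched below; the paper's modification multiplies a Yakopcic-type state variable model into Chang's model, so this is only meant to show the family of models involved.

    ```latex
    % Chang-type hyperbolic-sine memristor model; w in [0,1] is the internal
    % state variable weighting the conductive (sinh) branch.
    I(t) = (1 - w)\,\alpha\left[1 - e^{-\beta V}\right] + w\,\gamma \sinh(\delta V)
    % Generic state equation: a voltage-dependent drive g(V) modulated by a
    % state-dependent (window-like) function f(w), the part modified here.
    \frac{dw}{dt} = g(V)\, f(w)
    ```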

  8. Crowd evacuation model based on bacterial foraging algorithm

    NASA Astrophysics Data System (ADS)

    Shibiao, Mu; Zhijun, Chen

    To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, a target optimization function is built from walking and queuing times, and the BFA is used to optimize this objective function. Finally, through real and simulation experiments, the relationships between evacuation time, exit width, pedestrian density, and average evacuation speed are analyzed. The results show that the model can effectively describe a real evacuation.
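
    A minimal sketch of a floor-field movement rule of the kind described is shown below; the field names and the sensitivity parameters k_s and k_d are illustrative assumptions, not the paper's formulation.

    ```python
    # Sketch of a cellular-automaton movement rule: each free neighbor cell
    # is scored by a static field (distance to exit) and a dynamic field
    # (crowding); k_s and k_d are hypothetical sensitivity weights.
    import math, random

    def move_probabilities(static_field, dynamic_field, k_s=2.0, k_d=1.0):
        """Fields map cell -> value for each free neighbor cell."""
        weights = {c: math.exp(k_s * static_field[c] - k_d * dynamic_field[c])
                   for c in static_field}
        z = sum(weights.values())
        return {c: w / z for c, w in weights.items()}

    probs = move_probabilities({(0, 1): 0.9, (1, 0): 0.4},
                               {(0, 1): 0.2, (1, 0): 0.0})
    step = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, step)
    ```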

  9. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  10. Impact of Phosphorus-Based Food Additives on Bone and Mineral Metabolism.

    PubMed

    Gutiérrez, Orlando M; Luzuriaga-McPherson, Alexandra; Lin, Yiming; Gilbert, Linda C; Ha, Shin-Woo; Beck, George R

    2015-11-01

    Phosphorus-based food additives can substantially increase total phosphorus intake per day, but the effect of these additives on endocrine factors regulating bone and mineral metabolism is unclear. This study aimed to examine the effect of phosphorus additives on markers of bone and mineral metabolism. Design, Setting, and Participants: This was a feeding study of 10 healthy individuals fed a diet providing ∼1000 mg of phosphorus/d using foods known to be free of phosphorus additives for 1 week (low-additive diet), immediately followed by a diet containing identical food items, except that the foods contained phosphorus additives (additive-enhanced diet). Parallel studies were conducted in animals fed low- (0.2%) and high- (1.8%) phosphorus diets for 5 or 15 weeks. The changes in markers of mineral metabolism after each diet period were measured. Participants were 32 ± 8 years old, 30% male, and 70% black. The measured phosphorus content of the additive-enhanced diet was 606 ± 125 mg higher than that of the low-additive diet (P < .001). After 1 week of the low-additive diet, consuming the additive-enhanced diet for 1 week significantly increased circulating fibroblast growth factor 23 (FGF23), osteopontin, and osteocalcin concentrations by 23, 10, and 11%, respectively, and decreased mean sclerostin concentrations (P < .05 for all). Similarly, high-phosphorus diets in mice significantly increased blood FGF23, osteopontin, and osteocalcin, lowered sclerostin, and decreased bone mineral density (P < .05 for all). The enhanced phosphorus content of processed foods can disturb bone and mineral metabolism in humans. The results of the animal studies suggest that this may compromise bone health.

  11. Low-cost Electromagnetic Heating Technology for Polymer Extrusion-based Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, William G.; Rios, Orlando; Akers, Ronald R.

    To improve the flow of materials used in polymer additive manufacturing, ORNL and Ajax Tocco created an induction system for heating the fused deposition modeling (FDM) nozzles used in polymer additive manufacturing. The system is capable of reaching a temperature of 230 °C, a typical nozzle temperature for extruding ABS polymers, in 17 seconds. A prototype system was built at ORNL and sent to Ajax Tocco, which analyzed the system and created a finalized power supply. The induction system was mounted to a PrintSpace Altair desktop printer and used to create several test parts similar in quality to those created using a resistively heated nozzle.

  12. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  13. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  14. Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models

    NASA Astrophysics Data System (ADS)

    Southworth, John

    2010-11-01

    I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components.

  15. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain, however, in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, to propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture, ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP, and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; and relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra) should continue to be pursued.
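
    The core PEM relations the review refers to throughout can be summarized as:

    ```latex
    % Light-use-efficiency backbone of production efficiency models:
    \mathrm{GPP} = \varepsilon \cdot f_{\mathrm{APAR}} \cdot \mathrm{PAR},
    \qquad
    \mathrm{NPP} = \mathrm{GPP} - R_a
    % PAR: incident photosynthetically active radiation; f_APAR: fraction
    % absorbed by the canopy (satellite-derived); epsilon: light use
    % efficiency (ideally varying by PFT, per the review); R_a: autotrophic
    % respiration.
    ```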

  16. Characterization of metal additive manufacturing surfaces using synchrotron X-ray CT and micromechanical modeling

    NASA Astrophysics Data System (ADS)

    Kantzos, C. A.; Cunningham, R. W.; Tari, V.; Rollett, A. D.

    2018-05-01

    Characterizing complex surface topologies is necessary to understand stress concentrations created by rough surfaces, particularly those made via laser powder-bed additive manufacturing (AM). Synchrotron-based X-ray microtomography (μXCT) of AM surfaces was shown to provide high resolution detail of surface features and near-surface porosity. Using the CT reconstructions to instantiate a micromechanical model indicated that surface notches and near-surface porosity both act as stress concentrators, while adhered powder carried little to no load. Differences in powder size distribution had no direct effect on the relevant surface features, nor on stress concentrations. Conventional measurements of surface roughness, which are highly influenced by adhered powder, are therefore unlikely to contain the information relevant to damage accumulation and crack initiation.

  17. Characterization of metal additive manufacturing surfaces using synchrotron X-ray CT and micromechanical modeling

    NASA Astrophysics Data System (ADS)

    Kantzos, C. A.; Cunningham, R. W.; Tari, V.; Rollett, A. D.

    2017-12-01

    Characterizing complex surface topologies is necessary to understand stress concentrations created by rough surfaces, particularly those made via laser powder-bed additive manufacturing (AM). Synchrotron-based X-ray microtomography (μXCT) of AM surfaces was shown to provide high resolution detail of surface features and near-surface porosity. Using the CT reconstructions to instantiate a micromechanical model indicated that surface notches and near-surface porosity both act as stress concentrators, while adhered powder carried little to no load. Differences in powder size distribution had no direct effect on the relevant surface features, nor on stress concentrations. Conventional measurements of surface roughness, which are highly influenced by adhered powder, are therefore unlikely to contain the information relevant to damage accumulation and crack initiation.

  18. Imidazolium-based ionic liquids used as additives in the nanolubrication of silicon surfaces.

    PubMed

    Amorim, Patrícia M; Ferraria, Ana M; Colaço, Rogério; Branco, Luís C; Saramago, Benilde

    2017-01-01

    In recent years, with the development of micro/nanoelectromechanical systems (MEMS/NEMS), the demand for efficient lubricants of silicon surfaces has intensified. Although the use of ionic liquids (ILs) as additives to base oils in the lubrication of steel/steel or other types of metal/metal tribological pairs has been investigated, the number of studies involving Si is very low. In this work, we tested imidazolium-based ILs as additives to the base oil polyethylene glycol (PEG) to lubricate Si surfaces. The friction coefficients were measured in a nanotribometer. The viscosity of the PEG + IL mixtures as well as their contact angles on the Si surface were measured. The topography and chemical composition of the substrate surfaces were determined with atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS), respectively. Due to the hygroscopic properties of PEG, the first step was to assess the effect of the presence of water. Then, a series of ILs based on the cations 1-ethyl-3-methylimidazolium [EMIM], 1-butyl-3-methylimidazolium [BMIM], 1-ethyl-3-vinylimidazolium [EVIM], 1-(2-hydroxyethyl)-3-methylimidazolium [C2OHMIM] and 1-allyl-3-methylimidazolium [AMIM], combined with the anions dicyanamide [DCA], trifluoromethanesulfonate [TfO], and ethylsulfate [EtSO4], were added to dry PEG. All additives (2 wt %) led to a decrease in friction coefficient as well as an increase in viscosity (with the exception of [AMIM][TfO]) and improved the Si wettability. The additives based on the anion [EtSO4] exhibited the most promising tribological behavior, which was attributed to a strong interaction with the Si surface ensuring the formation of a stable surface layer, which hinders the contact between the sliding surfaces.

  19. Modeling the growth and branching of plants: A simple rod-based model

    NASA Astrophysics Data System (ADS)

    Faruk Senan, Nur Adila; O'Reilly, Oliver M.; Tresierras, Timothy N.

    A rod-based model for plant growth and branching is developed in this paper. Specifically, Euler's theory of the elastica is modified to accommodate growth and remodeling. In addition, branching is characterized using a configuration force and evolution equations are postulated for the flexural stiffness and intrinsic curvature. The theory is illustrated with examples of multiple static equilibria of a branched plant and the remodeling and tip growth of a plant stem under gravitational loading.

  20. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  1. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-07-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2 based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
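
    A single discrete-Kalman-filter update for one regional emission scaling factor is sketched below, with the DDM sensitivity playing the role of the observation operator; the function and all numbers are hypothetical, not the CAMx/DKF implementation used in the study.

    ```python
    # Schematic scalar DKF update for a regional NOx emission scaling factor.
    # sens = d(NO2 column)/d(scale) from DDM; all values are hypothetical.
    def dkf_update(scale, p_var, obs, model_col, sens, obs_var):
        innovation = obs - model_col            # observed minus modeled column
        s = sens * p_var * sens + obs_var       # innovation variance
        k = p_var * sens / s                    # Kalman gain
        return scale + k * innovation, (1.0 - k * sens) * p_var

    new_scale, new_var = dkf_update(scale=1.0, p_var=0.25,
                                    obs=3.2e15, model_col=2.5e15,
                                    sens=2.4e15, obs_var=(0.4e15) ** 2)
    print(new_scale, new_var)   # iterate until the scaling factor converges
    ```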

  2. Boron nitride nanotube-based biosensing of various bacterium/viruses: continuum modelling-based simulation approach.

    PubMed

    Panchal, Mitesh B; Upadhyay, Sanjay H

    2014-09-01

    In this study, the feasibility of single-walled boron nitride nanotube (SWBNNT)-based biosensors for the mass-based detection of various bacteria and viruses is assessed using a continuum modelling-based simulation approach. Various bacteria and viruses are considered as additional mass attached at the free end of the cantilevered SWBNNT configuration. A resonant frequency shift-based analysis is performed for the adsorption of the various bacteria/viruses onto the SWBNNT-based sensor system. An analytical approach based on continuum mechanics, considering effective wall thickness, is used to validate the finite element method (FEM)-based simulation results, which rest on continuum volume-based modelling of the SWBNNT. The FEM-based simulation results are found to be in excellent agreement with the analytical results, supporting the analysis of SWBNNTs for their wide range of applications such as nanoresonators, biosensors, gas sensors, and transducers. The obtained results suggest that using a smaller SWBNNT enhances the sensitivity of the sensor system, enabling effective detection of a bacterium/virus with a mass of 4.28 × 10⁻²⁴ kg.
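
    The mass-detection principle behind the frequency-shift analysis can be summarized with the standard point-mass approximation (generic symbols, not the paper's notation):

    ```latex
    % Cantilevered resonator with an adsorbed point mass at the free end:
    f = \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m_{\mathrm{eff}} + \Delta m}}
    \quad\Longrightarrow\quad
    \Delta m = \frac{k_{\mathrm{eff}}}{4\pi^2}\left(\frac{1}{f_1^{\,2}} - \frac{1}{f_0^{\,2}}\right)
    % f_0, f_1: resonant frequencies before/after adsorption. Smaller tubes
    % have smaller effective mass, so a given Delta m produces a larger
    % relative frequency shift, consistent with the reported size effect.
    ```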

  3. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented, along with recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.

  4. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  5. A physiologically based toxicokinetic model for methylmercury in female American kestrels

    USGS Publications Warehouse

    Nichols, J.W.; Bennett, R.S.; Rossmann, R.; French, J.B.; Sappington, K.G.

    2010-01-01

    A physiologically based toxicokinetic (PBTK) model was developed to describe the uptake, distribution, and elimination of methylmercury (CH3Hg) in female American kestrels. The model consists of six tissue compartments corresponding to the brain, liver, kidney, gut, red blood cells, and remaining carcass. Additional compartments describe the elimination of CH3Hg to eggs and growing feathers. Dietary uptake of CH3Hg was modeled as a diffusion-limited process, and the distribution of CH3Hg among compartments was assumed to be mediated by the flow of blood plasma. To the extent possible, model parameters were developed using information from American kestrels. Additional parameters were based on measured values for closely related species and allometric relationships for birds. The model was calibrated using data from dietary dosing studies with American kestrels. Good agreement between model simulations and measured CH3Hg concentrations in blood and tissues during the loading phase of these studies was obtained by fitting model parameters that control dietary uptake of CH3Hg and possible hepatic demethylation. Modeled results tended to underestimate the observed effect of egg production on circulating levels of CH3Hg. In general, however, simulations were consistent with observed patterns of CH3Hg uptake and elimination in birds, including the dominant role of feather molt. This model could be used to extrapolate CH3Hg kinetics from American kestrels to other bird species by appropriate reassignment of parameter values. Alternatively, when combined with a bioenergetics-based description, the model could be used to simulate CH3Hg kinetics in a long-term environmental exposure. © 2010 SETAC.
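
    Generic balances of the type described, flow-limited distribution mediated by plasma flow plus a diffusion-limited dietary uptake, are sketched below; the exact compartmentalization and parameter values are the authors'.

    ```latex
    % Flow-limited tissue compartment t (plasma-flow mediated distribution):
    V_t \frac{dC_t}{dt} = Q_t \left( C_p - \frac{C_t}{P_t} \right)
    % Diffusion-limited dietary uptake across the gut wall:
    \frac{dA_{\mathrm{gut}}}{dt} = PA_{\mathrm{gut}}\left( C_{\mathrm{lumen}} - \frac{C_{\mathrm{gut}}}{P_{\mathrm{gut}}} \right)
    % V_t, Q_t, P_t: tissue volume, plasma flow, partition coefficient;
    % PA_gut: permeability-area product; C_p: plasma concentration.
    ```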

  6. Does the model of additive effect in placebo research still hold true? A narrative review

    PubMed Central

    Berger, Bettina; Weger, Ulrich; Heusser, Peter

    2017-01-01

    Personalised and contextualised care has become a major demand of people involved in healthcare, suggesting a move toward person-centred medicine. The assessment of person-centred medicine can be most effectively achieved if treatments are investigated using ‘with versus without’ person-centredness or integrative study designs. However, this assumes that the components of an integrative or person-centred intervention have an additive relationship to produce the total effect. Beecher’s model of additivity assumes an additive relation between placebo and drug effects and thus presents an arithmetic summation. So far, no review has been carried out assessing the validity of the additive model, which is to be questioned and more closely investigated in this review. Initial searches for primary studies were undertaken in July 2016 using Pubmed and Google Scholar. In order to find matching publications of similar magnitude for the comparison part of this review, corresponding matches for all included reviews were sought. A total of 22 reviews and 3 clinical and experimental studies fulfilled the inclusion criteria. The results pointed to the following factors actively questioning the additive model: interactions of various effects, trial design, conditioning, context effects and factors, neurobiological factors, mechanism of action, statistical factors, intervention-specific factors (alcohol, caffeine), side-effects and type of intervention. All but one of the closely assessed publications questioned the additive model. A closer examination of study design is necessary; a more systematic approach geared toward solutions could be a suggestion for future research in this field. PMID:28321318

  7. Does the model of additive effect in placebo research still hold true? A narrative review.

    PubMed

    Boehm, Katja; Berger, Bettina; Weger, Ulrich; Heusser, Peter

    2017-03-01

    Personalised and contextualised care has become a major demand of people involved in healthcare, suggesting a move toward person-centred medicine. The assessment of person-centred medicine can be most effectively achieved if treatments are investigated using 'with versus without' person-centredness or integrative study designs. However, this assumes that the components of an integrative or person-centred intervention have an additive relationship to produce the total effect. Beecher's model of additivity assumes an additive relation between placebo and drug effects and thus presents an arithmetic summation. So far, no review has been carried out assessing the validity of the additive model, which is to be questioned and more closely investigated in this review. Initial searches for primary studies were undertaken in July 2016 using Pubmed and Google Scholar. In order to find matching publications of similar magnitude for the comparison part of this review, corresponding matches for all included reviews were sought. A total of 22 reviews and 3 clinical and experimental studies fulfilled the inclusion criteria. The results pointed to the following factors actively questioning the additive model: interactions of various effects, trial design, conditioning, context effects and factors, neurobiological factors, mechanism of action, statistical factors, intervention-specific factors (alcohol, caffeine), side-effects and type of intervention. All but one of the closely assessed publications questioned the additive model. A closer examination of study design is necessary; a more systematic approach geared toward solutions could be a suggestion for future research in this field.
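
    The additivity assumption that this review questions can be written in its simplest arithmetic form (a schematic statement, not a quotation from Beecher):

    ```latex
    % Additive model: the drug-arm response is a plain sum of components,
    E_{\mathrm{drug\ arm}} = E_{\mathrm{specific}} + E_{\mathrm{placebo}} + E_{\mathrm{natural\ course}}
    % so the specific drug effect is estimated as an arm difference:
    E_{\mathrm{specific}} = E_{\mathrm{drug\ arm}} - E_{\mathrm{placebo\ arm}}
    % The review's point: interactions among these components (conditioning,
    % context, neurobiology, etc.) violate the plain-summation assumption.
    ```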

  8. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, shared equally between the two fault-based combined inversion models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  9. Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression

    ERIC Educational Resources Information Center

    Bearman, Sarah Kate; Stice, Eric

    2008-01-01

    Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…

  10. Shell model for drag reduction with polymer additives in homogeneous turbulence.

    PubMed

    Benzi, Roberto; De Angelis, Elisabetta; Govindarajan, Rama; Procaccia, Itamar

    2003-07-01

    Recent direct numerical simulations of the finite-extensibility nonlinear elastic dumbbell model with the Peterlin approximation of non-Newtonian hydrodynamics revealed that the phenomenon of drag reduction by polymer additives exists (albeit in reduced form) also in homogeneous turbulence. We use here a simple shell model for homogeneous viscoelastic flows, which recaptures the essential observations of the full simulations. The simplicity of the shell model allows us to offer a transparent explanation of the main observations. It is shown that the mechanism for drag reduction operates mainly on large scales. Understanding the mechanism allows us to predict how the amount of drag reduction depends on the various parameters in the model. The main conclusion is that drag reduction is not a universal phenomenon; it peaks in a window of parameters such as the Reynolds number and the relaxation rate of the polymer.

  11. A game theory-based trust measurement model for social networks.

    PubMed

    Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong

    2016-01-01

    In social networks, trust is a complex concept. Participants in online social networks want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent. Modeling trust needs to consider interaction history, recommendations, user behaviors and so on. Therefore, modeling trust is an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects (service reliability, feedback effectiveness, and recommendation credibility) to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective. The free-riding problem can be resolved effectively by adding the proposed punishment mechanism.
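
    The following is a minimal, illustrative sketch of such a composite trust score; the weights, the [0, 1] component scales, and the fixed penalty are assumptions for exposition, not the paper's actual formulas.

    ```python
    # Hypothetical composite trust score with a toy punishment step.

    def trust_degree(service_reliability: float,
                     feedback_effectiveness: float,
                     recommendation_credibility: float,
                     weights=(0.4, 0.3, 0.3)) -> float:
        """Combine three trust aspects (each scaled to [0, 1]) into one score."""
        components = (service_reliability, feedback_effectiveness,
                      recommendation_credibility)
        return sum(w * c for w, c in zip(weights, components))

    def punish(trust: float, defected: bool, penalty: float = 0.2) -> float:
        """Free-riding costs a fixed amount of trust, making repeated
        defection unprofitable (penalty value is an assumption)."""
        return max(0.0, trust - penalty) if defected else trust

    score = trust_degree(0.9, 0.7, 0.8)
    print(score, punish(score, defected=True))
    ```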

  12. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  13. Efficient Agent-Based Models for Non-Genomic Evolution

    NASA Technical Reports Server (NTRS)

    Gupta, Nachi; Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Modeling dynamical systems composed of aggregations of primitive proteins is critical to the field of astrobiological science involving early evolutionary structures and the origins of life. Unfortunately, traditional non-multi-agent methods either require oversimplified models or are slow to converge to adequate solutions. This paper shows how to address these deficiencies by modeling the protein aggregations through a utility-based multi-agent system. In this method each agent controls the properties of a set of proteins assigned to that agent. Some of these properties determine the dynamics of the system, such as the ability for some proteins to join or split other proteins, while additional properties determine the aggregation's fitness as a viable primitive cell. We show that over a wide range of starting conditions, there are mechanisms that allow protein aggregations to achieve high values of overall fitness. In addition, through the use of agent-specific utilities that remain aligned with the overall global utility, we are able to reach these conclusions with 50 times fewer learning steps.

  14. Modeling acute respiratory illness during the 2007 San Diego wildland fires using a coupled emissions-transport system and generalized additive modeling.

    PubMed

    Thelen, Brian; French, Nancy H F; Koziol, Benjamin W; Billmire, Michael; Owen, Robert Chris; Johnson, Jeffrey; Ginsberg, Michele; Loboda, Tatiana; Wu, Shiliang

    2013-11-05

    A study of the impacts on respiratory health of the 2007 wildland fires in and around San Diego County, California is presented. This study helps to address the impact of fire emissions on human health by modeling the exposure potential of proximate populations to atmospheric particulate matter (PM) from vegetation fires. Currently, there is no standard methodology to model and forecast the potential respiratory health effects of PM plumes from wildland fires, and in part this is due to a lack of methodology for rigorously relating the two. The contribution in this research specifically targets that absence by modeling explicitly the emission, transmission, and distribution of PM following a wildland fire in both space and time. Coupled empirical and deterministic models describing particulate matter (PM) emissions and atmospheric dispersion were linked to spatially explicit syndromic surveillance health data records collected through the San Diego Aberration Detection and Incident Characterization (SDADIC) system using a Generalized Additive Modeling (GAM) statistical approach. Two levels of geographic aggregation were modeled, a county-wide regional level and division of the county into six sub-regions. Selected health syndromes within SDADIC from 16 emergency departments within San Diego County relevant for respiratory health were identified for inclusion in the model. The model captured the variability in emergency department visits due to several factors by including nine ancillary variables in addition to wildfire PM concentration. The model coefficients and nonlinear function plots indicate that at peak fire PM concentrations the odds of a person seeking emergency care are increased by approximately 50% compared to non-fire conditions (40% for the regional case, 70% for a geographically specific case). The sub-regional analyses show that demographic variables also influence respiratory health outcomes from smoke. The model developed in this study allows a
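
    As a hedged illustration of the GAM structure described here, the sketch below fits a logistic GAM on synthetic data with one smooth fire-PM term and a single stand-in ancillary covariate (the study used nine). It relies on the pygam package; all variable names and values are invented for exposition.

    ```python
    import numpy as np
    from pygam import LogisticGAM, s, l  # smooth and linear terms

    rng = np.random.default_rng(0)
    n = 2000
    pm = rng.gamma(2.0, 20.0, n)                 # synthetic fire-PM concentration
    temp = rng.normal(20.0, 5.0, n)              # stand-in ancillary variable
    logit = -3.0 + 0.4 * np.log1p(pm) + 0.02 * temp
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # ED visits

    X = np.column_stack([pm, temp])
    gam = LogisticGAM(s(0) + l(1)).fit(X, y)     # smooth PM + linear covariate

    # Odds ratio of an ED visit at peak PM vs. clean-air conditions
    p_hi = gam.predict_proba(np.array([[pm.max(), 20.0]]))[0]
    p_lo = gam.predict_proba(np.array([[0.0, 20.0]]))[0]
    print("odds ratio:", (p_hi / (1 - p_hi)) / (p_lo / (1 - p_lo)))
    ```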

  15. Modeling acute respiratory illness during the 2007 San Diego wildland fires using a coupled emissions-transport system and generalized additive modeling

    PubMed Central

    2013-01-01

    Background A study of the impacts on respiratory health of the 2007 wildland fires in and around San Diego County, California is presented. This study helps to address the impact of fire emissions on human health by modeling the exposure potential of proximate populations to atmospheric particulate matter (PM) from vegetation fires. Currently, there is no standard methodology to model and forecast the potential respiratory health effects of PM plumes from wildland fires, and in part this is due to a lack of methodology for rigorously relating the two. The contribution in this research specifically targets that absence by modeling explicitly the emission, transmission, and distribution of PM following a wildland fire in both space and time. Methods Coupled empirical and deterministic models describing particulate matter (PM) emissions and atmospheric dispersion were linked to spatially explicit syndromic surveillance health data records collected through the San Diego Aberration Detection and Incident Characterization (SDADIC) system using a Generalized Additive Modeling (GAM) statistical approach. Two levels of geographic aggregation were modeled, a county-wide regional level and division of the county into six sub-regions. Selected health syndromes within SDADIC from 16 emergency departments within San Diego County relevant for respiratory health were identified for inclusion in the model. Results The model captured the variability in emergency department visits due to several factors by including nine ancillary variables in addition to wildfire PM concentration. The model coefficients and nonlinear function plots indicate that at peak fire PM concentrations the odds of a person seeking emergency care are increased by approximately 50% compared to non-fire conditions (40% for the regional case, 70% for a geographically specific case). The sub-regional analyses show that demographic variables also influence respiratory health outcomes from smoke. Conclusions The

  16. An Overview of Ni Base Additive Fabrication Technologies for Aerospace Applications (Preprint)

    DTIC Science & Technology

    2011-03-01

    fusion welding processes that have the ability to add filler material can be used as additive manufacturing processes. The majority of the work in the ... Laser Additive Manufacturing (LAM): The LAM process uses a conventional laser welding heat source (CO2 or solid-state laser) combined with a ... wrought properties. The LAM process typically has a lower deposition rate (0.5-10 lbs/hr) compared to EB, PTA or TIG based processes, although as

  17. Fault diagnosis based on continuous simulation models

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like the observed faulty system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  18. Dynamic Structure-Based Pharmacophore Model Development: A New and Effective Addition in the Histone Deacetylase 8 (HDAC8) Inhibitor Discovery

    PubMed Central

    Thangapandian, Sundarapandian; John, Shalini; Lee, Yuno; Kim, Songmi; Lee, Keun Woo

    2011-01-01

    Histone deacetylase 8 (HDAC8) is an enzyme involved in deacetylating the amino groups of terminal lysine residues, thereby repressing the transcription of various genes, including tumor suppressor genes. Overexpression of HDAC8 has been observed in many cancers, and thus inhibition of this enzyme has emerged as an efficient cancer therapeutic strategy. In an effort to facilitate the future discovery of HDAC8 inhibitors, we developed two pharmacophore models containing six and five pharmacophoric features, respectively, using representative structures from two molecular dynamics (MD) simulations performed in the Gromacs 4.0.5 package. Various analyses of the trajectories obtained from the MD simulations displayed the changes upon inhibitor binding. Thus, utilizing dynamically responsive protein structures in pharmacophore development has the added advantage of accounting for the conformational flexibility of the protein. The MD trajectories were clustered based on the single-linkage method, and representative structures were taken to be used in the pharmacophore model development. Active-site-complementing structure-based pharmacophore models were developed using the Discovery Studio 2.5 program and validated using a dataset of known HDAC8 inhibitors. Virtual screening of a chemical database coupled with a drug-like filter identified drug-like hit compounds that match the pharmacophore models. Molecular docking of these hits reduced the false positives and identified two potential compounds to be used in future HDAC8 inhibitor design. PMID:22272142

  19. Imidazolium-based ionic liquids used as additives in the nanolubrication of silicon surfaces

    PubMed Central

    Amorim, Patrícia M; Ferraria, Ana M; Colaço, Rogério; Branco, Luís C

    2017-01-01

    In recent years, with the development of micro/nanoelectromechanical systems (MEMS/NEMS), the demand for efficient lubricants for silicon surfaces has intensified. Although the use of ionic liquids (ILs) as additives to base oils in the lubrication of steel/steel or other types of metal/metal tribological pairs has been investigated, the number of studies involving Si is very low. In this work, we tested imidazolium-based ILs as additives to the base oil polyethylene glycol (PEG) to lubricate Si surfaces. The friction coefficients were measured in a nanotribometer. The viscosity of the PEG + IL mixtures as well as their contact angles on the Si surface were measured. The topography and chemical composition of the substrate surfaces were determined with atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS), respectively. Due to the hygroscopic properties of PEG, the first step was to assess the effect of the presence of water. Then, a series of ILs based on the cations 1-ethyl-3-methylimidazolium [EMIM], 1-butyl-3-methylimidazolium [BMIM], 1-ethyl-3-vinylimidazolium [EVIM], 1-(2-hydroxyethyl)-3-methylimidazolium [C2OHMIM] and 1-allyl-3-methylimidazolium [AMIM] combined with the anions dicyanamide [DCA], trifluoromethanesulfonate [TfO], and ethylsulfate [EtSO4] were added to dry PEG. All additives (2 wt %) led to a decrease in friction coefficient as well as an increase in viscosity (with the exception of [AMIM][TfO]) and improved the Si wettability. The additives based on the anion [EtSO4] exhibited the most promising tribological behavior, which was attributed to a strong interaction with the Si surface ensuring the formation of a stable surface layer, which hinders the contact between the sliding surfaces. PMID:29046844

  20. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions are clarified about the role of representational form in compilation. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  1. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  2. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  3. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Treesearch

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  4. Application of Finite Element, Phase-field, and CALPHAD-based Methods to Additive Manufacturing of Ni-based Superalloys.

    PubMed

    Keller, Trevor; Lindwall, Greta; Ghosh, Supriyo; Ma, Li; Lane, Brandon M; Zhang, Fan; Kattner, Ursula R; Lass, Eric A; Heigel, Jarred C; Idell, Yaakov; Williams, Maureen E; Allen, Andrew J; Guyer, Jonathan E; Levine, Lyle E

    2017-10-15

    Numerical simulations are used in this work to investigate aspects of microstructure and microsegregation during rapid solidification of a Ni-based superalloy in a laser powder bed fusion additive manufacturing process. Thermal modeling by finite element analysis simulates the laser melt pool, with surface temperatures in agreement with in situ thermographic measurements on Inconel 625. Geometric and thermal features of the simulated melt pools are extracted and used in subsequent mesoscale simulations. Solidification in the melt pool is simulated on two length scales. For the multicomponent alloy Inconel 625, microsegregation between dendrite arms is calculated using the Scheil-Gulliver solidification model and DICTRA software. Phase-field simulations, using Ni-Nb as a binary analogue to Inconel 625, produced microstructures with primary cellular/dendritic arm spacings in agreement with measured spacings in experimentally observed microstructures and a lesser extent of microsegregation than predicted by DICTRA simulations. The composition profiles are used to compare thermodynamic driving forces for nucleation against experimentally observed precipitates identified by electron and X-ray diffraction analyses. Our analysis lists the precipitates that may form from the FCC phase of enriched interdendritic compositions and compares these against experimentally observed phases from 1 h heat treatments at two temperatures: stress relief at 1143 K (870 °C) or homogenization at 1423 K (1150 °C).
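
    The Scheil-Gulliver estimate of interdendritic enrichment has a compact closed form, sketched below; the nominal composition and partition coefficient are illustrative placeholders, not values taken from the paper's DICTRA calculations.

    ```python
    import numpy as np

    def scheil_liquid(c0: float, k: float, fs: np.ndarray) -> np.ndarray:
        """Scheil-Gulliver liquid composition C_L = C0 * (1 - f_s)**(k - 1)."""
        return c0 * (1.0 - fs) ** (k - 1.0)

    fs = np.linspace(0.0, 0.99, 100)           # solid fraction during freezing
    k = 0.5                                    # assumed partition coefficient (<1)
    c_liq = scheil_liquid(c0=4.0, k=k, fs=fs)  # nominal 4 wt% solute
    c_sol = k * c_liq                          # solid composition at the interface
    print(f"liquid enriched from 4.0 to {c_liq[-1]:.1f} wt% at f_s = 0.99")
    ```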

  5. USING ECO-EVOLUTIONARY INDIVIDUAL-BASED MODELS TO INVESTIGATE SPATIALLY-DEPENDENT PROCESSES IN CONSERVATION GENETICS

    EPA Science Inventory

    Eco-evolutionary population simulation models are powerful new forecasting tools for exploring management strategies for climate change and other dynamic disturbance regimes. Additionally, eco-evo individual-based models (IBMs) are useful for investigating theoretical feedbacks ...

  6. Hybrid attacks on model-based social recommender systems

    NASA Astrophysics Data System (ADS)

    Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao

    2017-10-01

    With the growing popularity of online social platforms, social network-based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of the rating attack and the relation attack, and term it the hybrid attack. Furthermore, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings will be influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on feedback links from normal users to become more powerful; unilateral links alone can make the hybrid attack effective enough. Since unilateral links are much cheaper, the hybrid attack poses a serious threat to model-based social recommender systems.

  7. Event-based soil loss models for construction sites

    NASA Astrophysics Data System (ADS)

    Trenouth, William R.; Gharabaghi, Bahram

    2015-05-01

    The elevated rates of soil erosion stemming from land clearing and grading activities during urban development can result in excessive amounts of eroded sediments entering waterways and causing harm to the biota living therein. However, construction site event-based soil loss simulations, required for reliable design of erosion and sediment controls, are one of the most uncertain types of hydrologic models. This study presents models with an improved degree of accuracy to advance the design of erosion and sediment controls for construction sites. The new models are developed using multiple linear regression (MLR) on event-based permutations of the Universal Soil Loss Equation (USLE) and artificial neural networks (ANN). These models were developed using surface runoff monitoring datasets obtained from three sites in Ontario (Greensborough, Cookstown, and Alcona) and datasets mined from the literature for three additional sites (Treynor, Iowa; Coshocton, Ohio; and Cordoba, Spain). The predictive MLR and ANN models can serve as both diagnostic and design tools for the effective sizing of erosion and sediment controls on active construction sites, and can be used for dynamic scenario forecasting when considering rapidly changing land use conditions during various phases of construction.
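
    A hedged sketch of the MLR ingredient: the multiplicative USLE structure A = R*K*LS*C*P becomes additive under a log transform, so event soil loss can be regressed on log-transformed event factors. The factor ranges and data below are synthetic, not the monitored events used in the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 120
    R = rng.gamma(2.0, 15.0, n)       # event rainfall erosivity
    K = rng.uniform(0.2, 0.5, n)      # soil erodibility
    LS = rng.uniform(0.5, 3.0, n)     # slope length-steepness
    C = rng.uniform(0.5, 1.0, n)      # cover factor (bare soils near 1)

    # Multiplicative USLE with lognormal noise -> linear in log space
    A = R * K * LS * C * np.exp(rng.normal(0.0, 0.3, n))
    X = np.log(np.column_stack([R, K, LS, C]))
    fit = LinearRegression().fit(X, np.log(A))
    print("log-space coefficients (ideally near 1):", fit.coef_.round(2))
    ```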

  8. Lewis base activation of Lewis acids: catalytic, enantioselective vinylogous aldol addition reactions.

    PubMed

    Denmark, Scott E; Heemstra, John R

    2007-07-20

    The generality of Lewis base catalyzed, Lewis acid mediated, enantioselective vinylogous aldol addition reactions has been investigated. The combination of silicon tetrachloride and chiral phosphoramides is a competent catalyst for highly selective additions of a variety of alpha,beta-unsaturated ketone-, 1,3-diketone-, and alpha,beta-unsaturated amide-derived dienolates to aldehydes. These reactions provided high levels of gamma-site selectivity for a variety of substitution patterns on the dienyl unit. Both ketone- and morpholine amide-derived dienol ethers afforded high enantio- and diastereoselectivity in the addition to conjugated aldehydes. Although the alpha,beta-unsaturated ketone-derived dienolates did not react with aliphatic aldehydes, the alpha,beta-unsaturated amide-derived dienolates underwent addition at reasonable rates, affording high yields of the vinylogous aldol product. The enantioselectivities achieved with the morpholine-derived dienolate in the addition to aliphatic aldehydes were the highest afforded to date with the silicon tetrachloride-chiral phosphoramide system. Furthermore, the ability to cleanly convert the morpholine amide to a methyl ketone was demonstrated.

  9. Nonlinear feedback in a six-dimensional Lorenz Model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-03-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in improving the stability of the 6DLM's solutions, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): "If the flap of a butterfly's wings can be instrumental in generating a tornado, it can
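
    The 6DLM equations are not reproduced in this record; as a baseline, the sketch below integrates the classical three-mode Lorenz system, whose chaos onset near rc ~ 24.74 is the 3DLM reference value quoted above (sigma = 10 and b = 8/3 are the standard choices, assumed here).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz3d(t, u, sigma=10.0, r=28.0, b=8.0 / 3.0):
        """Classical 3DLM: the base the 5DLM and 6DLM extend."""
        x, y, z = u
        return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

    for r in (20.0, 28.0):  # below vs. above the 3DLM critical value rc ~ 24.74
        sol = solve_ivp(lorenz3d, (0.0, 50.0), [1.0, 1.0, 1.0],
                        args=(10.0, r), rtol=1e-8)
        print(f"r = {r}: x(t=50) = {sol.y[0, -1]:.3f}")
    ```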

  10. Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-12-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in improving the stability of the 6DLM's solutions, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): "If the flap of a butterfly's wings can be instrumental in generating a tornado, it can

  11. Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling

    ERIC Educational Resources Information Center

    Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.

    2018-01-01

    The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research on and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…

  12. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
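
    A greatly simplified serial 2-D analogue of the capture step in such a cellular automaton is sketched below: each sweep, a liquid cell joins the grain of a randomly chosen solid neighbour. The parallel domain decomposition, nucleation physics, and thermal coupling of the actual model are omitted, and all sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.zeros((60, 60), dtype=int)        # 0 = liquid, >0 = grain id
    for gid in range(1, 6):                     # five random nuclei
        i, j = rng.integers(0, 60, size=2)
        grid[i, j] = gid

    for _ in range(100):                        # synchronous capture sweeps
        new = grid.copy()
        for i, j in np.argwhere(grid == 0):
            neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
            solid = neigh[neigh > 0]
            if solid.size:
                new[i, j] = rng.choice(solid)   # captured by a neighbouring grain
        grid = new

    print("cells per grain:", np.bincount(grid.ravel())[1:])
    ```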

  13. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  14. Modeling the Effect of Geomorphic Change Triggered by Large Wood Addition on Salmon Habitat in a Forested Coastal Watershed

    NASA Astrophysics Data System (ADS)

    Bair, R.; Segura, C.; Lorion, C.

    2015-12-01

    Large wood (LW) additions are often part of fish habitat restorations in the PNW, where historic forest clear-cutting limited natural wood recruitment. These efforts' relative successes are rarely reported in terms of ecological significance to different life stages of fish. Understanding the effectiveness of LW additions will contribute to successfully managing forest land. In this study we quantify the geomorphic change of a restoration project involving LW additions to three alluvial reaches in Mill Creek, OR. The reaches are 110-130 m long, have plane-bed morphology, and drain 2-16 km2. We quantify the change in habitat available to different life stages of coho salmon in terms of velocity (v), shear stress (t), flow depth, and grain size distributions (GSD), considering existing thresholds in the literature for acceptable habitat. Flow conditions before and after LW additions are assessed using a 2D hydrodynamic model (FaSTMECH). Model inputs include detailed channel topography, discharge, and surface GSD. The spatial-temporal variability of sediment transport was also quantified based on the modeled t distributions and the GSD to document changes in the overall geomorphic regime. Initial modeling results for pre-wood conditions show mean t and v values ranging between 0 and 26 N/m2 and between 0 and 2.4 m/s, respectively, for flows up to bankfull (Qbf). The distributions of both t and v become progressively wider and peak at higher values as flow increases, with the notable exception at Qbf, for which the area of low velocity increases noticeably. The spatial distributions of velocity indicate that the extent of suitable habitat for adult coho decreased by 18% between flows of 30 and 55% of Qbf. However, the area of suitable habitat increased by 15% between 0.55Qbf and Qbf as the flow spreads from the channel into the floodplain. We expect the LW will enhance floodplain connectivity, and thus available habitat, by creating additional areas of low v during winter flows.

  15. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
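
    A scalar sketch of the DKF idea: iteratively update a regional emission scale factor from the mismatch between observed and modeled NO2 columns, with a single linear sensitivity standing in for the CAMx/DDM sensitivities. All numbers below are invented for illustration.

    ```python
    def dkf_scale(a0, var_a, obs, sens, var_obs, n_iter=5):
        """Estimate scale factor a so the modeled column sens * a matches obs."""
        a, P = a0, var_a
        for _ in range(n_iter):
            K = P * sens / (sens**2 * P + var_obs)   # Kalman gain
            a += K * (obs - sens * a)                # innovation update
            P *= (1.0 - K * sens)                    # posterior variance
        return a

    # Modeled NO2 column of 1.0 unit per unit emissions; observed 1.4 units
    print(f"posterior scale factor: {dkf_scale(1.0, 0.5, 1.4, 1.0, 0.05):.2f}")
    ```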

  16. A Nonlinear Model for Gene-Based Gene-Environment Interaction.

    PubMed

    Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua

    2016-06-04

    A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant-based approaches, in this work we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for the SNPs in a gene; the effect of each principal component was then modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model is readily interpretable, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systems approach to evaluate gene-based G×E interaction.
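
    A hedged sketch of the VC-sPCR pipeline on synthetic data: sparse principal components of a gene's SNPs, with each component's effect allowed to depend on the environment. The varying coefficient is approximated here by a plain PC-by-environment interaction rather than the paper's nonparametric smoother, and all data are simulated.

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n, p = 500, 20
    snps = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotypes
    env = rng.normal(size=n)                              # environmental exposure

    pcs = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit_transform(snps)

    # Phenotype with a genuine gene-by-environment interaction on PC1
    y = 0.5 * pcs[:, 0] * env + rng.normal(size=n)

    X = np.column_stack([pcs, env, pcs[:, 0] * env, pcs[:, 1] * env])
    fit = LinearRegression().fit(X, y)
    print("PC-by-E interaction coefficients:", fit.coef_[-2:].round(2))
    ```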

  17. Biodegradability study of high-erucic-acid-rapeseed-oil-based lubricant additives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, E.; Crawford, R.L.; Shanahan, A.

    1995-12-31

    A variety of high-erucic-acid-rapeseed (HEAR)-oil-based lubricants, lubricant additives, and greases were examined for biodegradability at the University of Idaho Center for Hazardous Waste Remediation Research. Two standard biodegradability tests were employed, a currently accepted US Environmental Protection Agency (EPA) protocol and the Sturm Test. As is normal for tests that employ variable inocula such as sewage as a source of microorganisms, these procedures yielded variable results from one repetition to another. However, a general trend of rapid and complete biodegradability of the HEAR-oil-based materials was observed.

  18. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared with the use of the continuous predictor as the benchmark, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20, (20, 24], >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically
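
    One way to realize the proposed categorisation is to read cut points off the fitted GAM smooth: predictor values where the smooth's log-odds contribution stays near zero form the average-risk band, and the band edges become cut points. The sketch below does this with pygam on synthetic data; the +/-0.25 band is an illustrative choice, not the paper's rule.

    ```python
    import numpy as np
    from pygam import LogisticGAM, s

    rng = np.random.default_rng(3)
    rr = rng.normal(22.0, 4.0, 1500)                      # respiratory rate
    p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.15 * (rr - 22.0))))
    y = (rng.random(1500) < p).astype(int)                # poor evolution

    gam = LogisticGAM(s(0)).fit(rr.reshape(-1, 1), y)
    grid = np.linspace(rr.min(), rr.max(), 200).reshape(-1, 1)
    pdep = gam.partial_dependence(term=0, X=grid)         # smooth contribution

    inside = np.abs(pdep) < 0.25                          # average-risk band
    cuts = grid[np.flatnonzero(np.diff(inside.astype(int))), 0]
    print("suggested cut points:", np.round(cuts, 1))
    ```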

  19. Metabolic modeling of energy balances in Mycoplasma hyopneumoniae shows that pyruvate addition increases growth rate.

    PubMed

    Kamminga, Tjerko; Slagman, Simen-Jan; Bijlsma, Jetta J E; Martins Dos Santos, Vitor A P; Suarez-Diez, Maria; Schaap, Peter J

    2017-10-01

    Mycoplasma hyopneumoniae is cultured at large scale to produce antigen for inactivated whole-cell vaccines against respiratory disease in pigs. However, the fastidious nutrient requirements of this minimal bacterium and its low growth rate make it challenging to reach sufficient biomass yield for antigen production. In this study, we sequenced the genome of M. hyopneumoniae strain 11 and constructed a high-quality constraint-based genome-scale metabolic model of 284 chemical reactions and 298 metabolites. We validated the model with time-series data of duplicate fermentation cultures, aiming for an integrated model describing the dynamic profiles measured in fermentations. The model predicted that 84% of cellular energy in a standard M. hyopneumoniae cultivation was used for non-growth-associated maintenance and only 16% of cellular energy was used for growth and growth-associated maintenance. Following a cycle of model-driven experimentation in dedicated fermentation experiments, we were able to increase the fraction of cellular energy used for growth through pyruvate addition to the medium. This increase in turn led to an increase in growth rate and a 2.3-fold increase in the total biomass concentration reached after 3-4 days of fermentation, enhancing the productivity of the overall process. The model presented provides a solid basis to understand and further improve M. hyopneumoniae fermentation processes. Biotechnol. Bioeng. 2017;114: 2339-2347. © 2017 Wiley Periodicals, Inc.
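
    The reported energy split has a simple Pirt-style reading: with a fixed non-growth-associated maintenance (NGAM) drain, extra catabolizable substrate raises the fraction of ATP left for growth. The sketch below is only that arithmetic; the flux values are illustrative, not the model's fitted parameters.

    ```python
    def growth_fraction(atp_total: float, ngam: float) -> float:
        """Fraction of cellular ATP flux left for growth after maintenance."""
        return max(0.0, atp_total - ngam) / atp_total

    base = growth_fraction(atp_total=100.0, ngam=84.0)     # the reported 16%
    boosted = growth_fraction(atp_total=140.0, ngam=84.0)  # extra ATP from pyruvate
    print(f"growth share: {base:.0%} -> {boosted:.0%}")
    ```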

  20. Model Scramjet Inlet Unstart Induced by Mass Addition and Heat Release

    NASA Astrophysics Data System (ADS)

    Im, Seong-Kyun; Baccarella, Damiano; McGann, Brendan; Liu, Qili; Wermer, Lydiy; Do, Hyungrok

    2015-11-01

    The inlet unstart phenomena in a model scramjet are investigated in an arc-heated hypersonic wind tunnel. Unstart processes induced by nitrogen or ethylene jets at low- or high-enthalpy Mach 4.5 freestream flow conditions are compared. The jet injection pressurizes the downstream flow by mass addition and flow blockage. In the case of ethylene jet injection, heat release from combustion increases the backpressure further. Time-resolved schlieren imaging is performed at the jet and the lip of the model inlet to visualize the flow features during unstart. High-frequency pressure measurements are used to provide information on pressure fluctuations at the scramjet wall. In both the mass-driven and heat-release-driven unstart cases, similar transient and quasi-steady behaviors of the unstart shockwave system are observed during the unstart processes. Combustion-driven unstart induces severe oscillatory flow motions of the jet and the unstart shock at the lip of the scramjet inlet after the completion of the unstart process, while the unstarted flow induced by mass addition alone remains relatively steady. The discrepancies between the mass-driven and heat-release-driven unstart processes are explained by the flow choking mechanism.

  1. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    PubMed Central

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics for predicting clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448
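
    A toy sketch of the closed-loop idea: a one-compartment stand-in for 6-TGN kinetics plus a grid search for the smallest daily dose whose predicted trough lands in a therapeutic window. The model structure, parameters, and window are illustrative assumptions, not the paper's population model.

    ```python
    import numpy as np

    def predicted_trough(dose, conc0, absorbed=0.3, ke=0.15, days=7):
        """Trough after `days` of once-daily dosing; ke is per-day elimination."""
        c, decay = conc0, np.exp(-ke)
        for _ in range(days):
            c = (c + absorbed * dose) * decay
        return c

    def choose_dose(conc0, window=(1.0, 2.0)):
        """Return the smallest candidate dose keeping the trough in-window."""
        for dose in np.arange(0.0, 3.01, 0.25):
            c = predicted_trough(dose, conc0)
            if window[0] <= c <= window[1]:
                return dose, c
        return None, None

    dose, trough = choose_dose(conc0=0.2)
    print(f"dose = {dose}, predicted trough = {trough:.2f}")
    ```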

  2. In-Situ monitoring and modeling of metal additive manufacturing powder bed fusion

    NASA Astrophysics Data System (ADS)

    Alldredge, Jacob; Slotwinski, John; Storck, Steven; Kim, Sam; Goldberg, Arnold; Montalbano, Timothy

    2018-04-01

    One of the major challenges in metal additive manufacturing is developing in-situ sensing and feedback control capabilities to eliminate build errors and allow qualified part creation without the need for costly and destructive external testing. Previously, many groups have focused on high-fidelity numerical modeling and true-temperature thermal imaging systems. These approaches require large computational resources or costly hardware that requires complex calibration and is difficult to integrate into commercial systems. In addition, due to the rapid change in the state of the material as well as its surface properties, obtaining true temperature measurements is complicated and difficult. Here, we describe a different approach in which we implement a low-cost thermal imaging solution allowing for relative temperature measurements sufficient for detecting unwanted process variability. We pair this with a faster-than-real-time qualitative model that allows the process to be rapidly modeled during the build. The aim is to combine the two so that anomalies can be detected in real time, enabling corrective action to be taken or parts to be stopped immediately after the error, saving material and time. Here we describe our sensor setup, its cost, and its capabilities. We also show the ability to detect unwanted process deviations in real time. We also show that the output of our high-speed model agrees qualitatively with experimental results. These results lay the groundwork for our vision of an integrated feedback and control scheme that combines low-cost, easy-to-use sensors and fast modeling for process-deviation monitoring.

  3. Improving accuracies of genomic predictions for drought tolerance in maize by joint modeling of additive and dominance effects in multi-environment trials.

    PubMed

    Dias, Kaio Olímpio Das Graças; Gezan, Salvador Alejandro; Guimarães, Claudia Teixeira; Nazarian, Alireza; da Costa E Silva, Luciano; Parentoni, Sidney Netto; de Oliveira Guimarães, Paulo Evaristo; de Oliveira Anoni, Carina; Pádua, José Maria Villela; de Oliveira Pinto, Marcos; Noda, Roberto Willians; Ribeiro, Carlos Alexandre Gomes; de Magalhães, Jurandir Vieira; Garcia, Antonio Augusto Franco; de Souza, João Cândido; Guimarães, Lauro José Moreira; Pastina, Maria Marta

    2018-07-01

    Breeding for drought tolerance is a challenging task that requires costly, extensive, and precise phenotyping. Genomic selection (GS) can be used to maximize selection efficiency and the genetic gains in maize (Zea mays L.) breeding programs for drought tolerance. Here, we evaluated the accuracy of genomic selection (GS) using additive (A) and additive + dominance (AD) models to predict the performance of untested maize single-cross hybrids for drought tolerance in multi-environment trials. Phenotypic data of five drought tolerance traits were measured in 308 hybrids along eight trials under water-stressed (WS) and well-watered (WW) conditions over two years and two locations in Brazil. Hybrids' genotypes were inferred based on their parents' genotypes (inbred lines) using single-nucleotide polymorphism markers obtained via genotyping-by-sequencing. GS analyses were performed using genomic best linear unbiased prediction by fitting a factor analytic (FA) multiplicative mixed model. Two cross-validation (CV) schemes were tested: CV1 and CV2. The FA framework allowed for investigating the stability of additive and dominance effects across environments, as well as the additive-by-environment and the dominance-by-environment interactions, with interesting applications for parental and hybrid selection. Results showed differences in the predictive accuracy between A and AD models, using both CV1 and CV2, for the five traits in both water conditions. For grain yield (GY) under WS and using CV1, the AD model doubled the predictive accuracy in comparison to the A model. Through CV2, GS models benefit from borrowing information of correlated trials, resulting in an increase of 40% and 9% in the predictive accuracy of GY under WS for A and AD models, respectively. These results highlight the importance of multi-environment trial analyses using GS models that incorporate additive and dominance effects for genomic predictions of GY under drought in maize single
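
    A hedged sketch of additive-plus-dominance prediction on synthetic genotypes: a VanRaden-style additive relationship matrix, a Su-style dominance matrix, and a direct BLUP solve. The factor-analytic multi-environment structure used in the paper is omitted, and the variance components are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 200, 500
    M = rng.integers(0, 3, size=(n, m)).astype(float)    # 0/1/2 genotypes
    p = M.mean(axis=0) / 2.0                             # allele frequencies

    Z = M - 2.0 * p                                      # additive design
    W = (M == 1).astype(float) - 2.0 * p * (1.0 - p)     # dominance design
    Ga = Z @ Z.T / (2.0 * p * (1.0 - p)).sum()
    Gd = W @ W.T / (2.0 * p * (1.0 - p) * (1.0 - 2.0 * p * (1.0 - p))).sum()

    y = rng.normal(size=n)                               # placeholder phenotypes
    V = 0.3 * Ga + 0.1 * Gd + 0.6 * np.eye(n)            # assumed variance ratios
    u_add = 0.3 * Ga @ np.linalg.solve(V, y - y.mean())  # additive BLUP
    print("predicted additive values (first 5):", u_add[:5].round(2))
    ```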

  4. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image-based rendering is an attractive alternative to model-based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information about the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory, with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information from a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
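
    Plenoptic sampling bounds the tolerable depth error of each layer in disparity (inverse depth), so a natural baseline places layers uniformly in 1/z; the paper then redistributes them non-uniformly to match the scene. Below is a minimal placement helper with an illustrative depth range.

    ```python
    import numpy as np

    def depth_layers(z_min: float, z_max: float, n_layers: int) -> np.ndarray:
        """Layer depths spaced uniformly in inverse depth (disparity)."""
        inv = np.linspace(1.0 / z_min, 1.0 / z_max, n_layers)
        return 1.0 / inv

    print(np.round(depth_layers(2.0, 20.0, 6), 2))
    # layers crowd near the camera, where disparity changes fastest
    ```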

  5. Septic tank additive impacts on microbial populations.

    PubMed

    Pradhan, S; Hoover, M T; Clark, G H; Gumpertz, M; Wollum, A G; Cobb, C; Strock, J

    2008-01-01

    Environmental health specialists, other onsite wastewater professionals, scientists, and homeowners have questioned the effectiveness of septic tank additives. This paper describes an independent, third-party, field-scale research study of the effects of three liquid bacterial septic tank additives and a control (no additive) on septic tank microbial populations. Microbial populations were measured quarterly in a field study for 12 months in 48 full-size, functioning septic tanks. Bacterial populations in the 48 septic tanks were statistically analyzed with a mixed linear model. Additive effects were assessed for three septic tank maintenance levels (low, intermediate, and high). Dunnett's t-test for tank bacteria (alpha = .05) indicated that none of the treatments were significantly different, overall, from the control at the statistical level tested. In addition, the additives had no significant effects on septic tank bacterial populations at any of the septic tank maintenance levels. Additional controlled, field-based research is warranted, however, to address additional additives and experimental conditions.
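
    The many-to-one comparison described here (each additive against the no-additive control) corresponds to Dunnett's test, available in SciPy 1.11+; the data below are synthetic stand-ins for log bacterial counts, not the study's measurements.

    ```python
    import numpy as np
    from scipy.stats import dunnett  # requires SciPy >= 1.11

    rng = np.random.default_rng(5)
    control = rng.normal(7.0, 0.5, 12)                        # no-additive tanks
    additives = [rng.normal(7.0, 0.5, 12) for _ in range(3)]  # three products

    res = dunnett(*additives, control=control)
    print("p-values vs. control:", np.round(res.pvalue, 3))
    ```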

  6. All-Atom Polarizable Force Field for DNA Based on the Classical Drude Oscillator Model

    PubMed Central

    Savelyev, Alexey; MacKerell, Alexander D.

    2014-01-01

    Presented is a first-generation atomistic force field for DNA in which electronic polarization is modeled based on the classical Drude oscillator formalism. The DNA model is based on parameters for small molecules representative of nucleic acids, including alkanes, ethers, dimethylphosphate, and the nucleic acid bases, together with empirical adjustment of key dihedral parameters associated with the phosphodiester backbone, glycosidic linkages and sugar moiety of DNA. Our optimization strategy is based on achieving a compromise between satisfying the properties of the underlying model compounds in the gas phase, targeting QM data, and reproducing a number of experimental properties of DNA duplexes in the condensed phase. The resulting Drude force field yields stable DNA duplexes on the 100 ns time scale and satisfactorily reproduces (1) the equilibrium between A and B forms of DNA and (2) transitions between the BI and BII sub-states of B form DNA. Consistency with the gas phase QM data for the model compounds is significantly better for the Drude model as compared to the CHARMM36 additive force field, which is suggested to be due to the improved response of the model to changes in the environment associated with the explicit inclusion of polarizability. Analysis of dipole moments associated with the nucleic acid bases shows the Drude model to have significantly larger values than those present in CHARMM36, with the dipoles of individual bases undergoing significant variations during the MD simulations. Additionally, the dipole moment of water was observed to be perturbed in the grooves of DNA. PMID:24752978
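
    A minimal sketch of the classical Drude picture underlying such force fields: a charge q_D on a harmonic spring of stiffness k_D attached to an atom yields an induced dipole mu = alpha * E with polarizability alpha = q_D**2 / k_D, obtained by minimizing the spring-plus-field energy. The numbers below are illustrative, not CHARMM Drude parameters.

    ```python
    def drude_response(q_d: float, k_d: float, e_field: float):
        """Induced dipole of a classical Drude oscillator in a uniform field."""
        alpha = q_d**2 / k_d                # polarizability of the spring model
        displacement = q_d * e_field / k_d  # equilibrium core-Drude separation
        dipole = q_d * displacement         # equals alpha * e_field
        return alpha, dipole

    alpha, mu = drude_response(q_d=-1.0, k_d=500.0, e_field=10.0)
    print(f"alpha = {alpha:.4f}, induced dipole = {mu:.3f}")
    ```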

  7. A mechanistic Individual-based Model of microbial communities.

    PubMed

    Jayathilake, Pahala Gedara; Gupta, Prashant; Li, Bowen; Madsen, Curtis; Oyebamiji, Oluwole; González-Cabaleiro, Rebeca; Rushton, Steve; Bridgens, Ben; Swailes, David; Allen, Ben; McGough, A Stephen; Zuliani, Paolo; Ofiteru, Irina Dana; Wilkinson, Darren; Chen, Jinju; Curtis, Tom

    2017-01-01

    Accurate predictive modelling of the growth of microbial communities requires the credible representation of the interactions of biological, chemical and mechanical processes. However, although biological and chemical processes are represented in a number of Individual-based Models (IbMs), the interaction of growth and mechanics is limited. Conversely, there are mechanically sophisticated IbMs with only elementary biology and chemistry. This study focuses on addressing these limitations by developing a flexible IbM that can robustly combine the biological, chemical and physical processes that dictate the emergent properties of a wide range of bacterial communities. This IbM is developed by creating a microbiological adaptation of the open source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). This innovation should provide the basis for "bottom up" prediction of the emergent behaviour of entire microbial systems. In the model presented here, bacterial growth, division, decay, mechanical contact among bacterial cells, and adhesion between the bacteria and extracellular polymeric substances are incorporated. In addition, fluid-bacteria interaction is implemented to simulate biofilm deformation and erosion. The model predicts that the surface morphology of biofilms becomes smoother with increased nutrient concentration, which agrees well with previous literature. In addition, the results show that increased shear rate results in smoother and more compact biofilms. The model can also predict shear rate dependent biofilm deformation, erosion, streamer formation and breakup.

  8. A mechanistic Individual-based Model of microbial communities

    PubMed Central

    Gupta, Prashant; Li, Bowen; Madsen, Curtis; Oyebamiji, Oluwole; González-Cabaleiro, Rebeca; Rushton, Steve; Bridgens, Ben; Swailes, David; Allen, Ben; McGough, A. Stephen; Zuliani, Paolo; Ofiteru, Irina Dana; Wilkinson, Darren; Chen, Jinju; Curtis, Tom

    2017-01-01

    Accurate predictive modelling of the growth of microbial communities requires the credible representation of the interactions of biological, chemical and mechanical processes. However, although biological and chemical processes are represented in a number of Individual-based Models (IbMs), the interaction of growth and mechanics is limited. Conversely, there are mechanically sophisticated IbMs with only elementary biology and chemistry. This study focuses on addressing these limitations by developing a flexible IbM that can robustly combine the biological, chemical and physical processes that dictate the emergent properties of a wide range of bacterial communities. This IbM is developed by creating a microbiological adaptation of the open source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). This innovation should provide the basis for “bottom up” prediction of the emergent behaviour of entire microbial systems. In the model presented here, bacterial growth, division, decay, mechanical contact among bacterial cells, and adhesion between the bacteria and extracellular polymeric substances are incorporated. In addition, fluid-bacteria interaction is implemented to simulate biofilm deformation and erosion. The model predicts that the surface morphology of biofilms becomes smoother with increased nutrient concentration, which agrees well with previous literature. In addition, the results show that increased shear rate results in smoother and more compact biofilms. The model can also predict shear rate dependent biofilm deformation, erosion, streamer formation and breakup. PMID:28771505
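
    The growth/division core of such an IbM fits in a few lines. The toy sketch below is not the authors' LAMMPS-based implementation: it uses assumed Monod parameters and deliberately omits the mechanical contact, adhesion and fluid coupling that the paper adds.

```python
# Toy individual-based loop: Monod growth on one nutrient, division when a
# cell doubles its initial mass. Mechanics and fluid forces are omitted.
import random

MU_MAX, KS = 0.7, 0.2      # assumed Monod parameters (1/h, g/L)
YIELD, DT = 0.5, 0.1       # assumed biomass yield and time step (h)

def step(cells, substrate):
    new_cells = []
    for mass in cells:
        mu = MU_MAX * substrate / (KS + substrate)   # Monod kinetics
        growth = mu * mass * DT
        substrate = max(substrate - growth / YIELD, 0.0)
        mass += growth
        if mass >= 2.0:                              # division threshold
            frac = random.uniform(0.45, 0.55)        # slightly asymmetric split
            new_cells += [mass * frac, mass * (1 - frac)]
        else:
            new_cells.append(mass)
    return new_cells, substrate

cells, s = [1.0] * 10, 5.0
for _ in range(200):
    cells, s = step(cells, s)
print(len(cells), round(s, 3))   # population size and remaining nutrient
```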

  9. Data-driven multi-scale multi-physics models to derive process-structure-property relationships for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Lian, Yanping; Yu, Cheng; Liu, Zeliang; Yan, Jinhui; Wolff, Sarah; Wu, Hao; Ndip-Agbor, Ebot; Mozaffar, Mojtaba; Ehmann, Kornel; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-05-01

    Additive manufacturing (AM) possesses appealing potential for manipulating material compositions, structures and properties in end-use products with arbitrary shapes without the need for specialized tooling. Since the physical process is difficult to experimentally measure, numerical modeling is a powerful tool to understand the underlying physical mechanisms. This paper presents our latest work in this regard based on comprehensive material modeling of process-structure-property relationships for AM materials. The numerous influencing factors that emerge from the AM process motivate the need for novel rapid design and optimization approaches. For this, we propose data-mining as an effective solution. Such methods—used in the process-structure, structure-properties and the design phase that connects them—would allow for a design loop for AM processing and materials. We hope this article will provide a road map to enable AM fundamental understanding for the monitoring and advanced diagnostics of AM processing.

  10. Data-driven multi-scale multi-physics models to derive process-structure-property relationships for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Lian, Yanping; Yu, Cheng; Liu, Zeliang; Yan, Jinhui; Wolff, Sarah; Wu, Hao; Ndip-Agbor, Ebot; Mozaffar, Mojtaba; Ehmann, Kornel; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-01-01

    Additive manufacturing (AM) possesses appealing potential for manipulating material compositions, structures and properties in end-use products with arbitrary shapes without the need for specialized tooling. Since the physical process is difficult to experimentally measure, numerical modeling is a powerful tool to understand the underlying physical mechanisms. This paper presents our latest work in this regard based on comprehensive material modeling of process-structure-property relationships for AM materials. The numerous influencing factors that emerge from the AM process motivate the need for novel rapid design and optimization approaches. For this, we propose data-mining as an effective solution. Such methods—used in the process-structure, structure-properties and the design phase that connects them—would allow for a design loop for AM processing and materials. We hope this article will provide a road map to enable AM fundamental understanding for the monitoring and advanced diagnostics of AM processing.

  11. From in vitro to in vivo: Integration of the virtual cell based assay with physiologically based kinetic modelling.

    PubMed

    Paini, Alicia; Sala Benito, Jose Vicente; Bessems, Jos; Worth, Andrew P

    2017-12-01

    Physiologically based kinetic (PBK) models and the virtual cell based assay can be linked to form so-called physiologically based dynamic (PBD) models. This study illustrates the development and application of a PBK model for prediction of estragole-induced DNA adduct formation and hepatotoxicity in humans. To address the hepatotoxicity, HepaRG cells were used as a surrogate for liver cells, with cell viability being used as the in vitro toxicological endpoint. Information on DNA adduct formation was taken from the literature. Since estragole-induced cell damage is not directly caused by the parent compound, but by a reactive metabolite, information on the metabolic pathway was incorporated into the model. In addition, a user-friendly tool was developed by implementing the PBK/D model into a KNIME workflow. This workflow can be used to perform in vitro to in vivo extrapolation and forward as well as backward dosimetry in support of chemical risk assessment. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
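
    To make the kinetic half of such a PBK/D model concrete, here is a minimal one-compartment sketch with saturable (Michaelis-Menten) hepatic metabolism. All parameter values are invented placeholders, not the estragole model from the paper.

```python
# Minimal kinetic sketch: first-order oral absorption into a well-stirred
# compartment, eliminated by Michaelis-Menten (saturable) metabolism.
import numpy as np
from scipy.integrate import solve_ivp

KA, VMAX, KM, V = 1.2, 8.0, 2.0, 40.0   # invented: 1/h, mg/h, mg/L, L

def rhs(t, y):
    gut, c = y                            # mg in gut, mg/L in plasma
    absorbed = KA * gut                   # mg/h entering circulation
    metabolized = VMAX * c / (KM + c)     # mg/h saturable hepatic clearance
    return [-absorbed, (absorbed - metabolized) / V]

sol = solve_ivp(rhs, (0, 24), [100.0, 0.0], max_step=0.1)
print(round(sol.y[1].max(), 3))  # peak plasma concentration of the parent
```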

  12. A family of triaxial modified Hubble mass models: Effects of the additional radial functions

    NASA Astrophysics Data System (ADS)

    Das, Mousumi; Thakur, Parijat; Ann, H. B.

    2005-03-01

    The projected properties of triaxial generalizations of the modified Hubble mass models are studied. These models are constructed by adding additional radial functions, each multiplied by a low-order spherical harmonic, to the models of [Chakraborty, D.K., Thakur, P., 2000. MNRAS 318, 1273]. The projected surface density of the mass models can be calculated analytically, which allows us to derive analytic expressions for the axial ratio and position angle of the major axis of constant-density elliptical contours at asymptotic radii. The models are more general than those studied earlier in the sense that the inclusion of additional terms in the density distribution allows one to produce a variety of radial profiles of the axial ratio and position angle, in particular their small-scale variations at inner radii. Strong correlations are found to exist between the observed axial ratios evaluated at 0.25Re and at 4Re, which occupy well-separated regions in the parameter space for different choices of the intrinsic axial ratios. These correlations can be exploited to predict the intrinsic shape of the mass model, independent of the viewing angles. Using Bayesian statistics, the result of a test case launched for an estimation of the shape of a model galaxy is found to be satisfactory.
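
    For orientation, the spherical modified Hubble profile and the kind of triaxial extension described above can be written schematically as follows; the radial functions f and the harmonic truncation are illustrative, not the exact parameterization of the paper.

```latex
% Modified Hubble profile on ellipsoidal shells of radius m
\rho(m) = \rho_0 \left(1 + m^2\right)^{-3/2},
\qquad
m^2 = \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2}

% Schematic triaxial generalization: additional radial functions, each
% multiplied by a low-order spherical harmonic
\rho(r,\theta,\varphi) = \rho_0 \left(1 + m^2\right)^{-3/2}
  + \sum_{\ell \le 2} \sum_{|m'| \le \ell} f_{\ell m'}(r)\, Y_{\ell m'}(\theta,\varphi)
```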

  13. Rain water transport and storage in a model sandy soil with hydrogel particle additives.

    PubMed

    Wei, Y; Durian, D J

    2014-10-01

    We study rain water infiltration and drainage in a dry model sandy soil with superabsorbent hydrogel particle additives by measuring the mass of retained water for non-ponding rainfall using a self-built 3D laboratory set-up. In the pure model sandy soil, the retained water curve measurements indicate that instead of a stable horizontal wetting front that grows downward uniformly, a narrow fingered flow forms under the top layer of water-saturated soil. This rain water channelization phenomenon not only further reduces the available rain water in the plant root zone, but also affects the efficiency of soil additives, such as superabsorbent hydrogel particles. Our studies show that the shape of the retained water curve for a soil packing with hydrogel particle additives strongly depends on the location and the concentration of the hydrogel particles in the model sandy soil. By carefully choosing the particle size and distribution methods, we may use the swollen hydrogel particles to modify the soil pore structure, to clog or extend the water channels in sandy soils, or to build water reservoirs in the plant root zone.

  14. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers

    PubMed Central

    Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.

    2018-01-01

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies of ultra-high-density SNP data sets. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. PMID:29549092

  15. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers.

    PubMed

    Jiang, Yong; Schmidt, Renate H; Reif, Jochen C

    2018-05-04

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies of ultra-high-density SNP data sets. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. Copyright © 2018 Jiang et al.
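
    A compact way to see how haplotype "alleles" enter such a model: treat each distinct haplotype within a block as one level of a factor, encode its dosage, and fit a ridge (GBLUP-like) regression. The sketch below is a schematic stand-in for the HGBLUP idea, with made-up phased genotypes and a toy phenotype.

```python
# Schematic HGBLUP-style encoding: distinct haplotypes within each block
# become 'alleles'; ridge regression on haplotype dosages then implicitly
# captures local (within-block) epistasis among the underlying markers.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, blocks, snps_per_block = 50, 4, 3
# phased haplotypes: shape (individuals, 2 chromosomes, blocks, SNPs/block)
H = rng.integers(0, 2, size=(n, 2, blocks, snps_per_block))

cols = []
for b in range(blocks):
    # map each observed SNP string inside block b to a haplotype id
    seen = {tuple(x) for x in H[:, :, b, :].reshape(-1, snps_per_block)}
    labels = {h: i for i, h in enumerate(sorted(seen))}
    counts = np.zeros((n, len(labels)))
    for i in range(n):
        for chrom in range(2):
            counts[i, labels[tuple(H[i, chrom, b, :])]] += 1
    cols.append(counts)

X = np.hstack(cols)                       # haplotype-dosage design matrix
y = X @ rng.normal(size=X.shape[1]) + rng.normal(size=n)  # toy phenotype
print(Ridge(alpha=1.0).fit(X, y).score(X, y))
```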

  16. Physiologically Based Pharmacokinetic Modeling Suggests Limited Drug–Drug Interaction Between Clopidogrel and Dasabuvir

    PubMed Central

    Fu, W; Badri, P; Bow, DAJ; Fischer, V

    2017-01-01

    Dasabuvir, a nonnucleoside NS5B polymerase inhibitor, is a sensitive substrate of cytochrome P450 (CYP) 2C8 with a potential for drug–drug interaction (DDI) with clopidogrel. A physiologically based pharmacokinetic (PBPK) model was developed for dasabuvir to evaluate the DDI potential with clopidogrel, the acyl‐β‐D glucuronide metabolite of which has been reported as a strong mechanism‐based inhibitor of CYP2C8 based on an interaction with repaglinide. In addition, the PBPK model for clopidogrel and its metabolite were updated with additional in vitro data. Sensitivity analyses using these PBPK models suggested that CYP2C8 inhibition by clopidogrel acyl‐β‐D glucuronide may not be as potent as previously suggested. The dasabuvir and updated clopidogrel PBPK models predict a moderate increase of 1.5–1.9‐fold for Cmax and 1.9–2.8‐fold for AUC of dasabuvir when coadministered with clopidogrel. While the PBPK results suggest there is a potential for DDI between dasabuvir and clopidogrel, the magnitude is not expected to be clinically relevant. PMID:28411400

  17. Improved Modeling in a Matlab-Based Navigation System

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Larimore, Wallace E.

    1999-01-01

    An innovative approach to autonomous navigation is available for low earth orbit satellites. The system is developed in Matlab and utilizes an Extended Kalman Filter (EKF) to estimate the attitude and trajectory based on spacecraft magnetometer and gyro data. Preliminary tests of the system with real spacecraft data from the Rossi X-Ray Timing Explorer Satellite (RXTE) indicate the existence of unmodeled errors in the magnetometer data. Incorporating into the EKF a statistical model that describes the colored component of the effective measurement of the magnetic field vector could improve the accuracy of the trajectory and attitude estimates and also improve the convergence time. This model is identified as a first-order Markov process. With the addition of the model, the EKF attempts to identify the non-white components of the noise, allowing for more accurate estimation of the original state vector, i.e. the orbital elements and the attitude. Working in Matlab allows for easy incorporation of new models into the EKF, and the resulting navigation system is generic and can easily be applied to future missions, providing an alternative for onboard or ground-based navigation.
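
    The colored-noise idea generalizes: a first-order Markov (Gauss-Markov) bias is appended to the state so the filter estimates it alongside the original states. The generic sketch below shows only the augmented system matrices; it is not the RXTE flight code, and F, H and the correlation time are placeholders.

```python
# Generic sketch: augment a linear(ized) state model with a first-order
# Markov measurement bias b_k, so an (E)KF can estimate it jointly.
#   x_{k+1} = F x_k + w_k,  b_{k+1} = phi*b_k + v_k,  z_k = H x_k + b_k + n_k
import numpy as np

def augment(F, H, phi):
    n = F.shape[0]
    Fa = np.block([[F, np.zeros((n, 1))],
                   [np.zeros((1, n)), np.array([[phi]])]])
    Ha = np.hstack([H, np.ones((H.shape[0], 1))])  # bias adds to measurement
    return Fa, Ha

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # placeholder dynamics
H = np.array([[1.0, 0.0]])               # placeholder measurement
phi = np.exp(-0.1 / 5.0)                 # exp(-dt/tau), assumed values
Fa, Ha = augment(F, H, phi)
print(Fa.shape, Ha.shape)                # (3, 3) (1, 3)
```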

  18. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and the regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions, which can be used for modeling ionospheric parameters such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, used to obtain a high spatial and spectral resolution VTEC product. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach, based on a global and a regional model part, i.e. the consideration of a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in latitude direction and trigonometric B-splines in longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for both latitude and longitude.

  19. Marginal regression approach for additive hazards models with clustered current status data.

    PubMed

    Su, Pei-Fang; Chi, Yunchan

    2014-01-15

    Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics and demographic and sociology studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of survival function converges at a rate of n^{1/3} to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive the test statistics, whose distributions converge at a rate of n^{1/2}, for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated. Copyright © 2013 John Wiley & Sons, Ltd.
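
    For reference, the additive hazards model underlying these estimating functions specifies the conditional hazard as a baseline plus a linear covariate term, in contrast to the multiplicative (Cox) form:

```latex
% Additive hazards (Lin-Ying) model: baseline hazard plus linear covariates
\lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t)
```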

  20. Mathematical modeling of acid-base physiology

    PubMed Central

    Occhipinti, Rossana; Boron, Walter F.

    2015-01-01

    pH is one of the most important parameters in life, influencing virtually every biological process at the cellular, tissue, and whole-body level. Thus, for cells, it is critical to regulate intracellular pH (pHi) and, for multicellular organisms, to regulate extracellular pH (pHo). pHi regulation depends on the opposing actions of plasma-membrane transporters that tend to increase pHi, and others that tend to decrease pHi. In addition, passive fluxes of uncharged species (e.g., CO2, NH3) and charged species (e.g., HCO3−, NH4+) perturb pHi. These movements not only influence one another, but also perturb the equilibria of a multitude of intracellular and extracellular buffers. Thus, even at the level of a single cell, perturbations in acid-base reactions, diffusion, and transport are so complex that it is impossible to understand them without a quantitative model. Here we summarize some mathematical models developed to shed light on the complex interconnected events triggered by acid-base movements. We then describe a mathematical model of a spherical cell that our team has recently developed, which to our knowledge is the first capable of handling a multitude of buffer reactions, to simulate changes in pHi and pHo caused by movements of acid-base equivalents across the plasma membrane of a Xenopus oocyte. Finally, we extend our work to a consideration of the effects of simultaneous CO2 and HCO3− influx into a cell, and envision how future models might extend to other cell types (e.g., erythrocytes) or tissues (e.g., renal proximal-tubule epithelium) important for whole-body pH homeostasis. PMID:25617697
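
    A concrete instance of the buffer equilibria such models track is the CO2/HCO3− pair, whose pH follows the Henderson-Hasselbalch relation. The sketch below evaluates it with typical arterial values (pK' of about 6.1 and a CO2 solubility of about 0.03 mmol/L per mmHg).

```python
# Henderson-Hasselbalch relation for the CO2/bicarbonate buffer:
#   pH = pK' + log10([HCO3-] / (s * pCO2)),  pK' ~ 6.1, s ~ 0.03 mmol/L/mmHg
import math

def ph(hco3_mm, pco2_mmhg, pk=6.1, s=0.03):
    return pk + math.log10(hco3_mm / (s * pco2_mmhg))

print(round(ph(24.0, 40.0), 2))  # typical arterial values -> ~7.40
```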

  1. Vocational High School Students’ Creativity in Food Additives with Problem Based Learning Approach

    NASA Astrophysics Data System (ADS)

    Ratnasari, D.; Supriyanti, T.; Rosbiono, M.

    2017-09-01

    The aim of this study was to verify the creativity of vocational students through a Problem Based Learning approach in the topic of food additives. The method was a quasi-experiment with a one-group posttest design. The research subjects were 32 grade XII students in a chemical analysis course at a vocational high school in Bandung city. The creativity instruments were an essay test, a Student Worksheet, and observation sheets. The creativity measured included creative thinking skills and creative action skills, and the results showed that both are good. The research showed that the Problem Based Learning approach can be applied to develop the creativity of vocational students in the topic of food additives, because the students are given the opportunity to determine their own experimental procedure. It is recommended to implement the Problem Based Learning approach often in other chemical concepts so that students' creativity is sustained.

  2. Regression analysis of clustered failure time data with informative cluster size under the additive transformation models.

    PubMed

    Chen, Ling; Feng, Yanqin; Sun, Jianguo

    2017-10-01

    This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in situations both with and without informative cluster size. The approaches are then applied to the dental study that motivated this work.

  3. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block approach to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  4. Analysis of habitat characteristics of small pelagic fish based on generalized additive models in Kepulauan Seribu Waters

    NASA Astrophysics Data System (ADS)

    Rivai, A. A.; Siregar, V. P.; Agus, S. B.; Yasuma, H.

    2018-03-01

    Information on the habitat characteristics of a fish species is required for sustainable fisheries management. This information can be used to map the distribution of fish and to map potential fishing grounds. This study aimed to analyze the habitat characteristics of the small pelagic fishes (anchovy, squid, sardine and scads) mainly caught by lift net in Kepulauan Seribu waters. Research on habitat characteristics has been widely conducted, but the use of the total suspended solid (TSS) parameter in such analyses is still lacking. The TSS parameter, extracted from Landsat 8, along with five other oceanographic parameters, CPUE data and fishing ground locations from lift net fisheries in Kepulauan Seribu, was included in this analysis. The analysis used Generalized Additive Models (GAMs) to evaluate the relationship between CPUE and the oceanographic parameters. The results showed that each fish species had different habitat characteristics. TSS and sea surface height had a great influence on the CPUE of each species, and all the oceanographic parameters affected the CPUE of each species. This study demonstrates the effective use of GAMs to identify the essential habitat of a fish species.
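
    A GAM of this kind is straightforward to reproduce. The sketch below assumes the third-party pygam package and invented CPUE/TSS/SSH data; it fits smooth terms analogous in spirit to those in the paper, not the paper's actual model.

```python
# GAM sketch relating CPUE to two oceanographic covariates with smooth
# terms (synthetic data; pip install pygam for the third-party package).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
tss = rng.uniform(5, 60, 300)           # total suspended solid, mg/L
ssh = rng.uniform(-0.2, 0.2, 300)       # sea surface height, m
cpue = np.exp(-((tss - 25) / 15) ** 2) + 0.5 * ssh + rng.normal(0, 0.1, 300)

gam = LinearGAM(s(0) + s(1)).fit(np.column_stack([tss, ssh]), cpue)
gam.summary()   # prints effective dof and significance of each smooth
```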

  5. Modeling of Micro Deval abrasion loss based on some rock properties

    NASA Astrophysics Data System (ADS)

    Capik, Mehmet; Yilmaz, Ali Osman

    2017-10-01

    Aggregate is one of the most widely used construction materials. The quality of an aggregate is determined using several testing methods; among these, the Micro Deval Abrasion Loss (MDAL) test is commonly used to determine the quality and abrasion resistance of aggregate. The main objective of this study is to develop models for the prediction of MDAL from rock properties; uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness, apparent porosity, void ratio, Cerchar abrasivity index and Bohme abrasion loss are examined. Additionally, the MDAL is modeled using simple regression analysis and multiple linear regression analysis based on the rock properties. The study shows that the MDAL decreases with increasing uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness and Cerchar abrasivity index, and increases with increasing apparent porosity, void ratio and Bohme abrasion loss. The modeling results show that the models based on the Bohme abrasion loss and the L-type Schmidt rebound hardness give the best forecasting performance for the MDAL. Further models, including the uniaxial compressive strength, the apparent porosity and the Cerchar abrasivity index, are developed for rapid estimation of the MDAL of rocks. The developed models were verified by statistical tests, and it can be stated that the proposed models can be used for forecasting aggregate quality.

  6. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
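
    The boosting-for-quantiles idea can be demoed with off-the-shelf tools. The sketch below uses scikit-learn's gradient boosting with a quantile loss on synthetic age/BMI data; it is a generic analogue of, not a substitute for, the structured additive algorithm proposed in the paper (no individual-specific effects).

```python
# Gradient boosting of the 90th BMI percentile as a function of age:
# a generic stand-in for boosted quantile regression (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
age = rng.uniform(0, 10, 1000)
bmi = 15 + 0.4 * age + rng.gamma(2.0, 0.8, 1000)   # right-skewed, like BMI

model = GradientBoostingRegressor(loss="quantile", alpha=0.9, n_estimators=300)
model.fit(age.reshape(-1, 1), bmi)
print(model.predict(np.array([[2.0], [8.0]])))     # upper-quantile age curve
```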

  7. Modeling of digital mammograms using bicubic spline functions and additive noise

    NASA Astrophysics Data System (ADS)

    Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne

    1998-09-01

    The purpose of our work is microcalcification detection in digital mammograms. To do so, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and an additive noise or texture term. We also introduce a robust estimation method in order to overcome the bias introduced by the microcalcifications. After the estimation, we treat the subtraction image values as noise. If the noise is not correlated, we fit its probability distribution with the Pearson system of densities. This allows us to threshold the subtraction images accurately and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least squares method. We then consider non-overlapping windows on the residue image. In each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is definitely more time-consuming than the automatic threshold assuming uncorrelated noise and does not lead to significantly better results. In conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on the Pearson system performs quite well on most of our images.

  8. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and thus potentially real-time implementable.
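
    The core of geometric hashing is small enough to sketch: every ordered point pair defines a basis, the remaining points get basis-invariant coordinates, and a hash table votes for (model, basis) pairs at recognition time. The 2D, similarity-invariant sketch below is illustrative only, not the SAR-specific 3D variant described above.

```python
# 2D geometric hashing sketch: invariant coordinates of each point in the
# frame of an ordered basis pair -> hash table of (model, basis) entries.
import numpy as np
from collections import defaultdict
from itertools import permutations

def invariant(p, b0, b1):
    """Coordinates of p in the frame where b0 -> (0,0) and b1 -> (1,0)."""
    u = b1 - b0
    v = np.array([-u[1], u[0]])               # perpendicular axis
    d = p - b0
    s = u @ u
    return (round((d @ u) / s, 1), round((d @ v) / s, 1))  # quantized key

def build_table(models):
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):      # every basis
            for k, p in enumerate(pts):
                if k not in (i, j):
                    table[invariant(p, pts[i], pts[j])].append((name, i, j))
    return table

def recognize(table, scene):
    votes = defaultdict(int)
    b0, b1 = scene[0], scene[1]               # try one scene basis
    for p in scene[2:]:
        for entry in table.get(invariant(p, b0, b1), []):
            votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

models = {"target": np.array([[0, 0], [2, 0], [1, 1], [0.5, 1.5]])}
table = build_table(models)
print(recognize(table, models["target"] * 3.0 + 1.0))  # scaled + shifted
```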

  9. FlyBase: genes and gene models

    PubMed Central

    Drysdale, Rachel A.; Crosby, Madeline A.

    2005-01-01

    FlyBase (http://flybase.org) is the primary repository of genetic and molecular data of the insect family Drosophilidae. For the most extensively studied species, Drosophila melanogaster, a wide range of data are presented in integrated formats. Data types include mutant phenotypes, molecular characterization of mutant alleles and aberrations, cytological maps, wild-type expression patterns, anatomical images, transgenic constructs and insertions, sequence-level gene models and molecular classification of gene product functions. There is a growing body of data for other Drosophila species; this is expected to increase dramatically over the next year, with the completion of draft-quality genomic sequences of an additional 11 Drosophila species. PMID:15608223

  10. Designing Location-Based Learning Experiences for People with Intellectual Disabilities and Additional Sensory Impairments

    ERIC Educational Resources Information Center

    Brown, David J.; McHugh, David; Standen, Penny; Evett, Lindsay; Shopland, Nick; Battersby, Steven

    2011-01-01

    The research reported here is part of a larger project which seeks to combine serious games (or games-based learning) with location-based services to help people with intellectual disabilities and additional sensory impairments to develop work based skills. Specifically this paper reports on where these approaches are combined to scaffold the…

  11. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Modeling languages for biochemical network simulation: reaction vs equation based approaches.

    PubMed

    Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya

    2010-01-01

    Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulation and analysis of models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica have been developed in the 1990s. Modelica enables an equation based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction based approach of SBML with the equation based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility for constraint specification, different modeling flavors, hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.

  13. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    To simulate the hydrological processes in semi-arid areas properly is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that the parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is the infiltration excess surface runoff, a non-linear component.

  14. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is H-H model complexity, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems as well. PMID:25484854

  15. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is H-H model complexity, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems as well.
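
    For readers unfamiliar with the underlying model, a minimal single-neuron H-H reference implementation (standard textbook squid-axon parameters, simple Euler integration, no CORDIC or FPGA specifics) looks like this:

```python
# Single Hodgkin-Huxley neuron, forward-Euler integration with classic
# parameters; the FPGA work replaces these float ops with fixed-point
# CORDIC-based arithmetic.
import numpy as np

C, GNA, GK, GL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
ENA, EK, EL = 50.0, -77.0, -54.387            # mV

def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)
def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))

dt, v, n, m, h = 0.01, -65.0, 0.32, 0.05, 0.6
spikes = 0
for step in range(int(100 / dt)):             # 100 ms at 10 uA/cm^2 input
    i_ion = GNA * m**3 * h * (v - ENA) + GK * n**4 * (v - EK) + GL * (v - EL)
    v_new = v + dt * (10.0 - i_ion) / C
    n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
    spikes += v < 0 <= v_new                  # count upward zero crossings
    v = v_new
print(spikes, "spikes in 100 ms")
```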

  16. Model-Based Sensor-Augmented Pump Therapy

    PubMed Central

    Grosman, Benyamin; Voskanyan, Gayane; Loutseiko, Mikhail; Roy, Anirban; Mehta, Aloke; Kurtz, Natalie; Parikh, Neha; Kaufman, Francine R.; Mastrototaro, John J.; Keenan, Barry

    2013-01-01

    Background In insulin pump therapy, optimization of bolus and basal insulin dose settings is a challenge. We introduce a new algorithm that provides individualized basal rates and new carbohydrate ratio and correction factor recommendations. The algorithm utilizes a mathematical model of blood glucose (BG) as a function of carbohydrate intake and delivered insulin, which includes individualized parameters derived from sensor BG and insulin delivery data downloaded from a patient’s pump. Methods A mathematical model of BG as a function of carbohydrate intake and delivered insulin was developed. The model includes fixed parameters and several individualized parameters derived from the subject’s BG measurements and pump data. Performance of the new algorithm was assessed using n = 4 diabetic canine experiments over a 32 h duration. In addition, 10 in silico adults from the University of Virginia/Padova type 1 diabetes mellitus metabolic simulator were tested. Results The percentage of time in glucose range 80–180 mg/dl was 86%, 85%, 61%, and 30% using model-based therapy and [78%, 100%] (brackets denote multiple experiments conducted under the same therapy and animal model), [75%, 67%], 47%, and 86% for the control experiments for dogs 1 to 4, respectively. The BG measurements obtained in the simulation using our individualized algorithm were within a 61–231 mg/dl min–max envelope, whereas use of the simulator’s default treatment resulted in BG measurements within a 90–210 mg/dl min–max envelope. Conclusions The study results demonstrate the potential of this method, which could serve as a platform for improving, facilitating, and standardizing insulin pump therapy based on a single download of data. PMID:23567006

  17. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants

    PubMed Central

    Yu, Zheng-Yong; Liu, Qiang; Liu, Yunhan

    2017-01-01

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487

  18. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-08-09

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi-Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings.

  19. Multi-allelic haplotype model based on genetic partition for genomic prediction and variance component estimation using SNP markers.

    PubMed

    Da, Yang

    2015-12-18

    The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. Genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly use haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype
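
    The additive/dominance partition described above can be made concrete with a small constructor: for a locus with h haplotype alleles, additive values use h - 1 effect columns and dominance values use one column per heterozygous pair, h(h - 1)/2 in total. The encoding below is one schematic realization of that counting argument, not necessarily the exact model matrices of the paper.

```python
# Schematic design matrices for one multi-allelic haplotype locus:
# h - 1 additive columns, h*(h - 1)/2 dominance columns.
import numpy as np
from itertools import combinations

def design(genotypes, h):
    """genotypes: list of (allele_i, allele_j) pairs, alleles in 0..h-1."""
    n = len(genotypes)
    Xa = np.zeros((n, h - 1))              # allele h-1 is the reference
    het = list(combinations(range(h), 2))  # all heterozygous pairs
    Xd = np.zeros((n, len(het)))
    for r, (i, j) in enumerate(genotypes):
        for a in (i, j):
            if a < h - 1:
                Xa[r, a] += 1              # additive dosage of each allele
        if i != j:
            Xd[r, het.index((min(i, j), max(i, j)))] = 1
    return Xa, Xd

Xa, Xd = design([(0, 0), (0, 1), (1, 2), (2, 2)], h=3)
print(Xa)   # shape (4, 2): dosages of alleles 0 and 1
print(Xd)   # shape (4, 3): indicators for pairs (0,1), (0,2), (1,2)
```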

  20. The Martian Water Cycle Based on 3-D Modeling

    NASA Technical Reports Server (NTRS)

    Houben, H.; Haberle, R. M.; Joshi, M. M.

    1999-01-01

    Understanding the distribution of Martian water is a major goal of the Mars Surveyor program. However, until the bulk of the data from the nominal missions of TES, PMIRR, GRS, MVACS, and the DS2 probes are available, we are bound to be in a state where much of our knowledge of the seasonal behavior of water is based on theoretical modeling. We therefore summarize the results of this modeling at the present time. The most complete calculations come from a somewhat simplified treatment of the Martian climate system which is capable of simulating many decades of weather. More elaborate meteorological models are now being applied to study of the problem. The results show a high degree of consistency with observations of aspects of the Martian water cycle made by Viking MAWD, a large number of ground-based measurements of atmospheric column water vapor, studies of Martian frosts, and the widespread occurrence of water ice clouds. Additional information is contained in the original extended abstract.

  1. A heterobimetallic Ga/Yb-Schiff base complex for catalytic asymmetric alpha-addition of isocyanides to aldehydes.

    PubMed

    Mihara, Hisashi; Xu, Yingjie; Shepherd, Nicholas E; Matsunaga, Shigeki; Shibasaki, Masakatsu

    2009-06-24

    Development of a new heterobimetallic Ga(O-iPr)3/Yb(OTf)3/Schiff base 2d complex for catalytic asymmetric alpha-additions of isocyanides to aldehydes is described. Schiff base 2d, derived from o-vanillin, was suitable for utilizing cationic rare earth metal triflates with good Lewis acidity in bimetallic Schiff base catalysis. The Ga(O-iPr)3/Yb(OTf)3/Schiff base 2d complex promoted asymmetric alpha-additions of alpha-isocyanoacetamides to aryl, heteroaryl, alkenyl, and alkyl aldehydes in good to excellent enantioselectivity (88-98% ee).

  2. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: the mean information gain, which measures the randomness of the signal; the effective measure complexity, which characterizes predictability; and the fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: watersheds served as information filters, so streamflow time series were less random and more complex than those of precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of the models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
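
    The symbolization step and one of the metrics can be sketched compactly: discretize flow by quantiles, then estimate the mean information gain as the conditional entropy of the next symbol given the preceding block. This is a generic reading of those metrics (precise definitions vary across the literature), run on synthetic data.

```python
# Quantile symbolization of a flow series plus a block-entropy estimate of
# mean information gain, H(blocks of L+1) - H(blocks of L); synthetic data.
import numpy as np
from collections import Counter

def symbolize(x, n_symbols=4):
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)          # symbols 0..n_symbols-1

def block_entropy(sym, L):
    words = Counter(tuple(sym[i:i + L]) for i in range(len(sym) - L + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(11)
flow = np.convolve(rng.gamma(2.0, 1.0, 3000), np.ones(5) / 5, mode="valid")
sym = symbolize(flow)
mig = block_entropy(sym, 3) - block_entropy(sym, 2)   # bits per symbol
print(round(mig, 3))
```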

  3. Biocontrol of Listeria monocytogenes in a meat model using a combination of a bacteriocinogenic strain with curing additives.

    PubMed

    Orihuel, Alejandra; Bonacina, Julieta; Vildoza, María José; Bru, Elena; Vignolo, Graciela; Saavedra, Lucila; Fadda, Silvina

    2018-05-01

    The aim of this work was to evaluate the effect of meat curing agents on the bioprotective activity of the bacteriocinogenic strain Enterococcus (E.) mundtii CRL35 against Listeria (L.) monocytogenes during meat fermentation. The ability of E. mundtii CRL35 to grow, acidify and produce bacteriocin in situ was assayed in a meat model system in the presence of curing additives (CA). E. mundtii CRL35 showed optimal growth and acidification rates in the presence of CA. More importantly, the highest bacteriocin titer was achieved in the presence of these food agents. In addition, the CA produced a statistically significant enhancement of the enterocin CRL35 activity. This positive effect was demonstrated in vitro in a meat-based culture medium, by time-kill kinetics, and finally in a beaker sausage model with a challenge experiment against the pathogenic L. monocytogenes FBUNT strain. E. mundtii CRL35 was found to be a promising strain for use as a safety adjunct culture in the meat industry and a novel functional supplement for sausage fermentation, ensuring hygiene and quality of the final product. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  5. PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR HUMAN EXPOSURES TO METHYL TERTIARY-BUTYL ETHER

    EPA Science Inventory

    Humans can be exposed by inhalation, ingestion, or dermal absorption to methyl tertiary-butyl ether (MTBE), an oxygenated fuel additive, from contaminated water sources. The purpose of this research was to develop a physiologically based pharmacokinetic model describing in human...

  6. Customer-centered careflow modeling based on guidelines.

    PubMed

    Huang, Biqing; Zhu, Peng; Wu, Cheng

    2012-10-01

    In contemporary society, customer-centered health care, which stresses customer participation and long-term tailored care, is inevitably becoming a trend. Compared with the hospital- or physician-centered healthcare process, the customer-centered healthcare process requires more knowledge, and modeling such a process is extremely complex. Thus, building a care process model for an individual customer is cost-prohibitive. In addition, during the execution of a care process model, the information system should have the flexibility to modify the model so that it adapts to changes in the healthcare process. Therefore, supporting the process in a flexible, cost-effective way is a key challenge for information technology. To meet this challenge, first, we analyze the various kinds of knowledge used in process modeling, illustrate their characteristics, and detail their roles and effects in careflow modeling. Secondly, we propose a methodology to manage the lifecycle of healthcare process modeling, with which models can be built gradually, conveniently, and efficiently. In this lifecycle, different levels of process models are established based on the kinds of knowledge involved, and the diffusion strategy of these process models is designed. Thirdly, the architecture and a prototype of the system supporting the process modeling and its lifecycle are given. This careflow system also considers compatibility with legacy systems and authority problems. Finally, an example is provided to demonstrate implementation of the careflow system.

  7. Can ligand addition to soil enhance Cd phytoextraction? A mechanistic model study.

    PubMed

    Lin, Zhongbing; Schneider, André; Nguyen, Christophe; Sterckeman, Thibault

    2014-11-01

    Phytoextraction is a potential method for cleaning Cd-polluted soils. Ligand addition to soil is expected to enhance Cd phytoextraction. However, experimental results show that this addition has contradictory effects on plant Cd uptake. A mechanistic model simulating the reaction kinetics (adsorption on the solid phase, complexation in solution), transport (convection, diffusion) and root absorption (symplastic, apoplastic) of Cd and its complexes in soil was developed. This was used to calculate plant Cd uptake with and without ligand addition in a large number of combinations of soil, ligand and plant characteristics, varying the parameters within defined domains. Ligand addition generally strongly reduced the hydrated Cd (Cd(2+)) concentration in soil solution through Cd complexation. Dissociation of the Cd complex could not compensate for this reduction, which greatly lowered Cd(2+) symplastic uptake by roots. The apoplastic uptake of the complexed Cd was not sufficient to compensate for the decrease in symplastic uptake. This explains why, in the majority of cases, ligand addition resulted in a reduction of the simulated Cd phytoextraction. A few results showed an enhanced phytoextraction under very particular conditions (strong plant transpiration with high apoplastic Cd uptake capacity), but this enhancement was very limited, making chelant-enhanced phytoextraction poorly efficient for Cd.

  8. Enhancing fullerene-based solar cell lifetimes by addition of a fullerene dumbbell.

    PubMed

    Schroeder, Bob C; Li, Zhe; Brady, Michael A; Faria, Gregório Couto; Ashraf, Raja Shahid; Takacs, Christopher J; Cowart, John S; Duong, Duc T; Chiu, Kar Ho; Tan, Ching-Hong; Cabral, João T; Salleo, Alberto; Chabinyc, Michael L; Durrant, James R; McCulloch, Iain

    2014-11-17

    Cost-effective, solution-processable organic photovoltaics (OPV) present an interesting alternative to inorganic silicon-based solar cells. However, one of the major remaining challenges of OPV devices is their lack of long-term operational stability, especially at elevated temperatures. The synthesis of a fullerene dumbbell and its use as an additive in the active layer of a PCDTBT:PCBM-based OPV device is reported. The addition of only 20 % of this novel fullerene not only leads to improved device efficiencies, but more importantly also to a dramatic increase in morphological stability under simulated operating conditions. Dynamic secondary ion mass spectrometry (DSIMS) and TEM are used, amongst other techniques, to elucidate the origins of the improved morphological stability. © 2014 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  9. Web-based reactive transport modeling using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoir-overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. The web application comprises (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the

  10. Improved spring model-based collaborative indoor visible light positioning

    NASA Astrophysics Data System (ADS)

    Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu

    2016-06-01

    Gaining accuracy in the indoor positioning of individuals is important, as many location-based services rely on the user's current position to provide useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth. However, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simple-to-implement collaborative indoor visible light positioning system based on an improved spring model. We first estimate coordinate position information using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be deployed easily because it does not require additional sensors, and the occlusion problem of visible light is alleviated. We also describe simulation experiments, which confirm the feasibility of our proposed method.
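
    The correction step lends itself to a simple force-relaxation loop. The sketch below is a generic spring-model iteration under our own assumptions (Hooke's-law forces, measured inter-receiver distances as rest lengths); the paper's "improved" variant and its parameters are not reproduced here:

        import numpy as np

        def spring_correct(positions, dist, k=0.1, iters=200):
            """positions: (n, 2) initial VLP estimates; dist: (n, n) measured distances."""
            pos = positions.astype(float).copy()
            n = len(pos)
            for _ in range(iters):
                force = np.zeros_like(pos)
                for i in range(n):
                    for j in range(n):
                        if i == j:
                            continue
                        d = pos[j] - pos[i]
                        length = np.linalg.norm(d) + 1e-12
                        # spring pulls/pushes the pair toward the measured distance
                        force[i] += k * (length - dist[i, j]) * d / length
                pos += force
            return pos

    Each receiver's light-based estimate seeds a node, and the relaxation distributes the individual positioning errors across the network of collaborating receivers.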

  11. Quasi-additive estimates on the Hamiltonian for the one-dimensional long range Ising model

    NASA Astrophysics Data System (ADS)

    Littin, Jorge; Picco, Pierre

    2017-07-01

    In this work, we study the problem of obtaining quasi-additive bounds for the Hamiltonian of the long range Ising model, when the two-body interaction term decays proportionally to 1/d^(2-α), α ∈ (0,1). We revisit the paper by Cassandro et al. [J. Math. Phys. 46, 053305 (2005)] where they extend to the case α ∈ [0, ln3/ln2 - 1) the result of the existence of a phase transition by using a Peierls argument given by Fröhlich and Spencer [Commun. Math. Phys. 84, 87-101 (1982)] for α = 0. The main arguments of Cassandro et al. are based on a quasi-additive decomposition of the Hamiltonian in terms of hierarchical structures called triangles and contours, which are related to the original definition of contours introduced by Fröhlich and Spencer. In this work, we study the existence of a quasi-additive decomposition of the Hamiltonian in terms of the contours defined by Cassandro et al. The most relevant result obtained is Theorem 4.3, where we show that there is a quasi-additive decomposition for the Hamiltonian in terms of contours when α ∈ [0,1), but not in terms of triangles. The fact that there cannot be a quasi-additive bound in terms of triangles leads to a very interesting maximization problem whose maximizer is related to a discrete Cantor set. As a consequence of the quasi-additive bounds, we prove that the Peierls argument of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)] generalises to the whole interval α ∈ [0,1). We also state the result of Cassandro et al. [Commun. Math. Phys. 327, 951-991 (2014)] about cluster expansions, which implies that Theorem 2.4 (concerning interfaces) and Theorem 2.5 (concerning n-point truncated correlation functions) of that paper are valid for all α ∈ [0,1) instead of only α ∈ [0, ln3/ln2 - 1).
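
    For orientation, the Hamiltonian in question can be written (our rendering, with the coupling constant normalized to 1) as

        H(\sigma) = -\sum_{i<j} \frac{\sigma_i \sigma_j}{|i-j|^{2-\alpha}},
        \qquad \sigma_i \in \{-1,+1\}, \quad \alpha \in [0,1),

    so that smaller α means faster decay of the two-body interaction and a weaker long-range ordering tendency.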

  12. Single-image-based Modelling Architecture from a Historical Photograph

    NASA Astrophysics Data System (ADS)

    Dzwierzynska, Jolanta

    2017-10-01

    Historical photographs have proved very useful for dimensional and geometrical analysis of buildings as well as for generating 3D reconstructions of whole structures. The paper addresses the problem of analysing a single historical photograph and modelling an architectural object from it. In particular, it focuses on reconstructing the original look of the New-Town synagogue from a single historic photograph for which the camera calibration is completely unknown. Because the photograph faithfully followed the geometric rules of perspective, it was possible to develop and apply a method to obtain a correct 3D reconstruction of the building. The modelling process consisted of a series of familiar steps: feature extraction, determination of the base elements of perspective, dimensional analysis and 3D reconstruction. Simple formulas were proposed to estimate the location of characteristic points of the building in a 3D Cartesian system of axes from their location in a 2D Cartesian system of axes. The reconstruction process proceeded well, although slight corrections were necessary. It was possible to reconstruct the shape of the building in general, and two of its facades in detail. The reconstruction of the other two facades requires additional information or an additional picture. The success of the presented reconstruction method depends on the geometrical content of the photograph as well as the quality of the picture, which must ensure the legibility of building edges. The presented method is a combination of descriptive reconstruction and computer aid, and therefore seems to be universal. It can prove useful for single-image-based modelling of architecture.

  13. Additive manufacturing: From implants to organs.

    PubMed

    Douglas, Tania S

    2014-05-12

    Additive manufacturing (AM) constructs 3D objects layer by layer under computer control from 3D models. 3D printing is one example of this kind of technology. AM offers geometric flexibility in its products and therefore allows customisation to suit individual needs. Clinical success has been shown with models for surgical planning, implants, assistive devices and scaffold-based tissue engineering. The use of AM to print tissues and organs that mimic nature in structure and function remains an elusive goal, but has the potential to transform personalised medicine, drug development and scientific understanding of the mechanisms of disease. 

  14. Sparse network-based models for patient classification using fMRI

    PubMed Central

    Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina

    2015-01-01

    Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants in two (event- and block-related) fMRI datasets, acquired while participants performed a gender discrimination task and an emotional task, respectively, during the visualization of emotionally valent faces. PMID:25463459
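
    A minimal rendering of the two-stage idea with scikit-learn stand-ins (our sketch, not the authors' pipeline; regularization choices and feature shapes are illustrative) is:

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV
        from sklearn.svm import LinearSVC

        def connectivity_features(ts):
            """ts: (time, regions) fMRI series -> vectorized sparse precision matrix."""
            prec = GraphicalLassoCV().fit(ts).precision_
            iu = np.triu_indices_from(prec, k=1)
            return prec[iu]

        def fit_classifier(subject_series, labels):
            """subject_series: list of (time, regions) arrays; labels: 0/1 groups."""
            X = np.vstack([connectivity_features(ts) for ts in subject_series])
            return LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, labels)

    Sparsity enters twice by design: the graphical lasso prunes spurious partial correlations within each subject, and the L1 penalty on the SVM keeps only the connections that discriminate between groups, which is what makes the resulting pattern interpretable.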

  15. RuleMonkey: software for stochastic simulation of rule-based models

    PubMed Central

    2010-01-01

    Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions: RuleMonkey enables the simulation of
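
    For orientation, the propensity/waiting-time core that network-free methods share with Gillespie's algorithm looks as follows for a toy A + B -> AB system (our sketch of the SSA principle only; RuleMonkey applies the same logic to rules and particle lists rather than an enumerated network):

        import random

        def gillespie(a=1000, b=800, k=1e-3, t_end=10.0):
            """Simulate A + B -> AB; returns the (time, AB count) trajectory."""
            t, ab, traj = 0.0, 0, [(0.0, 0)]
            while t < t_end and a > 0 and b > 0:
                t += random.expovariate(k * a * b)   # exponential waiting time
                a, b, ab = a - 1, b - 1, ab + 1      # fire the reaction
                traj.append((t, ab))
            return traj

    The rejection-free property highlighted above means that every iteration of such a loop changes the system state; methods with null events would sometimes draw a waiting time and then leave the state untouched.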

  16. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erdemir, Ali

    This project was funded under the Department of Energy (DOE) Lab Call on Nanomanufacturing for Energy Efficiency and was directed toward the development of novel boron-based nanocolloidal lubrication additives for improving the friction and wear performance of machine components in a wide range of industrial and transportation applications. Argonne's research team concentrated on the scientific and technical aspects of the project, using a range of state-of-the-art analytical and tribological test facilities. Argonne has extensive past experience and expertise in working with boron-based solid and liquid lubrication additives, and has intellectual property ownership of several. There were two industrial collaborators in this project: Ashland Oil (represented by its Valvoline subsidiary) and Primet Precision Materials, Inc. (a leading nanomaterials company). There was also a sub-contract with the University of Arkansas. The major objectives of the project were to develop novel boron-based nanocolloidal lubrication additives and to optimize and verify their performance under boundary-lubricated sliding conditions. The project also tackled problems related to colloidal dispersion, larger-scale manufacturing and blending of nano-additives with base carrier oils. Other important issues dealt with in the project were determination of the optimum size and concentration of the particles and compatibility with various base fluids and/or additives. Boron-based particulate additives considered in this project included boric acid (H{sub 3}BO{sub 3}), hexagonal boron nitride (h-BN), boron oxide, and borax. As part of this project, we also explored a hybrid MoS{sub 2} + boric acid formulation approach for more effective lubrication and reported the results. The major motivation behind this work was to reduce energy losses related to friction and wear in a wide spectrum of mechanical systems and thereby reduce our dependence on imported oil. Growing concern over

  17. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for the different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  18. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
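
    A toy contrast between the two error signals (our sketch, not the paper's analysis) is shown below: the model-free learner updates a cached value directly from its reward prediction error, while the model-based learner derives value from an explicit model of transitions and rewards, so its prediction error also moves when knowledge of task structure changes:

        def model_free_step(q, reward, alpha=0.1):
            rpe = reward - q                  # model-free reward prediction error
            return q + alpha * rpe, rpe

        def model_based_value(p_next, r_next):
            # expected value computed from the learner's internal world model
            return sum(p * r for p, r in zip(p_next, r_next))

        q, rpe = model_free_step(q=0.5, reward=1.0)    # cached-value update
        v = model_based_value([0.7, 0.3], [1.0, 0.0])  # model-derived value: 0.7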

  19. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
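
    The shared gain rule can be stated compactly. In a simplified rendering (ours, not the exact implementations), each time-frequency bin of the noisy signal is attenuated according to its estimated signal-to-noise ratio; the three algorithms differ mainly in how the noise power entering this rule is estimated:

        import numpy as np

        def wiener_gain(noisy_psd, noise_psd, floor=0.1):
            """Wiener-type gain per bin: SNR / (1 + SNR), lower-bounded."""
            snr = np.maximum(noisy_psd - noise_psd, 0.0) / (noise_psd + 1e-12)
            return np.maximum(snr / (1.0 + snr), floor)

    The gain floor limits musical-noise artifacts at the cost of residual noise, one reason subjective quality and intelligibility can diverge in such comparisons.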

  20. Innovative Additive for Bitumen Based on Processed Fats

    NASA Astrophysics Data System (ADS)

    Babiak, Michał; Kosno, Jacek; Ratajczak, Maria; Zieliński, Krzysztof

    2017-10-01

    Various additives, admixtures and modifiers are used to improve the technical properties and strength characteristics of building materials. Manufacturers of waterproofing materials, concrete, ceramics and bitumen have to use innovative, increasingly complex and costly additives, admixtures or modifiers. As a result, simple and inexpensive substances have been replaced by complex long-chain polymers, multi-component resins or plastics. For economic and ecological reasons, waste materials are increasingly used as additives, admixtures and modifiers. Nowadays the most commonly used physical modifiers of bitumen belong to the group of polymers - large molecular organic compounds of natural origin or resulting from planned chemical synthesis. Polymers are substances that do not chemically react with bitumen; they act as fillers or create a spatial network within the bitumen (so-called physical cross-linking). The development of organic chemistry has allowed the synthesis of a number of substances that chemically modify bitumen. The most promising are heterocyclic organic compounds belonging to the group of imidazolines. The aim of the study presented in this paper was to demonstrate the suitability of processed natural and post-refining fat waste (diamidoamine dehydrate) as a bitumen modifier. This paper discusses the impact of adding technical imidazoline on selected bitumen characteristics. Samples of bitumen 160/220, which is most commonly used for the production of waterproofing products, were analysed. For the base bitumen and the bitumen modified with technical imidazoline, the following measurements were taken: softening point by the Ball and Ring method, breaking point by the Fraass method, and needle penetration at 25°C. The samples were then aged using the TFOT laboratory method and the basic characteristics were determined again. The results showed that a small amount of imidazoline improved bitumen thermoplastic parameters at

  1. Amino-Acid Network Clique Analysis of Protein Mutation Non-Additive Effects: A Case Study of Lysozyme.

    PubMed

    Ming, Dengming; Chen, Rui; Huang, He

    2018-05-10

    Optimizing amino-acid mutations in enzyme design has been a very challenging task in modern bio-industrial applications. It is well known that many successful designs hinge on extensive correlations among mutations at different sites within the enzyme; however, the mechanism underpinning these correlations is far from clear. Here, we present a topology-based model to quantitatively characterize non-additive effects between mutations. The method is based on molecular dynamics simulations and amino-acid network clique analysis. It examines whether the two sites of a double-site mutation fall into a 3-clique structure, and associates this topological property of the mutational sites' spatial distribution with mutation additivity features. We analyzed 13 dual mutations of T4 phage lysozyme and found that the clique-based model successfully distinguishes highly correlated or non-additive double-site mutations from those additive ones whose component mutations have less correlation. We also applied the model to the protein Eglin c, whose structural topology is significantly different from that of T4 phage lysozyme, and found that the model can, to some extent, still identify non-additive mutations from additive ones. Our calculations showed that mutation non-additive effects may depend heavily on the structural topology relationship between mutation sites, which can be quantitatively determined using amino-acid network k-cliques. We also showed that double-site mutation correlations can be significantly altered by exerting a third mutation, indicating that more detailed physicochemical interactions should be considered along with the network clique-based model for a better understanding of this elusive mutation-correlation principle.
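
    The topological test itself is compact. Below is a hedged sketch (the contact-network construction, the 8 Å cutoff, and the use of a single structure are our illustrative assumptions; the paper works with networks derived from molecular dynamics simulations): two mutation sites belong to a common 3-clique exactly when they are in contact and share at least one contacting neighbor:

        import networkx as nx

        def in_3_clique(coords, i, j, cutoff=8.0):
            """coords: list of (x, y, z) residue positions; i, j: mutation sites."""
            g = nx.Graph()
            n = len(coords)
            for a in range(n):
                for b in range(a + 1, n):
                    d2 = sum((coords[a][m] - coords[b][m]) ** 2 for m in range(3))
                    if d2 <= cutoff ** 2:
                        g.add_edge(a, b)
            return g.has_edge(i, j) and bool(set(g[i]) & set(g[j]))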

  2. Modified signed-digit trinary addition using synthetic wavelet filter

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, K. M.; Razzaque, M. A.

    2000-09-01

    The modified signed-digit (MSD) number system has been a topic of interest as it allows for parallel carry-free addition of two numbers in digital optical computing. In this paper, a harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for optical implementation of an MSD trinary adder. The realization of carry-propagation-free addition of MSD trinary numerals is demonstrated using a synthetic HWJT correlator model. It is also shown that the proposed synthetic wavelet filter-based correlator delivers high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.
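
    To see why the digit set {-1, 0, 1} permits carry-free addition, consider the standard two-step transfer/weight decomposition (a runnable software rendering of the arithmetic; the paper realizes the equivalent logic optically via wavelet-filter correlation, which is not reproduced here):

        def msd_add(x, y):
            """Add two radix-2 MSD numbers (digits in {-1, 0, 1}), LSD first."""
            n = max(len(x), len(y)) + 1
            x = x + [0] * (n - len(x))
            y = y + [0] * (n - len(y))
            t = [0] * (n + 1)                 # transfer digits (shifted up)
            w = [0] * n                       # weight digits
            for i in range(n):
                s = x[i] + y[i]
                lower = x[i - 1] + y[i - 1] if i > 0 else 0
                if s == 2:    t[i + 1], w[i] = 1, 0
                elif s == -2: t[i + 1], w[i] = -1, 0
                elif s == 1:  t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
                elif s == -1: t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
            return [w[i] + t[i] for i in range(n)] + [t[n]]

    Because the choice of transfer digit looks one position down, the final digit-wise sum w + t never leaves {-1, 0, 1}: every position is resolved in two fixed steps regardless of word length, which is the property an optical correlator can exploit for parallelism.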

  3. Guide to APA-Based Models

    NASA Technical Reports Server (NTRS)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-based models were scored using data from three airports. This report is a guide to the APA-based models.

  4. Steady state phosphorus mass balance model during hemodialysis based on a pseudo one-compartment kinetic model.

    PubMed

    Leypoldt, John K; Agar, Baris U; Akonur, Alp; Gellens, Mary E; Culleton, Bruce F

    2012-11-01

    Mathematical models of phosphorus kinetics and mass balance during hemodialysis are in early development. We describe a theoretical phosphorus steady state mass balance model during hemodialysis based on a novel pseudo one-compartment kinetic model. The steady state mass balance model accounted for net intestinal absorption of phosphorus and phosphorus removal by both dialysis and residual kidney function. Analytical mathematical solutions were derived to describe time-dependent intradialytic and interdialytic serum phosphorus concentrations assuming hemodialysis treatments were performed symmetrically throughout a week. Results from the steady state phosphorus mass balance model are described for thrice weekly hemodialysis treatment prescriptions only. The analysis predicts 1) a minimal impact of dialyzer phosphorus clearance on predialysis serum phosphorus concentration using modern, conventional hemodialysis technology, 2) variability in the postdialysis-to-predialysis phosphorus concentration ratio due to differences in patient-specific phosphorus mobilization, and 3) the importance of treatment time in determining the predialysis serum phosphorus concentration. We conclude that a steady state phosphorus mass balance model can be developed based on a pseudo one-compartment kinetic model and that predictions from this model are consistent with previous clinical observations. The predictions from this mass balance model are theoretical and hypothesis-generating only; additional prospective clinical studies will be required for model confirmation.
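
    In schematic form (our notation, not necessarily the authors': C is the serum phosphorus concentration, V its distribution volume, C* the patient-specific concentration toward which phosphorus is mobilized with clearance K_M, and K_D, K_R the dialyzer and residual kidney clearances), a pseudo one-compartment balance of this kind reads

        V \frac{dC}{dt} = K_M (C^{*} - C) - (K_D + K_R)\, C,

    with K_D active only during treatment sessions. Solving this piecewise-linear equation over a symmetric weekly schedule yields the intradialytic and interdialytic concentration profiles, and the patient-specific K_M governs the postdialysis-to-predialysis concentration ratio noted above.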

  5. Quantitative evaluation of specific vulnerability to nitrate for groundwater resource protection based on process-based simulation model.

    PubMed

    Huan, Huan; Wang, Jinsheng; Zhai, Yuanzheng; Xi, Beidou; Li, Juan; Li, Mingxiao

    2016-04-15

    It has been shown that groundwater vulnerability assessment is an effective tool for groundwater protection. Quantitative assessment methods for specific vulnerability remain scarce owing to limited understanding of the complicated fate and transport processes of contaminants in the groundwater system. In this paper, a process-based simulation model for specific vulnerability to nitrate, using a 1D flow and solute transport model in the unsaturated vadose zone, is presented for groundwater resource protection. For this case study in Jilin City of northeast China, rate constants of denitrification and nitrification as well as adsorption constants of ammonium and nitrate in the vadose zone were acquired by laboratory experiments. The transfer time to the groundwater table, t50, was taken as the specific vulnerability indicator. Finally, overall vulnerability was assessed by establishing the relationship between groundwater net recharge, layer thickness and t50. The results suggested that the most vulnerable regions of Jilin City are mainly distributed in the floodplains of the Songhua River and Mangniu River. The least vulnerable areas mostly appear in the second terrace and the back of the first terrace. The combined area of low, relatively low and moderate vulnerability accounted for 76% of the study area, suggesting a relatively low likelihood of nitrate contamination. In addition, the sensitivity analysis showed that the most sensitive factors for specific vulnerability in the vadose zone included the groundwater net recharge rate, the physical properties of the soil medium and the rate constant of nitrate denitrification. By validating the suitability of the process-based simulation model for specific vulnerability and comparing it with an index-based method using a group of integrated indicators, it was shown that more realistic and accurate specific vulnerability mapping can be acquired with the process-based simulation model. In addition, the advantages, disadvantages, constraint conditions and

  6. The potential application of European market research data in dietary exposure modelling of food additives.

    PubMed

    Tennant, David Robin; Bruyninckx, Chris

    2018-03-01

    Consumer exposure assessments for food additives are incomplete without information about the proportions of foods in each authorised category that contain the additive. Such information has been difficult to obtain, but the Mintel Global New Products Database (GNPD) provides information about product launches across Europe over the past 20 years. These data can be searched to identify products with specific additives listed on product labels, and the numbers compared with total product launches for food and drink categories in the same database to determine the frequency of occurrence. There are uncertainties associated with the data, but these can be managed by adopting a cautious and conservative approach. GNPD data can be mapped to authorised food categories and to the food descriptions used in the EFSA Comprehensive European Food Consumption Surveys Database for exposure modelling. The data, when presented as percent occurrence, can be incorporated quantitatively into the EFSA ANS Panel's 'brand-loyal/non-brand-loyal' exposure model. Case studies of preservative, antioxidant, colour and sweetener additives showed that the impact of including occurrence data is greatest in the non-brand-loyal scenario. Recommendations for future research include identifying occurrence data for alcoholic beverages, linking regulatory food codes, FoodEx and GNPD product descriptions, developing the use of occurrence data for carry-over foods, and improving understanding of brand loyalty in consumer exposure models.

  7. Model-Based Linkage Analysis of a Quantitative Trait.

    PubMed

    Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H

    2017-01-01

    Linkage Analysis is a family-based method of analysis to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best fitting statistical model for the trait.

  8. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    NASA Astrophysics Data System (ADS)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost, based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on the portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
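
    Schematically (our notation: \xi_i the uncertain security returns, \xi_b the additive background risk, E and V the expected value and variance under uncertainty theory, r_0 a required return), the family of models discussed takes the form

        \min_{w} V\Bigl(\sum_i w_i \xi_i + \xi_b\Bigr)
        \quad \text{s.t.} \quad
        E\Bigl[\sum_i w_i \xi_i + \xi_b\Bigr] \ge r_0,
        \qquad \sum_i w_i = 1, \; w_i \ge 0,

    to which the liquidity and transaction-cost terms are added as further constraints or return adjustments; even when \xi_b is independent of the portfolio, its presence shifts the attainable mean-variance frontier relative to the classical model.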

  9. Circulation-based Modeling of Gravity Currents

    NASA Astrophysics Data System (ADS)

    Meiburg, E. H.; Borden, Z.

    2013-05-01

    Atmospheric and oceanic flows driven by predominantly horizontal density differences, such as sea breezes, thunderstorm outflows, powder snow avalanches, and turbidity currents, are frequently modeled as gravity currents. Efforts to develop simplified models of such currents date back to von Karman (1940), who considered a two-dimensional gravity current in an inviscid, irrotational and infinitely deep ambient. Benjamin (1968) presented an alternative model, focusing on the inviscid, irrotational flow past a gravity current in a finite-depth channel. More recently, Shin et al. (2004) proposed a model for gravity currents generated by partial-depth lock releases, considering a control volume that encompasses both fronts. All of the above models, in addition to the conservation of mass and horizontal momentum, invoke Bernoulli's law along some specific streamline in the flow field, in order to obtain a closed system of equations that can be solved for the front velocity as function of the current height. More recent computational investigations based on the Navier-Stokes equations, on the other hand, reproduce the dynamics of gravity currents based on the conservation of mass and momentum alone. We propose that it should therefore be possible to formulate a fundamental gravity current model without invoking Bernoulli's law. The talk will show that the front velocity of gravity currents can indeed be predicted as a function of their height from mass and momentum considerations alone, by considering the evolution of interfacial vorticity. This approach does not require information on the pressure field and therefore avoids the need for an energy closure argument such as those invoked by the earlier models. Predictions by the new theory are shown to be in close agreement with direct numerical simulation results. References Von Karman, T. 1940 The engineer grapples with nonlinear problems, Bull. Am. Math Soc. 46, 615-683. Benjamin, T.B. 1968 Gravity currents and related

  10. Model based inversion of ultrasound data in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R. A.

    2018-04-01

    Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of ultrasound interaction with defects in composites, to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of defect properties from analysis of measured ultrasound signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, in laminates displaying irregular surface geometry (roughness), as well as internal elastic heterogeneity (varying fiber density, porosity). Inversion of ultrasound data is demonstrated showing the quantitative extraction of delamination geometry and surface transmissivity. Additionally, data inversion is demonstrated for determination of surface roughness and internal heterogeneity, and the influence of these features on delamination characterization is examined. Estimation of porosity volume fraction is demonstrated when internal heterogeneity is attributed to porosity.

  11. Supramolecular Amino Acid Based Hydrogels: Probing the Contribution of Additive Molecules using NMR Spectroscopy

    PubMed Central

    Ramalhete, Susana M.; Nartowski, Karol P.; Sarathchandra, Nichola; Foster, Jamie S.; Round, Andrew N.; Angulo, Jesús

    2017-01-01

    Supramolecular hydrogels are composed of self‐assembled solid networks that restrict the flow of water. l‐Phenylalanine is the smallest molecule reported to date to form gel networks in water, and it is of particular interest due to its crystalline gel state. Single and multi‐component hydrogels of l‐phenylalanine are used herein as model materials to develop an NMR‐based analytical approach to gain insight into the mechanisms of supramolecular gelation. The structure and composition of the gel fibres were probed using PXRD, solid‐state NMR experiments and microscopic techniques. Solution‐state NMR studies probed the properties of free gelator molecules in equilibrium with bound molecules. The dynamics of exchange at the gel/solution interfaces was investigated further using high‐resolution magic angle spinning (HR‐MAS) and saturation transfer difference (STD) NMR experiments. This approach allowed the identification of which additive molecules contributed to modifying the material properties. PMID:28401991

  12. Feasibility and Scaling of Composite Based Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuttall, David; Chen, Xun; Kunc, Vlastimil

    2016-04-27

    Engineers and researchers at Oak Ridge National Lab's Manufacturing Demonstration Facility (ORNL MDF) collaborated with Impossible Objects (IO) in the characterization of PEEK-infused carbon fiber mat manufactured by means of CBAM composite-based additive manufacturing, a first-generation assembly methodology developed by Robert Swartz, Chairman, Founder, and CTO of Impossible Objects.[1] The first phase of this project focused on demonstration of CBAM for composite tooling. The outlined steps focused on selecting an appropriate shape that fit the current machine's build envelope, characterizing the resulting form, and presenting next steps for transitioning to a Phase II CRADA agreement. Phase I of collaborative research and development agreement NFE-15-05698 was initiated in April of 2015 with an introduction to Impossible Objects, and concluded in March of 2016 with a visit to Impossible Objects headquarters in Chicago, IL. Phase II as discussed herein is under consideration by Impossible Objects as of this writing.

  13. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    PubMed

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
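
    A sketch of the generator (ours; in the paper the memory function is derived from the empirical autocorrelation function rather than the toy decay used here) makes the "additive" structure explicit, with the high-wind probability depending linearly on the previous N states:

        import numpy as np

        def simulate(F, p_mean, n_steps, seed=0):
            """Additive binary chain: P(a_n = 1) = p_mean + sum_k F[k] * (a_{n-k} - p_mean)."""
            rng = np.random.default_rng(seed)
            N = len(F)
            a = list((rng.random(N) < p_mean).astype(int))   # warm-up history
            for _ in range(n_steps):
                recent = a[-N:]
                p = p_mean + sum(F[k] * (recent[-1 - k] - p_mean) for k in range(N))
                a.append(int(rng.random() < min(max(p, 0.0), 1.0)))
            return np.array(a[N:])

        F = 0.1 * np.exp(-np.arange(20) / 5.0)   # toy exponentially decaying memory
        series = simulate(F, p_mean=0.35, n_steps=10000)

    Because the empirical autocorrelation function is the model's only input, checking that the synthetic series reproduces it (and the derived backup and storage statistics) is the natural validation step.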

  14. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    NASA Astrophysics Data System (ADS)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  15. Publishing web-based guidelines using interactive decision models.

    PubMed

    Sanders, G D; Nease, R F; Owens, D K

    2001-05-01

    Commonly used methods for guideline development and dissemination do not enable developers to tailor guidelines systematically to specific patient populations and update guidelines easily. We developed a web-based system, ALCHEMIST, that uses decision models and automatically creates evidence-based guidelines that can be disseminated, tailored and updated over the web. Our objective was to demonstrate the use of this system with clinical scenarios that provide challenges for guideline development. We used the ALCHEMIST system to develop guidelines for three clinical scenarios: (1) Chlamydia screening for adolescent women, (2) antiarrhythmic therapy for the prevention of sudden cardiac death; and (3) genetic testing for the BRCA breast-cancer mutation. ALCHEMIST uses information extracted directly from the decision model, combined with the additional information from the author of the decision model, to generate global guidelines. ALCHEMIST generated electronic web-based guidelines for each of the three scenarios. Using ALCHEMIST, we demonstrate that tailoring a guideline for a population at high-risk for Chlamydia changes the recommended policy for control of Chlamydia from contact tracing of reported cases to a population-based screening programme. We used ALCHEMIST to incorporate new evidence about the effectiveness of implantable cardioverter defibrillators (ICD) and demonstrate that the cost-effectiveness of use of ICDs improves from $74 400 per quality-adjusted life year (QALY) gained to $34 500 per QALY gained. Finally, we demonstrate how a clinician could use ALCHEMIST to incorporate a woman's utilities for relevant health states and thereby develop patient-specific recommendations for BRCA testing; the patient-specific recommendation improved quality-adjusted life expectancy by 37 days. The ALCHEMIST system enables guideline developers to publish both a guideline and an interactive decision model on the web. This web-based tool enables guideline developers

  16. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
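
    The conversion from a fitted autoregressive coefficient to a Gauss-Markov correlation time is short enough to show. The first-order sketch below is our simplification (the paper develops higher-order AR-based GM models); repeating the fit on static records logged at different chamber temperatures exposes the temperature dependence reported above:

        import numpy as np

        def gm_correlation_time(x, dt):
            """Fit AR(1) to a sensor record; return the GM correlation time tau."""
            x = x - x.mean()
            phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # least-squares AR(1)
            return -dt / np.log(phi)   # from x_k = exp(-dt/tau) x_{k-1} + w_k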

  17. DFM flow by using combination between design based metrology system and model based verification at sub-50nm memory device

    NASA Astrophysics Data System (ADS)

    Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong

    2007-03-01

    As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity effects correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device performance. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images, and are able to extract information on whole-chip CD variation. Based on such results, OPC abnormalities were identified and design feedback items were also disclosed. Complementary approaches have been developed by EDA companies, such as model-based OPC verification: model-based verification is performed over the full chip area using a well-calibrated model, with the object of predicting potential weak points on the wafer and feeding back quickly to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and matched it with a model-based verification system to find the optimum combination of the two. In our study, a large amount of data from wafer results was classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed using the combination of design-based metrology and model-based verification tools.

  18. Additional historical solid rocket motor burns

    NASA Astrophysics Data System (ADS)

    Wiedemann, Carsten; Homeister, Maren; Oswald, Michael; Stabroth, Sebastian; Klinkrad, Heiner; Vörsmann, Peter

    2009-06-01

    The use of orbital solid rocket motors (SRM) is responsible for the release of a large number of slag and Al2O3 dust particles which contribute to the space debris environment. This contribution has been modeled for the ESA space debris model MASTER (Meteoroid and Space Debris Terrestrial Environment Reference). The current model version, MASTER-2005, is based on the simulation of 1076 orbital SRM firings which mainly contributed to the long-term debris environment. SRM firings in very low Earth orbits, which produce only short-lived particles, are not considered. A comparison of the modeled flux with impact data from returned surfaces shows that the shape and quantity of the modeled SRM dust distribution match those of recent Hubble Space Telescope (HST) solar array measurements very well. However, the absolute flux level for dust is under-predicted for some of the analyzed Long Duration Exposure Facility (LDEF) surfaces. This indicates that some past SRM firings are not included in the current event database. It is therefore necessary to investigate whether additional historical SRM burns, such as the retro-burns of low-orbiting re-entry capsules, may be responsible for these dust impacts. The most suitable candidates for these firings are the large number of SRM retro-burns of return capsules. This paper focuses on the SRM retro-burns of Russian photoreconnaissance satellites, which were used in high numbers during the time of the LDEF mission. It is discussed which types of satellites and motors may have been responsible for this historical contribution. Altogether, 870 additional SRM retro-burns have been identified. An important task is the identification of such missions to complete the current event database. Different types of motors were used to de-orbit both large satellites and small film-return capsules. The results of simulation runs are presented.

  19. A Novel Application of Agent-based Modeling: Projecting Water Access and Availability Using a Coupled Hydrologic Agent-based Model in the Nzoia Basin, Kenya

    NASA Astrophysics Data System (ADS)

    Le, A.; Pricope, N. G.

    2015-12-01

    Projections indicate that increasing population density, food production, and urbanization in conjunction with changing climate conditions will place stress on water resource availability. As a result, a holistic understanding of current and future water resource distribution is necessary for creating strategies to identify the most sustainable means of accessing this resource. Currently, most water resource management strategies rely on the application of global climate predictions to physically based hydrologic models to understand potential changes in water availability. However, the need to focus on understanding community-level social behaviors that determine individual water usage is becoming increasingly evident, as predictions derived only from hydrologic models cannot accurately represent the coevolution of basin hydrology and human water and land usage. Models that are better equipped to represent the complexity and heterogeneity of human systems and satellite-derived products in place of or in conjunction with historic data significantly improve preexisting hydrologic model accuracy and application outcomes. We used a novel agent-based sociotechnical model that combines the Soil and Water Assessment Tool (SWAT) and Agent Analyst and applied it in the Nzoia Basin, an area in western Kenya that is becoming rapidly urbanized and industrialized. Informed by a combination of satellite-derived products and over 150 household surveys, the combined sociotechnical model provided unique insight into how populations self-organize and make decisions based on water availability. In addition, the model depicted how population organization and current management alter water availability currently and in the future.

  20. Polarimetric subspace target detector for SAR data based on the Huynen dihedral model

    NASA Astrophysics Data System (ADS)

    Larson, Victor J.; Novak, Leslie M.

    1995-06-01

    Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.

  1. Review series: Examples of chronic care model: the home-based chronic care model: redesigning home health for high quality care delivery.

    PubMed

    Suter, Paula; Hennessey, Beth; Florez, Donna; Newton Suter, W

    2011-01-01

    Individuals with chronic obstructive pulmonary disease (COPD) face significant challenges due to frequent distressing dyspnea and deficits related to activities of daily living. Individuals with COPD are often hospitalized frequently for disease exacerbations, negatively impacting quality of life and increasing the healthcare expenditure burden. The home-based chronic care model (HBCCM) was designed to address the needs of patients with chronic diseases. This model facilitates the redesign of chronic care delivery within the home health sector by ensuring patient-centered, evidence-based care. The HBCCM's foundation is Dr. Edward Wagner's chronic care model, to which it adds four areas of focus: high-touch delivery, theory-based self-management, specialist oversight, and the use of technology. This article describes the model in detail and outlines how its use for patients with COPD can bring value to stakeholders across the health care continuum.

  2. 78 FR 35115 - Listing of Color Additives Exempt From Certification; Mica-Based Pearlescent Pigments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-12

    ... (EDI) of the additive from all sources for both the mean and high- intake consumer to an acceptable daily intake (ADI) level established by toxicological data. The EDI is determined by projections based... the issuance of Sec. 73.350 we calculated a cumulative EDI (CEDI) for the use of mica-based...

  3. The influence of Pb addition on the properties of fly ash-based geopolymers.

    PubMed

    Nikolić, Violeta; Komljenović, Miroslav; Džunuzović, Nataša; Miladinović, Zoran

    2018-05-15

    Preventing or reducing negative environmental effects from waste landfilling is the main goal defined by the European Landfill Directive. Generally, geopolymers can be considered sustainable binders for the immobilization of hazardous wastes containing different toxic elements. In this paper, the influence of a high lead addition on the structure, strength, and leaching behavior (the effectiveness of Pb immobilization) of fly ash-based geopolymers was investigated as a function of the curing conditions. Lead was added during the synthesis of the geopolymers in the form of a highly soluble salt, lead nitrate. Structural changes of the geopolymers resulting from lead addition/immobilization were assessed by means of XRD, SEM/EDS, and 29Si MAS NMR analysis. The investigated curing conditions significantly influenced the structure, strength, and leaching behavior of the geopolymers. The high lead addition caused a sizeable decrease in the compressive strength of the geopolymers and promoted the formation of an aluminum-deficient aluminosilicate gel (depolymerization of the aluminosilicate gel), regardless of the curing conditions investigated. According to the EUWAC limitations, 4% of lead was successfully immobilized by fly ash-based geopolymers cured for 28 days in a humid chamber at room temperature. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. 78 FR 12271 - Wireline Competition Bureau Seeks Additional Comment In Connect America Cost Model Virtual Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Competition Bureau seeks public input on additional questions relating to modeling voice capability and Annual... submitting comments and additional information on the rulemaking process, see the SUPPLEMENTARY INFORMATION section of this document. FOR FURTHER INFORMATION CONTACT: Katie King, Wireline Competition Bureau at (202...

  5. Customizing G Protein-coupled receptor models for structure-based virtual screening.

    PubMed

    de Graaf, Chris; Rognan, Didier

    2009-01-01

    This review focuses on the construction, refinement, and validation of G protein-coupled receptor models for the purpose of structure-based virtual screening. Practical tips and tricks derived from concrete modeling and virtual screening exercises are presented to overcome the problems and pitfalls associated with the different steps of the receptor modeling workflow. These examples include not only rhodopsin-like (class A) but also secretin-like (class B) and glutamate-like (class C) receptors. In addition, the review presents a careful comparative analysis of current crystal structures and their implications for homology modeling. The following themes are discussed: i) the use of experimental anchors in guiding the modeling procedure; ii) amino acid sequence alignments; iii) ligand binding mode accommodation and binding cavity expansion; iv) proline-induced kinks in transmembrane helices; v) binding mode prediction and virtual screening by receptor-ligand interaction fingerprint scoring; vi) extracellular loop modeling; vii) virtual filtering schemes. Finally, an overview of several successful structure-based screening campaigns shows that receptor models, despite structural inaccuracies, can be used efficiently to find novel ligands.

  6. A synchrotron study of microstructure gradient in laser additively formed epitaxial Ni-based superalloy

    DOE PAGES

    Xue, Jiawei; Zhang, Anfeng; Li, Yao; ...

    2015-10-08

    Laser additive forming is considered to be one of the promising techniques to repair single crystal Ni-based superalloy parts to extend their life and reduce the cost. Preservation of the single crystalline nature and prevention of thermal mechanical failure are two of the most essential issues for the application of this technique. Here we employ synchrotron X-ray microdiffraction to evaluate the quality in terms of crystal orientation and defect distribution of a Ni-based superalloy DZ125L directly formed by a laser additive process rooted from a single crystalline substrate of the same material. We show that a disorientation gradient caused by a high density of geometrically necessary dislocations and resultant subgrains exists in the interfacial region between the epitaxial and stray grains. This creates a potential relationship of stray grain formation and defect accumulation. In conclusion, the observation offers new directions on the study of performance control and reliability of the laser additive manufactured superalloys.

  7. A synchrotron study of microstructure gradient in laser additively formed epitaxial Ni-based superalloy.

    PubMed

    Xue, Jiawei; Zhang, Anfeng; Li, Yao; Qian, Dan; Wan, Jingchun; Qi, Baolu; Tamura, Nobumichi; Song, Zhongxiao; Chen, Kai

    2015-10-08

    Laser additive forming is considered to be one of the promising techniques to repair single crystal Ni-based superalloy parts to extend their life and reduce the cost. Preservation of the single crystalline nature and prevention of thermal mechanical failure are two of the most essential issues for the application of this technique. Here we employ synchrotron X-ray microdiffraction to evaluate the quality in terms of crystal orientation and defect distribution of a Ni-based superalloy DZ125L directly formed by a laser additive process rooted from a single crystalline substrate of the same material. We show that a disorientation gradient caused by a high density of geometrically necessary dislocations and resultant subgrains exists in the interfacial region between the epitaxial and stray grains. This creates a potential relationship of stray grain formation and defect accumulation. The observation offers new directions on the study of performance control and reliability of the laser additive manufactured superalloys.

  8. A synchrotron study of microstructure gradient in laser additively formed epitaxial Ni-based superalloy

    PubMed Central

    Xue, Jiawei; Zhang, Anfeng; Li, Yao; Qian, Dan; Wan, Jingchun; Qi, Baolu; Tamura, Nobumichi; Song, Zhongxiao; Chen, Kai

    2015-01-01

    Laser additive forming is considered to be one of the promising techniques to repair single crystal Ni-based superalloy parts to extend their life and reduce the cost. Preservation of the single crystalline nature and prevention of thermal mechanical failure are two of the most essential issues for the application of this technique. Here we employ synchrotron X-ray microdiffraction to evaluate the quality in terms of crystal orientation and defect distribution of a Ni-based superalloy DZ125L directly formed by a laser additive process rooted from a single crystalline substrate of the same material. We show that a disorientation gradient caused by a high density of geometrically necessary dislocations and resultant subgrains exists in the interfacial region between the epitaxial and stray grains. This creates a potential relationship of stray grain formation and defect accumulation. The observation offers new directions on the study of performance control and reliability of the laser additive manufactured superalloys. PMID:26446425

  9. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
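
    For intuition, the sketch below computes benchmark doses from closed-form logistic and quantal-linear dose-response curves and combines them by sampling each model according to assumed posterior weights. The parameter draws and model weights are stand-ins for the MCMC and BMA outputs described above, not values from the study.

```python
# Minimal sketch of Bayesian-model-averaged BMD estimation with two assumed
# dose-response forms (logistic and quantal-linear); illustrative values only.
import numpy as np

def bmd_logistic(a, b, bmr=0.10):
    """Dose giving extra risk bmr over background for P(d)=1/(1+exp(-(a+b*d)))."""
    p0 = 1.0 / (1.0 + np.exp(-a))
    target = p0 + bmr * (1.0 - p0)
    return (np.log(target / (1.0 - target)) - a) / b

def bmd_quantal_linear(beta, bmr=0.10):
    """Dose giving extra risk bmr for P(d)=g+(1-g)*(1-exp(-beta*d));
    the extra-risk BMD is independent of the background g."""
    return -np.log(1.0 - bmr) / beta

# Posterior draws of each model's parameters would come from MCMC; here we
# imitate them with normal draws around assumed posterior means.
rng = np.random.default_rng(42)
bmd_log = bmd_logistic(rng.normal(-3.0, 0.1, 5000), rng.normal(0.5, 0.05, 5000))
bmd_ql = bmd_quantal_linear(rng.normal(0.04, 0.004, 5000))

w_log = 0.6                      # assumed posterior probability of the logistic model
pick = rng.random(5000) < w_log  # mixture draw implements the model averaging
bma = np.where(pick, bmd_log, bmd_ql)
print("BMA BMD median:", np.median(bma), " BMDL (5th pct):", np.percentile(bma, 5))
```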

  10. A Novel Marking Reader for Progressive Addition Lenses Based on Gabor Holography.

    PubMed

    Perucho, Beatriz; Picazo-Bueno, José Angel; Micó, Vicente

    2016-05-01

    Progressive addition lenses (PALs) are marked with permanent engraved marks (PEMs) at standardized locations. Permanent engraved marks are very useful through the manufacturing and mounting processes, act as locator marks to re-ink the removable marks, and contain useful information about the PAL. However, PEMs are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted lenses or on lenses with antireflection or scratch-resistant coatings. The aim of this article is to present a new generation of portable marking reader based on an extremely simplified concept for the visualization and identification of PEMs in PALs. Permanent engraved marks on different PALs are visualized using classical Gabor holography as the underlying principle. Gabor holography allows phase sample visualization with adjustable magnification and can be implemented in either classical or digital versions. Here, visual Gabor holography is used to provide a magnified defocused image of the PEMs on a translucent visualization screen where the PEM is clearly identified. Different types of PALs (conventional, personalized, old and scratched, sunglasses, etc.) have been tested to visualize PEMs with the proposed marking reader. The PEMs are visible in every case, and a variable magnification factor can be achieved simply by moving the PAL up and down in the instrument. In addition, a second illumination wavelength was also tested, showing the applicability of this novel marking reader for different illuminations. A new concept of marking reader ophthalmic instrument has been presented and validated in the laboratory. The configuration involves only a commercial-grade laser diode and a visualization screen for PEM identification. The instrument is portable, economical, and easy to use, and it can be used for identifying a patient's current PAL model and for re-marking removable PALs or finding test points regardless of the age of the PAL, its scratches, tints, or coatings.

  11. State-of-the-Art Review on Physiologically Based Pharmacokinetic Modeling in Pediatric Drug Development.

    PubMed

    Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T

    2018-05-18

    Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these models can be ensured.

  12. Driver's mental workload prediction model based on physiological indices.

    PubMed

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study investigates the correlation between new drivers' MWL and their work performance, measured by the number of errors. Additionally, the group method of data handling is used to establish a driver MWL prediction model based on the subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the prediction model shows validity with an R^2 value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL from their physiological indices, and driving lesson plans can be designed to sustain an appropriate MWL as well as improve the driver's work performance.

  13. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to reduce the conservatism caused by considering the whole operating region for the approximated polynomials. It is shown that well-known stability conditions are special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
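
    For readers unfamiliar with LMI-based analysis, a minimal feasibility check of the classical quadratic-stability LMI (find P > 0 with A^T P + P A < 0) can be written with the cvxpy library as below. The subsystem matrix is an assumed example; the paper's relaxed conditions for fuzzy systems are considerably more involved.

```python
# Hedged sketch of an LMI feasibility check for quadratic stability,
# illustrating the kind of LMI solved in such analyses (not the paper's
# relaxed fuzzy-model conditions).
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable subsystem matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),               # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)     # pure feasibility problem
prob.solve()
print("status:", prob.status)        # 'optimal' means a Lyapunov matrix exists
print("P =\n", P.value)
```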

  14. Modeling and prediction of peptide drift times in ion mobility spectrometry using sequence-based and structure-based approaches.

    PubMed

    Zhang, Yiming; Jin, Quan; Wang, Shuting; Ren, Ren

    2011-05-01

    The mobile behavior of 1481 peptides in ion mobility spectrometry (IMS), generated by protease digestion of the Drosophila melanogaster proteome, is modeled and predicted based on two different types of characterization methods, i.e., a sequence-based approach and a structure-based approach. The sequence-based approach considers both the amino acid composition of a peptide and the local environment profile of each amino acid in the peptide; the structure-based approach is performed with the CODESSA protocol, which treats a peptide as a common organic compound and generates more than 200 statistically significant variables to characterize the whole structure profile of a peptide molecule. Subsequently, the nonlinear support vector machine (SVM) and Gaussian process (GP), as well as linear partial least squares (PLS) regression, are employed to correlate the structural parameters of the characterizations with the IMS drift times of these peptides. The obtained quantitative structure-spectrum relationship (QSSR) models are evaluated rigorously and investigated systematically via both one-deep and two-deep cross-validation as well as rigorous Monte Carlo cross-validation (MCCV). We also comprehensively compare the statistics arising from the different combinations of variable types and modeling methods, and find that the sequence-based approach gives QSSR models with better fitting ability and predictive power, but worse interpretability, than the structure-based approach. In addition, since the sequence-based approach does not require preparing energy-minimized peptide structures before modeling, it is considerably more efficient than the structure-based approach. Copyright © 2011 Elsevier Ltd. All rights reserved.
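
    A minimal sketch of one ingredient of this workflow, support vector regression evaluated by Monte Carlo cross-validation, is given below using scikit-learn. The descriptor matrix and drift times are synthetic stand-ins, not the peptide data used in the study.

```python
# Illustrative sketch of fitting drift times with support vector regression
# and Monte Carlo cross-validation (ShuffleSplit); data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))          # stand-in descriptor matrix
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(300)  # synthetic drift times

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
mccv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=1)  # MCCV scheme
scores = cross_val_score(model, X, y, cv=mccv, scoring="r2")
print(f"MCCV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```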

  15. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  16. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
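
    The division of labor between proxy and original model can be sketched as a Gauss-Newton loop in which the Jacobian is filled from the cheap proxy while residuals (and hence candidate parameter upgrades) are evaluated with the expensive model. Both models below are toy functions chosen for illustration, not PEST or the groundwater model described above.

```python
# Conceptual sketch (assumed, simplified): Gauss-Newton calibration where the
# Jacobian comes from a cheap proxy model while parameter upgrades are tested
# against the expensive model. Both models here are toy functions.
import numpy as np

def expensive_model(p):
    """Stand-in for the complex simulator (slow in practice)."""
    return np.array([np.exp(0.5 * p[0]) + p[1], p[0] * p[1], p[1] ** 2])

def proxy_jacobian(p):
    """Analytical Jacobian of a proxy that approximates the simulator."""
    return np.array([[0.5 * np.exp(0.5 * p[0]), 1.0],
                     [p[1], p[0]],
                     [0.0, 2.0 * p[1]]])

obs = expensive_model(np.array([0.8, 1.2]))   # synthetic "field" observations
p = np.array([0.0, 0.5])                      # initial parameter guess
for it in range(20):
    r = obs - expensive_model(p)              # residuals from the real model
    J = proxy_jacobian(p)                     # cheap derivatives from the proxy
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    p = p + step                              # upgrade tested on the real model
print("calibrated parameters:", p)
```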

  17. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920

  18. A sEMG model with experimentally based simulation parameters.

    PubMed

    Wheeler, Katherine A; Shimada, Hiroshima; Kumar, Dinesh K; Arjunan, Sridhar P

    2010-01-01

    A differential, time-invariant, surface electromyogram (sEMG) model has been implemented. While it is based on existing EMG models, the novelty of this implementation is that it assigns more accurate distributions of variables to create realistic motor unit (MU) characteristics. Variables such as muscle fibre conduction velocity, jitter (the change in the interpulse interval between subsequent action potential firings), and motor unit size are considered to follow normal distributions about experimentally obtained means. In addition, motor unit firing frequencies are given non-linear, type-based distributions in accordance with experimental results, and motor unit recruitment thresholds are related to the MU type. The model has been used to simulate single-channel differential sEMG signals from voluntary, isometric contractions of the biceps brachii muscle, and has been verified with experiments on three subjects. Comparison between simulated signals and experimental recordings shows that the root mean square (RMS) increases linearly with force in both cases; the simulated signals also show values and rates of change of RMS similar to the experimental signals.
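
    A minimal sketch of the parameter-assignment step, drawing motor-unit properties from normal distributions about assumed means and applying a size-based recruitment rule, is shown below. The means, spreads, and recruitment rule are illustrative assumptions, not the experimentally obtained values used in the study.

```python
# Minimal sketch (assumed parameter values) of drawing motor-unit properties
# from normal distributions about experimentally obtained means.
import numpy as np

rng = np.random.default_rng(7)
n_mu = 120                                       # motor units in the pool

cv = rng.normal(4.0, 0.4, n_mu)                  # conduction velocity, m/s
jitter_ms = np.abs(rng.normal(0.5, 0.1, n_mu))   # firing-interval jitter, ms
size = np.sort(np.abs(rng.normal(1.0, 0.5, n_mu)))  # MU size (arbitrary units)

# Size-based recruitment: larger (later-recruited) units need more drive.
recruit_threshold = size / size.max()
excitation = 0.6                                 # 60% of maximal drive
active = recruit_threshold <= excitation
print(f"{active.sum()} of {n_mu} motor units active at {excitation:.0%} drive")
print(f"mean CV of active units: {cv[active].mean():.2f} m/s")
```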

  19. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  20. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  1. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
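
    The unscented-transformation step can be illustrated as follows: sigma points of an uncertain future input are propagated through a toy nonlinear end-of-life model, and the weighted outputs give a mean, a variance, and deterministic bounds. The degradation model and input statistics below are assumptions for demonstration, not the authors' battery formulation.

```python
# Brief sketch of the unscented transformation: propagate sigma points of an
# uncertain input through a nonlinear model to estimate output mean/variance.
import numpy as np

def eol_model(load):
    """Hypothetical nonlinear map from average future load to end of life."""
    return 1000.0 / (1.0 + 0.5 * load ** 1.5)

mu, var = 2.0, 0.25          # assumed mean/variance of the future load
n, kappa = 1, 2.0            # input dimension and UT spread parameter
spread = np.sqrt((n + kappa) * var)
sigma_pts = np.array([mu, mu + spread, mu - spread])
weights = np.array([kappa, 0.5, 0.5]) / (n + kappa)

y = eol_model(sigma_pts)     # one model run per sigma point (deterministic)
y_mean = weights @ y
y_var = weights @ (y - y_mean) ** 2
print(f"EOL estimate: {y_mean:.1f} h, std: {np.sqrt(y_var):.1f} h")
print(f"deterministic bounds from sigma points: [{y.min():.1f}, {y.max():.1f}] h")
```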

  2. Eutectic Formation During Solidification of Ni-Based Single-Crystal Superalloys with Additional Carbon

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Ma, Dexin; Bührig-Polaczek, Andreas

    2017-11-01

    The nucleation behavior of γ/γ' eutectics during the solidification of a single-crystal superalloy with additional carbon was investigated using the directional solidification quenching method. The results show that nucleation of the γ/γ' eutectics can occur directly on the existing γ dendrites, directly in the remaining liquid, or on the primary MC-type carbides. The γ/γ' eutectics formed through the latter two mechanisms have crystal orientations different from that of the γ matrix. This suggests that conventional Ni-based single-crystal superalloy castings with additional carbon guarantee only the monocrystallinity of the γ matrix and some γ/γ' eutectics, and that, in addition to the carbides, other misoriented polycrystalline microstructures exist in what are macroscopically considered "single-crystal" superalloy castings.

  3. Docking-based classification models for exploratory toxicology ...

    EPA Pesticide Factsheets

    Background: Exploratory toxicology is a new emerging research area whose ultimate mission is that of protecting human health and the environment from risks posed by chemicals. In this regard, the ethical and practical limitations of animal testing have encouraged the promotion of computational methods for the fast screening of huge collections of chemicals available on the market. Results: We derived 24 reliable docking-based classification models able to predict the estrogenic potential of a large collection of chemicals having high quality experimental data, kindly provided by the U.S. Environmental Protection Agency (EPA). The predictive power of our docking-based models was supported by values of AUC, EF1% (EFmax = 7.1), -LR (at SE = 0.75) and +LR (at SE = 0.25) ranging from 0.63 to 0.72, from 2.5 to 6.2, from 0.35 to 0.67 and from 2.05 to 9.84, respectively. In addition, external predictions were successfully made on some representative known estrogenic chemicals. Conclusion: We show how structure-based methods, widely applied to drug discovery programs, can be adapted to meet the conditions of the regulatory context. Importantly, these methods enable one to employ the physicochemical information contained in the X-ray solved biological target and to screen structurally-unrelated chemicals.

  4. Model-based reasoning in the physics laboratory: Framework and initial results

    NASA Astrophysics Data System (ADS)

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to refine the framework and demonstrate its utility by documenting examples of model-based reasoning in the laboratory. When applied to the think-aloud interviews, the framework captures and differentiates students' model-based reasoning and helps identify areas of future research. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision. Finally, we document students' challenges in explicitly articulating assumptions when constructing models of experimental systems and further challenges in model construction due to students' insufficient prior conceptual understanding. A modeling perspective reframes many of the seemingly arbitrary technical details of measurement tools and apparatus as an opportunity for authentic and engaging scientific sense making.

  5. Sensor-Based Optimization Model for Air Quality Improvement in Home IoT.

    PubMed

    Kim, Jonghyuk; Hwangbo, Hyunwoo

    2018-03-23

    We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market.
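
    As a small illustration of the spline-based modeling step, the sketch below fits a natural cubic spline to sparse, synthetic air-quality readings with SciPy. The timestamps and PM2.5 values are invented for the example and are not from the study's sensor data.

```python
# Short sketch (synthetic data) of smoothing sparse indoor air-quality
# readings with a natural cubic spline.
import numpy as np
from scipy.interpolate import CubicSpline

hours = np.array([0, 4, 8, 12, 16, 20, 24])        # sparse sensor timestamps
pm25 = np.array([12, 15, 35, 28, 22, 40, 14])      # assumed PM2.5 readings

spline = CubicSpline(hours, pm25, bc_type="natural")  # natural cubic spline
fine_t = np.linspace(0, 24, 97)                       # 15-minute grid
estimate = spline(fine_t)
print("peak estimated PM2.5: %.1f at hour %.2f"
      % (estimate.max(), fine_t[estimate.argmax()]))
```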

  6. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    NASA Astrophysics Data System (ADS)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for the online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator is established. Additionally, the Popov hyperstability principle is used in the design of the adaptive law to guarantee accurate convergence. In order to reduce the oscillation of the identification results, this work introduces a first-order low-pass digital filter to improve the precision of the parameter estimation. The proposed scheme was then applied to a surface-mounted permanent magnet synchronous motor control system without any additional circuits and implemented using a DSP TMS320LF2812. The experimental results confirm the effectiveness of the proposed method.
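
    The smoothing stage can be illustrated with a first-order low-pass digital filter applied to a noisy stream of parameter estimates. The filter coefficient and the synthetic resistance estimates below are assumptions for demonstration, not values from the paper.

```python
# Tiny sketch of a first-order low-pass digital filter used to smooth an
# estimated parameter stream (coefficient and data assumed for illustration).
import numpy as np

def lowpass(x, alpha):
    """y[k] = alpha*x[k] + (1-alpha)*y[k-1]; smaller alpha -> heavier smoothing."""
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1.0 - alpha) * y[k - 1]
    return y

rng = np.random.default_rng(3)
raw_estimate = 0.12 + 0.02 * rng.standard_normal(500)   # noisy resistance estimate
smoothed = lowpass(raw_estimate, alpha=0.05)
print("raw std: %.4f, smoothed std: %.4f"
      % (raw_estimate.std(), smoothed[100:].std()))
```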

  7. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks

    PubMed Central

    Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.

    2015-01-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406

  8. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.

    PubMed

    Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M

    2015-09-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.

  9. Effects of waste glass additions on quality of textile sludge-based bricks.

    PubMed

    Rahman, Ari; Urabe, Takeo; Kishimoto, Naoyuki; Mizuhara, Shinji

    2015-01-01

    This research investigated the utilization of textile sludge as a substitute for clay in brick production. The addition of textile sludge to a brick specimen increased its porosity, thus reducing the quality of the product. However, the addition of waste glass to the production materials improved the quality of the brick in terms of both compressive strength and water absorption. Maximum compressive strength was observed with the following composition of waste materials: 30% textile sludge, 60% clay, and 10% waste glass. The melting of the waste glass clogged pores in the brick, which improved both water absorption performance and compressive strength. Moreover, a leaching test on a sludge-based brick containing 10% waste glass did not detect significant heavy metal concentrations in the leachates, with the product conforming to standard regulations. The recycling of textile sludge for brick production, when combined with waste glass additions, may thus be promising in terms of both product quality and environmental aspects.

  10. Research on manufacturing service behavior modeling based on block chain theory

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu

    2018-04-01

    According to the attribute characteristics of the machining process, manufacturing service behavior is divided into service, basic, process, and resource attributes, and an attribute information model of the manufacturing service is established. The manufacturing service behavior information is divided into public and private domains. Additionally, blockchain technology is introduced, and an information model of the manufacturing service based on the blockchain principle is established, which addresses the problem of sharing and securing processing-behavior information and ensures that data are not tampered with. Based on the key-pairing verification relationship, a selective publishing mechanism for manufacturing information is established, achieving the traceability of product data and guaranteeing processing quality.
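
    A minimal sketch of the hash-chaining idea, with public fields stored in the clear and private fields committed only as digests, is given below. The block layout and field names are hypothetical, chosen to illustrate tamper evidence rather than to reproduce the paper's scheme.

```python
# Minimal sketch of hash-chained, tamper-evident manufacturing records:
# each block commits to the previous one, and private fields are stored
# only as digests. Field names are hypothetical.
import hashlib
import json

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def make_block(prev_hash, public, private):
    body = {"prev": prev_hash,
            "public": public,                       # shared process data
            "private_digest": sha256(json.dumps(private, sort_keys=True))}
    return {**body, "hash": sha256(json.dumps(body, sort_keys=True))}

genesis = make_block("0" * 64, {"op": "cast"}, {"alloy_lot": "A17"})
step2 = make_block(genesis["hash"], {"op": "mill"}, {"tool_wear": 0.31})

# Tamper check: recomputing a block's hash exposes any modification.
body = {k: step2[k] for k in ("prev", "public", "private_digest")}
recomputed = sha256(json.dumps(body, sort_keys=True))
print("chain intact:", recomputed == step2["hash"])
```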

  11. Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV

    NASA Astrophysics Data System (ADS)

    Savin, Dmitry; Kosov, Mikhail

    2017-09-01

    We are developing the CHIPS-TPT physics library for the exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum, and quantum numbers in each reaction. Inclusive modeling reproduces only selected values while averaging over the others and imposes no such constraints. Exclusive modeling therefore makes it possible to simulate additional quantities, such as secondary-particle correlations and gamma-line broadening, and to avoid artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the available data in this energy region are mostly provided in the semi-inclusive ENDF-6 format. Imposing additional constraints on secondary particles complicates the modeling, but it also makes it possible to detect inconsistencies in the input data and to avoid errors that might go unnoticed in inclusive modeling.

  12. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Its philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps converging to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real-world sensor data, and the output from the simulated digital control system can then be compared to that of the old analog control system. Model based design can be compared to Agile software development: the Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  13. Process-based Modeling of Ammonia Emission from Beef Cattle Feedyards with the Integrated Farm Systems Model.

    PubMed

    Waldrip, Heidi M; Rotz, C Alan; Hafner, Sasha D; Todd, Richard W; Cole, N Andy

    2014-07-01

    Ammonia (NH3) volatilization from manure in beef cattle feedyards results in loss of agronomically important nitrogen (N) and potentially leads to overfertilization and acidification of aquatic and terrestrial ecosystems. In addition, NH3 is involved in the formation of atmospheric fine particulate matter (PM2.5), which can affect human health. Process-based models have been developed to estimate NH3 emissions from various livestock production systems; however, little work has been conducted to assess their accuracy for large, open-lot beef cattle feedyards. This work describes the extension of an existing process-based model, the Integrated Farm Systems Model (IFSM), to include simulation of N dynamics in this type of system. To evaluate the model, IFSM-simulated daily per capita NH3 emission rates were compared with emissions data collected from two commercial feedyards in the Texas High Plains from 2007 to 2009. Model predictions were in good agreement with observations and were sensitive to variations in air temperature and dietary crude protein concentration. Predicted mean daily NH3 emission rates for the two feedyards had 71 to 81% agreement with observations. In addition, IFSM estimates of annual feedyard emissions were within 11 to 24% of observations, whereas a constant emission factor currently in use by the USEPA underestimated feedyard emissions by as much as 79%. The results from this study indicate that IFSM can quantify average feedyard NH3 emissions, assist with emissions reporting, provide accurate information for legislators and policymakers, investigate methods to mitigate NH3 losses, and evaluate the effects of specific management practices on farm nutrient balances. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  14. Stochastic agent-based modeling of tuberculosis in Canadian Indigenous communities.

    PubMed

    Tuite, Ashleigh R; Gallant, Victor; Randell, Elaine; Bourgeois, Annie-Claude; Greer, Amy L

    2017-01-13

    In Canada, active tuberculosis (TB) disease rates remain disproportionately higher among the Indigenous population, especially among the Inuit in the north. We used mathematical modeling to evaluate how interventions might enhance existing TB control efforts in a region of Nunavut. We developed a stochastic, agent-based model of TB transmission that captured the unique household and community structure. Evaluated interventions included: (i) rapid treatment of active cases; (ii) rapid contact tracing; (iii) expanded screening programs for latent TB infection (LTBI); and (iv) reduced household density. The outcomes of interest were incident TB infections and total diagnosed active TB disease over a 10-year time period. Model-projected incidence in the absence of additional interventions was highly variable (range: 33-369 cases) over 10 years. Compared to the 'no additional intervention' scenario, reducing the time between onset of active TB disease and initiation of treatment reduced both the number of new TB infections (47% reduction, relative risk of TB = 0.53) and diagnoses of active TB disease (19% reduction, relative risk of TB = 0.81). Expanding general population screening was also projected to reduce the burden of TB, although these findings were sensitive to assumptions around the relative amount of transmission occurring outside of households. Other potential interventions examined in the model (school-based screening, rapid contact tracing, and reduced household density) were found to have limited effectiveness. In a region of northern Canada experiencing a significant TB burden, more rapid treatment initiation in active TB cases was the most impactful intervention evaluated. Mathematical modeling can provide guidance for allocation of limited resources in a way that minimizes disease transmission and protects population health.

  15. The impact of design-based modeling instruction on seventh graders' spatial abilities and model-based argumentation

    NASA Astrophysics Data System (ADS)

    McConnell, William J.

    Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. Students perceived 3D printed

  16. Improving the Cold Temperature Properties of Tallow-Based Methyl Ester Mixtures Using Fractionation, Blending, and Additives

    NASA Astrophysics Data System (ADS)

    Elwell, Caleb

    biodiesel, only two of the additives had any significant effect on TME CP. The additive formulated by Meat & Livestock Australia (MLA) outperformed Evonik's Viscoplex 10-530. The MLA additive was investigated further and its effect on CP was characterized in pure TME and in CME/TME blends. When mixed in CME/TME blends, the MLA additive had a synergistic effect, producing lower CPs than would be expected from separately mixing MLA into TME and blending CME with TME. To evaluate the cold temperature properties of TME blended with petroleum diesel, CPs of TME/diesel blends from 0 to 100% were measured. The TME/diesel blends were treated with the MLA additive to determine its effects under these blend conditions; the MLA additive also had a synergistic effect when mixed in TME/diesel blends. Finally, all three of the TME CP reduction methods were evaluated in an economic model to determine the conditions under which each method would be economically viable. Each of the CP reduction methods was compared using a common metric based on the cost of reducing the CP of 1 gallon of finished biodiesel by 1°C (i.e., $/gal/°C). Since the cost of each method depends on varying commodity prices, further development of the economic model (which was developed and tested with 2012 prices) to account for stochastic variation in commodity prices is recommended.

  17. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  18. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  19. Modeling Citation Networks Based on Vigorousness and Dormancy

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Wen; Zhang, Li-Jie; Yang, Guo-Hong; Xu, Xin-Jian

    2013-08-01

    In citation networks, the activity of papers usually decreases with age, and dormant papers may be rediscovered and become fashionable again. To model this phenomenon, a competition mechanism is suggested that incorporates two factors: vigorousness and dormancy. Based on this idea, a citation network model is proposed in which a node has two discrete states: vigorous and dormant. Vigorous nodes can be deactivated, and dormant nodes may be activated and become vigorous again. The evolution of the network couples the addition of new nodes with state transitions of old ones. Both analytical calculation and numerical simulation show that the degree distribution of nodes in the generated networks displays a good right-skewed behavior. In particular, scale-free networks are obtained when the vertex to deactivate is selected in a targeted manner, and exponential networks are realized in the random-selection case. Moreover, measurements of four real-world citation networks achieve good agreement with the stochastic model.
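
    A rough simulation sketch in the spirit of such deactivation models is given below: each new node links to all currently vigorous nodes, after which one vigorous node is deactivated with probability inversely proportional to its degree (a targeted selection). The rules and parameters are assumptions for illustration, not the exact model of the paper.

```python
# Rough sketch (assumed rules) of network growth with node deactivation:
# new nodes cite all vigorous nodes; one vigorous node then goes dormant,
# chosen with probability inversely proportional to its degree.
import random

random.seed(1)
m = 3                                   # size of the vigorous (active) set
degree = {i: m - 1 for i in range(m)}   # fully connected seed network
vigorous = set(range(m))

for new in range(m, 3000):
    degree[new] = 0
    for v in list(vigorous):            # new paper cites every vigorous node
        degree[v] += 1
        degree[new] += 1
    vigorous.add(new)
    # Deactivate one vigorous node, preferring low-degree nodes (targeted).
    vig_list = list(vigorous)
    weights = [1.0 / degree[v] for v in vig_list]
    chosen = random.choices(vig_list, weights=weights)[0]
    vigorous.remove(chosen)

ks = sorted(degree.values(), reverse=True)
print("max degree:", ks[0], " mean degree:", sum(ks) / len(ks))
```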

  20. Review of the systems biology of the immune system using agent-based models.

    PubMed

    Shinde, Snehal B; Kurhekar, Manish P

    2018-06-01

    The immune system is an inherent protection system in vertebrate animals including human beings that exhibit properties such as self-organisation, self-adaptation, learning, and recognition. It interacts with the other allied systems such as the gut and lymph nodes. There is a need for immune system modelling to know about its complex internal mechanism, to understand how it maintains the homoeostasis, and how it interacts with the other systems. There are two types of modelling techniques used for the simulation of features of the immune system: equation-based modelling (EBM) and agent-based modelling. Owing to certain shortcomings of the EBM, agent-based modelling techniques are being widely used. This technique provides various predictions for disease causes and treatments; it also helps in hypothesis verification. This study presents a review of agent-based modelling of the immune system and its interactions with the gut and lymph nodes. The authors also review the modelling of immune system interactions during tuberculosis and cancer. In addition, they also outline the future research directions for the immune system simulation through agent-based techniques such as the effects of stress on the immune system, evolution of the immune system, and identification of the parameters for a healthy immune system.

  1. Effect of conductive additives to gel electrolytes on activated carbon-based supercapacitors

    NASA Astrophysics Data System (ADS)

    Barzegar, Farshad; Dangbegnon, Julien K.; Bello, Abdulhakeem; Momodu, Damilola Y.; Johnson, A. T. Charlie; Manyala, Ncholu

    2015-09-01

    This article focuses on polymer-based gel electrolytes, since polymers are cheap and can be used to achieve an extended potential window, improving the energy density of supercapacitor devices compared with aqueous electrolytes. Electrochemical characterization of symmetric supercapacitor devices based on activated carbon in different polyvinyl alcohol (PVA) based gel electrolytes was carried out. The device exhibited a maximum energy density of 24 Wh kg-1 when carbon black was added to the gel electrolyte as a conductive additive. The good energy density was correlated with the improved conductivity of the electrolyte medium, which favors fast ion transport in this relatively viscous environment. Most importantly, the device remained stable with no capacitance loss after 10,000 cycles.

  2. 3DNOW: Image-Based 3d Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  3. The extension of a DNA double helix by an additional Watson-Crick base pair on the same backbone.

    PubMed

    Kumar, Pawan; Sharma, Pawan K; Madsen, Charlotte S; Petersen, Michael; Nielsen, Poul

    2013-06-17

    Additional base pair: The DNA duplex can be extended with an additional Watson-Crick base pair on the same backbone by the use of double-headed nucleotides. These also work as compressed dinucleotides and form two base pairs with cognate nucleobases on the opposite strand. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We have employed two well-known propagation models, the free-space path loss and ITU models, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure uses one propagation model to infer the parameters of the space and the other to simulate signal propagation, without requiring any additional hardware besides Wi-Fi access points, which makes it suitable for real-world usage. Our method is also one of the few model-based, Wi-Fi-only, self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only inputs the method requires are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
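    The free-space path loss component of such a model is easy to state concretely. Below is a minimal sketch in Python of FSPL and its inversion from a received signal strength; the function names and example figures are illustrative only, and the paper's extended parameters are not reproduced here:

        import math

        def fspl_db(distance_m: float, freq_hz: float) -> float:
            # Free-space path loss in dB for a distance in metres and frequency in Hz.
            return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

        def distance_from_rssi(tx_power_dbm: float, rssi_dbm: float, freq_hz: float) -> float:
            # Invert the FSPL model to estimate distance from received signal strength.
            loss_db = tx_power_dbm - rssi_dbm
            return 10 ** ((loss_db + 147.55 - 20 * math.log10(freq_hz)) / 20)

        # Example: a 2.4 GHz access point transmitting at 20 dBm, received at -60 dBm.
        print(distance_from_rssi(20.0, -60.0, 2.4e9))

    In practice, indoor methods such as the one above replace pure free-space assumptions with wall attenuation and fitted parameters, which is precisely what the self-calibration step estimates.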

  5. Rust preventive oil additives based on microbial fats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salenko, V.I.; Fedorov, V.V.; Kazantsev, Yu.E.

    1983-03-01

    This article investigates the composition and lubricating properties of microbial fats obtained from microorganisms grown on various hydrocarbon substrates (n-paraffins, alcohols, natural gas, petroleum distillates, etc.). Focuses on the protective functions of the 4 main fractions (unsaponifiables, free fatty acids, glycerides, and phospholipids) which comprise the microbial fat from a yeast grown on purified liquid n-paraffins. Concludes that neutralized microbial fats can be used as preservative additives; that the principal components of the microbial fats have the properties necessary for oil-soluble corrosion inhibitors; that the phospholipids of the microbial fat can fulfill the functions of not only preservative additives, but also highly effective operational/preservative additives; and that fats of microbial origin can be used in the development of multipurpose polyfunctional additives.

  6. Modelling of a holographic interferometry based calorimeter for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Beigzadeh, A. M.; Vaziri, M. R. Rashidian; Ziaie, F.

    2017-08-01

    In this research work, a model for predicting the behaviour of holographic interferometry based calorimeters for radiation dosimetry is introduced. Using this technique for radiation dosimetry, via measuring the variations of refractive index due to the energy deposition of radiation, has several considerable advantages, such as extreme sensitivity and the ability to work without the normally used temperature sensors that disturb the radiation field. We have shown that the results of our model are in good agreement with experiments performed by other researchers under the same conditions. The model also reveals that these types of calorimeters have the additional and considerable merit of transforming the dose distribution into a set of discernible interference fringes.

  7. The use of generalised additive models (GAM) in dentistry.

    PubMed

    Helfenstein, U; Steiner, M; Menghini, G

    1997-12-01

    Ordinary multiple regression and logistic multiple regression are widely applied statistical methods which allow a researcher to 'explain' or 'predict' a response variable from a set of explanatory variables or predictors. In these models it is usually assumed that quantitative predictors such as age enter linearly into the model. During recent years these methods have been further developed to allow more flexibility in the way explanatory variables 'act' on a response variable. The methods are called 'generalised additive models' (GAM). The rigid linear terms characterising the association between response and predictors are replaced in an optimal way by flexible curved functions of the predictors (the 'profiles'). Plotting the 'profiles' allows the researcher to visualise easily the shape by which predictors 'act' over the whole range of values. The method facilitates detection of particular shapes such as 'bumps', 'U-shapes', 'J-shapes', 'threshold values', etc. Information about the shape of the association is not revealed by traditional methods. The shapes of the profiles may be checked by performing a Monte Carlo simulation ('bootstrapping'). After the presentation of the GAM, a relevant case study is presented in order to demonstrate the application and use of the method. The dependence of caries in primary teeth on a set of explanatory variables is investigated. Since GAMs may not be easily accessible to dentists, this article presents them in an introductory condensed form. It was thought that a nonmathematical summary and a worked example might encourage readers to consider the methods described. GAMs may be of great value to dentists in allowing visualisation of the shape by which predictors 'act', thereby giving a better understanding of the complex relationships between predictors and response.
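    To make the additive-model idea concrete, here is a minimal backfitting sketch in Python; it is a generic illustration (not the GAM software behind the paper), with a crude running-mean smoother standing in for the spline smoothers of a full GAM:

        import numpy as np

        def running_mean_smoother(x, r, window=15):
            # Crude local-average smoother standing in for a spline smoother.
            order = np.argsort(x)
            smoothed = np.empty_like(r)
            for rank, idx in enumerate(order):
                lo, hi = max(0, rank - window), min(len(x), rank + window + 1)
                smoothed[idx] = r[order[lo:hi]].mean()
            return smoothed

        def backfit(X, y, n_iter=20):
            # Fit y ~ alpha + f1(x1) + ... + fp(xp) by repeatedly smoothing
            # partial residuals against each predictor in turn.
            n, p = X.shape
            alpha = y.mean()
            f = np.zeros((n, p))
            for _ in range(n_iter):
                for j in range(p):
                    partial = y - alpha - f[:, np.arange(p) != j].sum(axis=1)
                    f[:, j] = running_mean_smoother(X[:, j], partial)
                    f[:, j] -= f[:, j].mean()  # centre each profile for identifiability
            return alpha, f

    Plotting each fitted column of f against its predictor gives exactly the 'profiles' discussed above.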

  8. Ultimate strength performance of tankers associated with industry corrosion addition practices

    NASA Astrophysics Data System (ADS)

    Kim, Do Kyun; Kim, Han Byul; Zhang, Xiaoming; Li, Chen Guang; Paik, Jeom Kee

    2014-09-01

    In ship and offshore structure design, age-related problems such as corrosion damage, local denting, and fatigue damage are important factors to be considered in building a reliable structure, as they have a significant influence on the residual structural capacity. In shipping, corrosion addition methods are widely adopted in structural design to prevent degradation of structural capacity. The present study focuses on the historical trend of corrosion addition rules for ship structural design and investigates their effects on the ultimate strength performance of the hull girder and stiffened panels of double hull oil tankers. Three types of rule-based corrosion addition models, namely historic corrosion rules (pre-CSR), Common Structural Rules (CSR), and harmonised Common Structural Rules (CSRH), are considered and compared with two other corrosion models, namely the UGS model, suggested by the Union of Greek Shipowners (UGS), and the Time-Dependent Corrosion Wastage Model (TDCWM). To identify the general trend in the effects of corrosion damage on ultimate longitudinal strength performance, the corrosion addition rules are applied to four representative sizes of double hull oil tankers, namely Panamax, Aframax, Suezmax, and VLCC. The results are helpful in understanding the trend of corrosion additions for tanker structures.

  9. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  10. Additivity vs Synergism: Investigation of the Additive Interaction of Cinnamon Bark Oil and Meropenem in Combinatory Therapy.

    PubMed

    Yang, Shun-Kai; Yusoff, Khatijah; Mai, Chun-Wai; Lim, Wei-Meng; Yap, Wai-Sum; Lim, Swee-Hua Erin; Lai, Kok-Song

    2017-11-04

    Combinatory therapies have been commonly applied in the clinical setting to tackle multi-drug resistant bacterial infections, and these have frequently proven effective. Specifically, combinatory therapies resulting in synergistic interactions between antibiotic and adjuvant have been the main focus due to their effectiveness, sidelining the effects of additivity, which also lowers the minimal effective dosage of either antimicrobial agent. This study was therefore undertaken to examine the effects of additivity between essential oils and an antibiotic, using cinnamon bark essential oil (CBO) and meropenem as a model for additivity. Comparisons between synergistic and additive interactions of CBO were performed in terms of the ability of CBO to disrupt the bacterial membrane, via zeta potential measurement, an outer membrane permeability assay, and scanning electron microscopy. It was found that the additive interaction between CBO and meropenem showed membrane disruption ability similar to that of the synergistic combinations previously reported. Hence, our results strongly suggest that additive interaction acts on a par with synergistic interaction. Further investigation of additive interactions between antibiotics and adjuvants should therefore be performed for a more in-depth understanding of the mechanism and impact of such interactions.

  11. Cost-effectiveness analysis of additional bevacizumab to pemetrexed plus cisplatin for malignant pleural mesothelioma based on the MAPS trial.

    PubMed

    Zhan, Mei; Zheng, Hanrui; Xu, Ting; Yang, Yu; Li, Qiu

    2017-08-01

    Malignant pleural mesothelioma (MPM) is a rare malignancy, and pemetrexed/cisplatin (PC) is the gold-standard first-line regimen. This study evaluated the cost-effectiveness of the addition of bevacizumab to PC (with maintenance bevacizumab) for unresectable MPM, based on a phase III trial that showed a survival benefit compared with chemotherapy alone. To estimate the incremental cost-effectiveness ratio (ICER) of incorporating bevacizumab, a Markov model based on the MAPS trial, including the disease states of progression-free survival, progressive disease, and death, was used. Total costs were calculated from a Chinese payer perspective, and health outcomes were converted into quality-adjusted life years (QALYs). Model robustness was explored in sensitivity analyses. The addition of bevacizumab to PC was estimated to increase the cost by $81446.69, with a gain of 0.112 QALYs, resulting in an ICER of $727202.589 per QALY. In both one-way and probabilistic sensitivity analyses, the ICER exceeded the commonly accepted willingness-to-pay threshold of 3 times the gross domestic product per capita of China ($23970.00 per QALY). The cost of bevacizumab had the most important impact on the ICER. The combination of bevacizumab with PC chemotherapy is not a cost-effective treatment option for MPM in China. Given its positive clinical value and the extremely low incidence of MPM, an appropriate price discount, assistance programs and medical insurance should be considered to make bevacizumab more affordable for this rare patient population. Copyright © 2017 Elsevier B.V. All rights reserved.
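    The ICER arithmetic underlying such an analysis is straightforward; a minimal sketch in Python, reusing the figures quoted in the abstract only as placeholders:

        def icer(delta_cost: float, delta_qaly: float) -> float:
            # Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
            return delta_cost / delta_qaly

        # Figures quoted above: +$81446.69 in cost for +0.112 QALYs.
        ratio = icer(81446.69, 0.112)
        wtp_threshold = 3 * 23970.00  # 3x China's per-capita GDP, per the abstract
        print(round(ratio, 3), ratio <= wtp_threshold)  # 727202.589 False -> not cost-effective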

  12. Genotypic variability-based genome-wide association study identifies non-additive loci HLA-C and IL12B for psoriasis.

    PubMed

    Wei, Wen-Hua; Massey, Jonathan; Worthington, Jane; Barton, Anne; Warren, Richard B

    2018-03-01

    Genome-wide association studies (GWASs) have identified a number of loci for psoriasis but have largely ignored non-additive effects. We report a genotypic variability-based GWAS (vGWAS) that can prioritize non-additive loci without requiring prior knowledge of interaction types or interacting factors. It proceeds in two steps: a mixed model partitions dichotomous phenotypes into an additive component and non-additive environmental residuals on the liability scale, and Levene's (Brown-Forsythe) test then assesses equality of the residual variances across genotype groups genome-wide. The vGWAS identified two genome-wide significant (P < 5.0e-08) non-additive loci, HLA-C and IL12B, that were also genome-wide significant in an accompanying GWAS in the discovery cohort. Both loci were statistically replicated in a vGWAS of an independent cohort with a small sample size. HLA-C and IL12B have been reported in moderate gene-gene and/or gene-environment interactions on several occasions. We found a moderate interaction with age-of-onset of psoriasis, which was replicated indirectly. The vGWAS also revealed five suggestive loci (P < 6.76e-05), including FUT2, which was associated with psoriasis with environmental aspects triggered by virus infection and/or metabolic factors. Replication and functional investigation are needed to validate the suggestive vGWAS loci.
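    The second step, testing equality of residual variances across genotype groups, corresponds to the Brown-Forsythe variant of Levene's test. A minimal sketch in Python using SciPy, on simulated residuals rather than the study's data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical liability-scale residuals for the three genotype groups at one SNP.
        res_aa = rng.normal(0.0, 1.0, 400)
        res_ab = rng.normal(0.0, 1.2, 300)
        res_bb = rng.normal(0.0, 1.5, 100)

        # center='median' gives the Brown-Forsythe version of Levene's test.
        stat, pval = stats.levene(res_aa, res_ab, res_bb, center='median')
        print(stat, pval)  # a small p-value flags variance heterogeneity at this locus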

  13. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

    A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept, with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires two types of tests constituting a data base: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
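    For orientation, the classical Kachanov damage rate law and the kind of stress-rate extension described above can be sketched as follows; the explicit form of the added term and the coefficient B are illustrative assumptions, not the paper's exact equation:

        \dot{\omega} = A \left( \frac{\sigma}{1 - \omega} \right)^{n}
        \qquad \longrightarrow \qquad
        \dot{\omega} = A \left( \frac{\sigma}{1 - \omega} \right)^{n} + B \, \dot{\sigma}

    Here \omega is the damage variable (\omega = 0 undamaged, \omega = 1 rupture), \sigma is the applied stress, and the added term is linear in the stress rate \dot{\sigma}, as the abstract states.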

  14. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
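    For reference, the additive hazards model referred to here has the standard textbook form (stated from general knowledge, not taken from the paper):

        \lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t)

    where \lambda_0(t) is an unspecified baseline hazard and \beta quantifies additive covariate effects; covariates shift the hazard additively, in contrast to the multiplicative Cox model.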

  15. Can agent based models effectively reduce fisheries management implementation uncertainty?

    NASA Astrophysics Data System (ADS)

    Drexler, M.

    2016-02-01

    Uncertainty is an inherent feature of fisheries management. Implementation uncertainty remains a challenge to quantify, often due to unintended responses of users to management interventions. This problem will continue to plague both single-species and ecosystem-based fisheries management advice unless the mechanisms driving these behaviors are properly understood. Equilibrium models, where each actor in the system is treated as uniform and predictable, are not well suited to forecasting the unintended behaviors of individual fishers. Alternatively, agent-based models (ABMs) can simulate the behaviors of each individual actor driven by differing incentives and constraints. This study evaluated the feasibility of using ABMs to capture macro-scale behaviors of the US West Coast Groundfish fleet. Agent behavior was specified at the vessel level. Agents made daily fishing decisions using knowledge of their own cost structure, catch history, and the histories of catch and quota markets. By adding only a relatively small number of incentives, the model was able to reproduce highly realistic macro patterns of expected outcomes in response to management policies (catch restrictions, MPAs, ITQs) while preserving vessel heterogeneity. These simulations indicate that agent-based modeling approaches hold much promise for simulating fisher behaviors and reducing implementation uncertainty. Additional processes affecting behavior, informed by surveys, are continually being added to the fisher behavior model. Further coupling of the fisher behavior model to a spatial ecosystem model will provide a fully integrated social, ecological, and economic model capable of performing management strategy evaluations that properly consider implementation uncertainty in fisheries management.

  16. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  17. Modeling visual-based pitch, lift and speed control strategies in hoverflies

    PubMed Central

    Vercher, Jean-Louis

    2018-01-01

    To avoid crashing onto the floor, a free-falling fly needs to trigger its wingbeats quickly and control the orientation of its thrust accurately and swiftly to stabilize its pitch and hence its speed. Behavioural data have suggested that the vertical optic flow produced by the fall and crossing the visual field plays a key role in this anti-crash response. Free-fall behavior analyses have also suggested that flying insects may not rely on graviception to stabilize their flight. Based on these two assumptions, we have developed a model which accounts for hoverflies' position and pitch orientation recorded in 3D with a fast stereo camera during experimental free falls. Our dynamic model shows that optic-flow-based control combined with closed-loop control of the pitch suffices to stabilize the flight properly. In addition, our model sheds new light on the visual-based feedback control of the fly's pitch, lift and thrust. Since graviceptive cues are possibly not used by flying insects, the use of a vertical reference to control the pitch is discussed, based on the results obtained with a complete dynamic model of a virtual fly falling in a textured corridor. This model would provide a useful tool for understanding more clearly how insects may or may not estimate their absolute attitude. PMID:29361632

  18. A stochastic HMM-based forecasting model for fuzzy time series.

    PubMed

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model, enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the conjectural and random nature of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index, forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus the result statistically approximates the real mean of the target value being forecast.

  19. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai

    2011-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability, and given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining-life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining-life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
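    As a generic illustration of the Kalman-filter framework mentioned above, a minimal scalar sketch in Python under assumed linear-Gaussian degradation dynamics (not the paper's empirical model or its parameters):

        import numpy as np

        def kalman_step(x, P, z, a=1.0, q=1e-4, h=1.0, r=1e-2):
            # One predict/update cycle for a scalar degradation state x
            # (e.g., normalised capacitance) given a noisy observation z.
            x_pred = a * x                          # predict state
            P_pred = a * P * a + q                  # predict variance
            K = P_pred * h / (h * P_pred * h + r)   # Kalman gain
            x_new = x_pred + K * (z - h * x_pred)   # correct with measurement
            P_new = (1.0 - K * h) * P_pred
            return x_new, P_new

        # Track a slowly degrading parameter from noisy measurements.
        rng = np.random.default_rng(1)
        truth = np.linspace(1.0, 0.8, 50)  # hypothetical degradation trend
        x, P = 1.0, 1.0
        for z in truth + rng.normal(0.0, 0.01, 50):
            x, P = kalman_step(x, P, z)
        print(x)  # estimate close to the final true value of 0.8

    A remaining-useful-life estimate then follows by extrapolating the filtered state to a failure threshold, which is the role the empirical degradation model plays above.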

  20. Optimizing simulated fertilizer additions using a genetic algorithm with a nutrient uptake model

    Treesearch

    Wendell P. Cropper; N.B. Comerford

    2005-01-01

    Intensive management of pine plantations in the southeastern coastal plain typically involves weed and pest control, and the addition of fertilizer to meet the high nutrient demand of rapidly growing pines. In this study we coupled a mechanistic nutrient uptake model (SSAND, soil supply and nutrient demand) with a genetic algorithm (GA) in order to estimate the minimum...

  1. Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model

    PubMed Central

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R.; Constable, Mic Andre; Mulligan, Margaret E.; Voura, Evelyn B.

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  2. Feature based Weld-Deposition for Additive Manufacturing of Complex Shapes

    NASA Astrophysics Data System (ADS)

    Panchagnula, Jayaprakash Sharma; Simhambhatla, Suryakumar

    2018-06-01

    Fabricating functional metal parts using Additive Manufacturing (AM) is a leading trend. However, realizing overhanging features has been a challenge due to the lack of a support mechanism for metals. Powder-bed fusion techniques like Selective Laser Sintering (SLS) employ easily breakable scaffolds made of the same material to realize the overhangs. However, the same approach does not extend to deposition processes like laser- or arc-based direct energy deposition. Although it is possible to realize small overhangs by exploiting the inherent overhanging capability of the process or by blinding some small features like holes, this cannot be extended to more complex geometries. The current work presents a novel approach for realizing complex overhanging features without the need for support structures. This is possible by using higher-order kinematics and suitably aligning the overhang with the deposition direction. Feature-based non-uniform slicing and non-uniform area-filling are some vital concepts required in realizing this, and they are briefly discussed here. The method can be used to fabricate and/or repair fully dense and functional components for various engineering applications. Although this approach has been implemented for a weld-deposition based system, it can be extended to any other direct energy deposition process.

  3. Impact of model-based risk analysis for liver surgery planning.

    PubMed

    Hansen, C; Zidowitz, S; Preim, B; Stavrou, G; Oldhafer, K J; Hahn, H K

    2014-05-01

    A model-based risk analysis for oncologic liver surgery was described in previous work (Preim et al. in Proceedings of the international symposium on computer assisted radiology and surgery (CARS), Elsevier, Amsterdam, pp. 353–358, 2002; Hansen et al. Int J Comput Assist Radiol Surg 4(5):469–474, 2009). In this paper, we present an evaluation of this method. To determine whether and how the risk analysis facilitates the process of liver surgery planning, an explorative user study with 10 liver experts was conducted. The purpose was to compare and analyze their decision-making. The results of the study show that model-based risk analysis enhances the awareness of surgical risk in the planning stage. Participants preferred smaller resection volumes and agreed more on the safety margins' width when the risk analysis was available. In addition, the time to complete the planning task and the confidence of participants were not increased when using the risk analysis. This work shows that the applied model-based risk analysis may influence important planning decisions in liver surgery. It lays a basis for further clinical evaluations and points out important fields for future research.

  4. 20 CFR 10.116 - What additional evidence is needed in cases based on occupational disease?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... based on occupational disease? 10.116 Section 10.116 Employees' Benefits OFFICE OF WORKERS' COMPENSATION... of Proof § 10.116 What additional evidence is needed in cases based on occupational disease? (a) The... particular occupational diseases. The medical report should also include the information specified on the...

  5. 20 CFR 10.116 - What additional evidence is needed in cases based on occupational disease?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... based on occupational disease? 10.116 Section 10.116 Employees' Benefits OFFICE OF WORKERS' COMPENSATION... of Proof § 10.116 What additional evidence is needed in cases based on occupational disease? (a) The... particular occupational diseases. The medical report should also include the information specified on the...

  6. 20 CFR 10.116 - What additional evidence is needed in cases based on occupational disease?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... based on occupational disease? 10.116 Section 10.116 Employees' Benefits OFFICE OF WORKERS' COMPENSATION... of Proof § 10.116 What additional evidence is needed in cases based on occupational disease? (a) The... particular occupational diseases. The medical report should also include the information specified on the...

  7. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Additive Manufacturing Design Considerations for Liquid Engine Components

    NASA Technical Reports Server (NTRS)

    Whitten, Dave; Hissam, Andy; Baker, Kevin; Rice, Darron

    2014-01-01

    The Marshall Space Flight Center's Propulsion Systems Department has gained significant experience in the last year designing, building, and testing liquid engine components using additive manufacturing. The department has developed valve, duct, turbo-machinery, and combustion device components using this technology. Many valuable lessons were learned during this process. These lessons will be the focus of this presentation. We will present criteria for selecting part candidates for additive manufacturing. Some part characteristics are 'tailor made' for this process. Selecting the right parts for the process is the first step to maximizing productivity gains. We will also present specific lessons we learned about feature geometry that can and cannot be produced using additive manufacturing machines. Most liquid engine components were made using a two-step process. The base part was made using additive manufacturing and then traditional machining processes were used to produce the final part. The presentation will describe design accommodations needed to make the base part and lessons we learned about which features could be built directly and which require the final machine process. Tolerance capabilities, surface finish, and material thickness allowances will also be covered. Additive Manufacturing can produce internal passages that cannot be made using traditional approaches. It can also eliminate a significant amount of manpower by reducing part count and leveraging model-based design and analysis techniques. Information will be shared about performance enhancements and design efficiencies we experienced for certain categories of engine parts.

  9. Adaptive Parameter Optimization of a Grid-based Conceptual Hydrological Model

    NASA Astrophysics Data System (ADS)

    Samaniego, L.; Kumar, R.; Attinger, S.

    2007-12-01

    Any spatially explicit hydrological model at the mesoscale is a conceptual approximation of the hydrological cycle and its dominant processes at this scale. Manual expert calibration of this type of model may become quite tedious, if not impossible, given the enormous amount of data required by these kinds of models and the intrinsic uncertainty of both the data (input-output) and the model structure. Additionally, the model should be able to reproduce well several processes, which are accounted for by a number of predefined objectives. As a consequence, some degree of automatic calibration is required to find "good" solutions, each one constituting a trade-off among all calibration criteria. In other words, it is very likely that a number of parameter sets fulfil the optimization criteria and can thus be considered model solutions. In this study, we dealt with two research questions: 1) How to assess the adequate level of model complexity so that model overparameterization is avoided? And, 2) How to find a good solution with a relatively low computational burden? In the present study, a grid-based conceptual hydrological model denoted HBV-UFZ, based on some of the original HBV concepts, was employed. This model was driven by 12 h precipitation, temperature, and PET grids which were acquired either from satellite products or from data of meteorological stations. In the latter case, the data were interpolated with external drift kriging. The first research question was addressed in this study with the implementation of nonlinear transfer functions that regionalize most model parameters as a function of other spatially distributed observables such as land cover (time dependent) and other time-independent basin characteristics such as soil type, slope, aspect, and geological formations, among others. The second question was addressed with an adaptive constrained optimization algorithm based on a parallel implementation of simulated annealing (SA
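    As a generic illustration of the simulated-annealing idea invoked here, a minimal single-chain sketch in Python (the study's parallel, adaptive, constrained variant is not reproduced, and the toy objective merely stands in for a calibration error measure):

        import math
        import random

        def simulated_annealing(cost, propose, x0, t0=1.0, cooling=0.995, steps=5000):
            # Minimise `cost`, occasionally accepting worse moves with a
            # temperature-dependent probability to escape local optima.
            x, fx, t = x0, cost(x0), t0
            best, fbest = x, fx
            for _ in range(steps):
                y = propose(x)
                fy = cost(y)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                t *= cooling
            return best, fbest

        best, err = simulated_annealing(
            cost=lambda p: (p - 3.2) ** 2,             # stand-in calibration error
            propose=lambda p: p + random.gauss(0, 0.1),
            x0=0.0)
        print(best, err)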

  10. The development of additive manufacturing technique for nickel-base alloys: A review

    NASA Astrophysics Data System (ADS)

    Zadi-Maad, Ahmad; Basuki, Arif

    2018-04-01

    Nickel-base alloys are attractive alloys due to their excellent mechanical properties, including high resistance to creep deformation, corrosion, and oxidation. However, controlling performance when casting or forging this material is a hard task. In recent years, the additive manufacturing (AM) process has been implemented to replace the conventional directional solidification process for the production of nickel-base alloys. Due to its potentially lower cost and flexible manufacturing process, AM is considered a substitute for existing techniques. This paper provides a comprehensive review of previous work related to AM techniques for Ni-base alloys, highlighting current challenges and methods of solving them. The properties of conventionally manufactured Ni-base alloys are also compared with those of AM-fabricated alloys. The mechanical properties obtained from tension, hardness and fatigue tests are included, along with discussion of the effect of post-treatment processes. Recommendations for further work are also provided.

  11. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  12. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  13. 20 CFR 10.116 - What additional evidence is needed in cases based on occupational disease?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... based on occupational disease? 10.116 Section 10.116 Employees' Benefits OFFICE OF WORKERS' COMPENSATION... of Proof § 10.116 What additional evidence is needed in cases based on occupational disease? (a) The... occupational diseases. The medical report should also include the information specified on the checklist for...

  14. 20 CFR 10.116 - What additional evidence is needed in cases based on occupational disease?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... based on occupational disease? 10.116 Section 10.116 Employees' Benefits OFFICE OF WORKERS' COMPENSATION... of Proof § 10.116 What additional evidence is needed in cases based on occupational disease? (a) The... occupational diseases. The medical report should also include the information specified on the checklist for...

  15. Optimization of aeromedical base locations in New Mexico using a model that considers crash nodes and paths.

    PubMed

    Erdemir, Elif Tokar; Batta, Rajan; Spielman, Seth; Rogerson, Peter A; Blatt, Alan; Flanigan, Marie

    2008-05-01

    In a recent paper, Tokar Erdemir et al. (2008) introduce models for service systems with service requests originating from both nodes and paths. We demonstrate how to apply and extend their approach to an aeromedical base location application, with specific focus on the state of New Mexico (NM). The current aeromedical base locations of NM are selected without considering motor vehicle crash paths. Crash paths are the roads on which crashes occur, where each road segment has a weight signifying relative crash occurrence. We analyze the loss in accident coverage and location error for current aeromedical base locations. We also provide insights on the relevance of considering crash paths when selecting aeromedical base locations. Additionally, we look briefly at some of the tradeoff issues in locating additional trauma centers vs. additional aeromedical bases in the current aeromedical system of NM. Not surprisingly, tradeoff analysis shows that by locating additional aeromedical bases, we always attain the required coverage level with a lower cost than with locating additional trauma centers.

  16. Aggregation of gluten proteins in model dough after fibre polysaccharide addition.

    PubMed

    Nawrocka, Agnieszka; Szymańska-Chargot, Monika; Miś, Antoni; Wilczewska, Agnieszka Z; Markiewicz, Karolina H

    2017-09-15

    FT-Raman spectroscopy, thermogravimetry and differential scanning calorimetry were used to study changes in the structure of gluten proteins and their thermal properties as influenced by four dietary fibre polysaccharides (microcrystalline cellulose, inulin, apple pectin and citrus pectin) during development of a model dough. The flour reconstituted from wheat starch and wheat gluten was mixed with the polysaccharides in five concentrations: 3%, 6%, 9%, 12% and 18%. The obtained results showed that all polysaccharides induced similar changes in the secondary structure of gluten proteins concerning formation of aggregates (1604 cm⁻¹), H-bonded parallel- and antiparallel-β-sheets (1690 cm⁻¹) and H-bonded β-turns (1664 cm⁻¹). These changes concerned mainly glutenins, since β-structures are characteristic of them. The observed structural changes confirmed the hypothesis of partial dehydration of the gluten network after polysaccharide addition. The gluten aggregation and dehydration processes were also reflected in the DSC results, while the TGA results showed that the gluten network remained thermally stable after polysaccharide addition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEGs), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model has now been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
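    A back-of-the-envelope version of the underlying TEG physics follows from standard thermoelectric relations; the values below are illustrative, and this sketch is not the paper's Modelica model:

        def teg_power(seebeck_v_per_k, delta_t, r_internal, r_load):
            # Open-circuit voltage from the Seebeck effect, then the power
            # delivered to the load through the internal/load resistance divider.
            v_oc = seebeck_v_per_k * delta_t
            current = v_oc / (r_internal + r_load)
            return current ** 2 * r_load

        # Matched load (r_load == r_internal) maximises power transfer:
        # 0.05 V/K * 80 K = 4 V open circuit -> 1 A through 4 ohm -> 2 W in the load.
        print(teg_power(seebeck_v_per_k=0.05, delta_t=80.0, r_internal=2.0, r_load=2.0))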

  18. Comparison of risk assessment based on clinical judgement and Cariogram in addition to patient perceived treatment need.

    PubMed

    Hänsel Petersson, Gunnel; Åkerman, Sigvard; Isberg, Per-Erik; Ericson, Dan

    2016-07-07

    Predicting future risk for oral diseases, treatment need and prognosis are tasks performed daily in clinical practice. A large variety of methods have been reported, ranging from clinical judgement or "gut feeling", or even patient interviewing, to complex assessments of combinations of known risk factors. In clinical practice, there is a continuous search for less complicated and more valid tools for risk assessment. There is also a lack of knowledge about how different common methods relate to one another. The aim of this study was to investigate whether caries risk assessment (CRA) based on clinical judgement and the Cariogram model give similar results; in addition, to assess which factors from clinical status and history agree best with CRA based on clinical judgement, and how the patient's own perception of future oral treatment need corresponds with the sum of the examiners' risk score. Clinical examinations were performed on randomly selected individuals 20-89 years old living in Skåne, Sweden. In total, 451 individuals were examined, 51 % women. The clinical examination included caries detection, saliva samples and radiographic examination, together with history and a questionnaire. The examiners made a risk classification, and the authors made a second risk calculation according to the Cariogram. Of those assessed as low risk using the Cariogram, 69 % were also assessed as low risk based on clinical judgement. For the other risk groups the agreement was lower. Clinical variables that were significantly related to CRA based on clinical judgement were DS (decayed surfaces), DS combined with incipient lesions, DMFT (decayed, missed, filled teeth), plaque amount, history and soft drink intake. Patients' perception of future oral treatment need correlated to some extent with the sum of the examiners' risk score. The main finding was that CRA based on clinical judgement and the Cariogram model gave similar results for the groups that were predicted at low level of future

  19. Model-based surgical planning and simulation of cranial base surgery.

    PubMed

    Abe, M; Tabuchi, K; Goto, M; Uchino, A

    1998-11-01

    Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.

  20. Additional Samples: Where They Should Be Located

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilger, G. G., E-mail: jfelipe@ufrgs.br; Costa, J. F. C. L.; Koppe, J. C.

    2001-09-15

    Information for mine planning needs to be closely spaced compared with the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located in the same pattern as the original diamond drillhole net, but more closely spaced. This procedure is not the best, in the mathematical sense, for selecting a location. The impact of additional information in reducing uncertainty about the parameter being modeled is not the same everywhere within the deposit: some locations are more sensitive in reducing the local and global uncertainty than others. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and their spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation. These models share basically the same histogram and the same variogram obtained from the original data set. At each block belonging to the model, a value is obtained from the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index. This index was used to map zones of high variability. A value extracted from a given simulation is added to the original data set in a zone identified as erratic in the previous maps. The process of adding samples and simulating is repeated, and the benefit of the additional sample is evaluated. The benefit in terms of uncertainty reduction is measured locally and globally. The procedure proved robust and theoretically sound, mapping zones where additional information is most beneficial. A case study in a coal mine using coal seam thickness illustrates the method.
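    The selection loop described above can be caricatured in a few lines; the uncertainty index below (across-realisation coefficient of variation per block) is a hypothetical stand-in, since the paper's own index may differ:

        import numpy as np

        def next_sample_location(realizations):
            # realizations: (n_sims, n_blocks) array of equally probable simulated values.
            mean = realizations.mean(axis=0)
            std = realizations.std(axis=0)
            uncertainty = std / np.abs(mean)     # coefficient of variation per block
            return int(np.argmax(uncertainty))   # sample where uncertainty is largest

        rng = np.random.default_rng(2)
        sims = rng.normal(loc=2.0, scale=rng.uniform(0.1, 1.0, 50), size=(200, 50))
        print(next_sample_location(sims))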

  1. Deep learning model-based algorithm for SAR ATR

    NASA Astrophysics Data System (ADS)

    Friedlander, Robert D.; Levy, Michael; Sudkamp, Elizabeth; Zelnio, Edmund

    2018-05-01

    Many computer-vision-related problems have successfully applied deep learning to improve the error rates of image classification. In contrast to work on optical images, we have applied deep learning via a Siamese Neural Network (SNN) to classify synthetic aperture radar (SAR) images. This application of Automatic Target Recognition (ATR) utilizes an SNN made up of twin AlexNet-based Convolutional Neural Networks (CNNs). Using the processing power of GPUs, we trained the SNN with combinations of synthetic images on one twin and Moving and Stationary Target Acquisition and Recognition (MSTAR) measured images on the second twin. We trained the SNN with three target types (T-72, BMP2, and BTR-70) and used a representative synthetic model from each target to classify new SAR images. Even with a relatively small quantity of data (with respect to machine learning), we found that the SNN performed comparably to a CNN and converged faster. The results showed the T-72s to be the easiest to identify, whereas the network sometimes confused the BMP2s and the BTR-70s. We also incorporated two additional targets (M1 and M35) into the validation set; with less training (for example, a single additional epoch), the SNN did not match the results obtained when all five targets were trained over all epochs. Nevertheless, an SNN represents a novel and beneficial approach to SAR ATR.
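
    The twin-network idea is straightforward to sketch. The following is a minimal PyTorch example of a Siamese network with weight-shared branches and a contrastive loss; the tiny CNN, chip size, margin, and random tensors are placeholders, not the authors' AlexNet-based architecture or the MSTAR data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Twin(nn.Module):
    """Small CNN stand-in for the AlexNet-style twin described in the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.embed = nn.Linear(32 * 4 * 4, 64)

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

class Siamese(nn.Module):
    """Both branches share weights; inputs are (synthetic, measured) chip pairs."""
    def __init__(self):
        super().__init__()
        self.twin = Twin()

    def forward(self, a, b):
        return self.twin(a), self.twin(b)

def contrastive_loss(za, zb, same, margin=1.0):
    # Pull same-class pairs together, push different-class pairs apart.
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Toy batch: 8 pairs of 64x64 single-channel SAR-like chips, half same-class.
model = Siamese()
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
same = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.float32)
za, zb = model(a, b)
loss = contrastive_loss(za, zb, same)
loss.backward()
print(loss.item())
```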

  2. An investigation of the mentalization-based model of borderline pathology in adolescents.

    PubMed

    Quek, Jeremy; Bennett, Clair; Melvin, Glenn A; Saeedi, Naysun; Gordon, Michael S; Newman, Louise K

    2018-07-01

    According to mentalization-based theory, transgenerational transmission of mentalization from caregiver to offspring is implicated in the pathogenesis of borderline personality disorder (BPD). Recent research has demonstrated an association between hypermentalizing (excessive, inaccurate mental state reasoning) and BPD, indicating the particular relevance of this form of mentalizing dysfunction to the transgenerational mentalization-based model. As yet, no study has empirically assessed a transgenerational mentalization-based model of BPD. The current study sought firstly to test the mentalization-based model, and additionally, to determine the form of mentalizing dysfunction in caregivers (e.g., hypo- or hypermentalizing) most relevant to a hypermentalizing model of BPD. Participants were a mixed sample of adolescents with BPD and a sample of non-clinical adolescents, and their respective primary caregivers (n = 102; 51 dyads). Using an ecologically valid measure of mentalization, mediational analyses were conducted to examine the relationships between caregiver mentalizing, adolescent mentalizing, and adolescent borderline features. Findings demonstrated that adolescent mentalization mediated the effect of caregiver mentalization on adolescent borderline personality pathology. Furthermore, results indicated that hypomentalizing in caregivers was related to adolescent borderline personality pathology via an effect on adolescent hypermentalizing. Results provide empirical support for the mentalization-based model of BPD, and suggest the indirect influence of caregiver mentalization on adolescent borderline psychopathology. Results further indicate the relevance of caregiver hypomentalizing to a hypermentalizing model of BPD. Copyright © 2018 Elsevier Inc. All rights reserved.
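
    For readers unfamiliar with the mediational logic, the sketch below illustrates a product-of-coefficients mediation test with a percentile bootstrap on synthetic dyad data; it is a generic illustration of this analysis type, not the authors' exact procedure or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 102  # matches the reported sample size (51 dyads)

# Synthetic stand-ins: caregiver mentalizing (x), adolescent mentalizing (m),
# adolescent borderline features (y); all values are illustrative only.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.6 * m + 0.1 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y | x
    return a * b

# Percentile bootstrap CI for the indirect (mediated) effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```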

  3. PID-based error signal modeling

    NASA Astrophysics Data System (ADS)

    Yohannes, Tesfay

    1997-10-01

    This paper introduces PID-based error signal modeling. The error modeling is based on the betterment process. The resulting iterative learning algorithm is introduced, and a detailed proof is provided for both linear and nonlinear systems.
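
    A minimal numerical sketch of a PID-type iterative learning (betterment) update is given below; the first-order plant, the gains, and the one-step error advance are illustrative assumptions, not the paper's systems.

```python
import numpy as np

# PID-type iterative learning ("betterment") update over repeated trials:
#   u_{k+1} = u_k + kp*e_adv + ki*cumsum(e_adv)*dt + kd*d(e_adv)/dt,
# where e_adv is the one-step-advanced tracking error (the plant below has a
# one-step input delay).
kp, ki, kd = 0.8, 0.1, 0.02
dt, T = 0.01, 200
t = np.arange(T) * dt
ref = np.sin(2 * np.pi * t)          # reference trajectory, the same every trial

def plant(u):
    # First-order discrete plant: y[n+1] = 0.9*y[n] + 0.1*u[n] (an assumption).
    y = np.zeros_like(u)
    for n in range(len(u) - 1):
        y[n + 1] = 0.9 * y[n] + 0.1 * u[n]
    return y

u = np.zeros(T)
for k in range(100):                 # repeat the task, learning between trials
    e = ref - plant(u)
    e_adv = np.append(e[1:], 0.0)    # advance error one step for causality
    u += kp * e_adv + ki * np.cumsum(e_adv) * dt + kd * np.gradient(e_adv, dt)

print("final RMS tracking error:", float(np.sqrt(np.mean((ref - plant(u)) ** 2))))
```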

  4. The Individual, Joint, and Additive Interaction Associations of Aerobic-Based Physical Activity and Muscle Strengthening Activities on Metabolic Syndrome.

    PubMed

    Dankel, Scott J; Loenneke, Jeremy P; Loprinzi, Paul D

    2016-12-01

    Previous research has demonstrated that physical activity and muscle strengthening activities are independently and inversely associated with metabolic syndrome. Despite a number of studies examining the individual associations, only a few studies have examined the joint associations, and to our knowledge, no previous studies have examined the potential additive interaction of performing muscle strengthening activities and aerobic-based physical activity and their association with metabolic syndrome. Using data from the 2003 to 2006 National Health and Nutrition Examination Survey (NHANES), we computed three separate multivariable logistic regression models to examine the individual, combined, and additive interaction of meeting guidelines for accelerometer-assessed physical activity and self-reported muscle strengthening activities, and their association with metabolic syndrome. We found that individuals meeting the physical activity and muscle strengthening activity guidelines had, respectively, 61% and 25% lower odds of having metabolic syndrome. Furthermore, individuals meeting both guidelines had the lowest odds of having metabolic syndrome (70% lower), in part due to the additive interaction of performing both modes of exercise. In this national sample, accelerometer-assessed physical activity and muscle strengthening activities were synergistically associated with metabolic syndrome.
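
    The additive-interaction logic can be illustrated with a toy logistic regression. The sketch below fits the interaction model on synthetic NHANES-like data and computes the relative excess risk due to interaction (RERI) on the odds ratio scale; all coefficients and prevalences are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000

# Synthetic stand-in for NHANES-style data: two binary guideline indicators
# and a binary metabolic-syndrome outcome (numbers are illustrative only).
pa = rng.integers(0, 2, n)        # meets physical-activity guideline
ms = rng.integers(0, 2, n)        # meets muscle-strengthening guideline
lin = -1.0 - 0.9 * pa - 0.3 * ms - 0.3 * pa * ms
y = rng.random(n) < 1 / (1 + np.exp(-lin))
df = pd.DataFrame({"y": y.astype(int), "pa": pa, "ms": ms})

fit = smf.logit("y ~ pa + ms + pa:ms", data=df).fit(disp=0)
b = fit.params
or10 = np.exp(b["pa"])                          # PA only
or01 = np.exp(b["ms"])                          # MS only
or11 = np.exp(b["pa"] + b["ms"] + b["pa:ms"])   # both

# Relative excess risk due to interaction (RERI, on the OR scale): departure
# from additivity; RERI < 0 here means the joint exposure lowers odds more
# than the sum of the individual reductions.
reri = or11 - or10 - or01 + 1
print(f"OR(PA)={or10:.2f}, OR(MS)={or01:.2f}, OR(both)={or11:.2f}, RERI={reri:.2f}")
```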

  5. Sensor-Based Optimization Model for Air Quality Improvement in Home IoT

    PubMed Central

    Kim, Jonghyuk

    2018-01-01

    We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market. PMID:29570684
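
    As a small illustration of the interpolation step mentioned above, the sketch below fits a natural cubic spline (zero second derivative at the boundaries) through sparse sensor readings; the hours and PM2.5-like values are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hourly air-quality-like sensor readings (illustrative values), with a
# natural cubic spline used to interpolate between sparse measurements.
hours = np.array([0, 3, 6, 9, 12, 15, 18, 21])
pm25 = np.array([12.0, 15.5, 30.2, 22.1, 18.4, 25.0, 35.7, 20.3])

spline = CubicSpline(hours, pm25, bc_type="natural")
fine_t = np.linspace(0, 21, 64)
print(np.round(spline(fine_t[:8]), 2))  # interpolated readings
```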

  6. Mesh quality oriented 3D geometric vascular modeling based on parallel transport frame.

    PubMed

    Guo, Jixiang; Li, Shun; Chui, Yim Pan; Qin, Jing; Heng, Pheng Ann

    2013-08-01

    While a number of methods have been proposed to reconstruct geometrically and topologically accurate 3D vascular models from medical images, little attention has been paid to constantly maintain high mesh quality of these models during the reconstruction procedure, which is essential for many subsequent applications such as simulation-based surgical training and planning. We propose a set of methods to bridge this gap based on parallel transport frame. An improved bifurcation modeling method and two novel trifurcation modeling methods are developed based on 3D Bézier curve segments in order to ensure the continuous surface transition at furcations. In addition, a frame blending scheme is implemented to solve the twisting problem caused by frame mismatch of two successive furcations. A curvature based adaptive sampling scheme combined with a mesh quality guided frame tilting algorithm is developed to construct an evenly distributed, non-concave and self-intersection free surface mesh for vessels with distinct radius and high curvature. Extensive experiments demonstrate that our methodology can generate vascular models with better mesh quality than previous methods in terms of surface mesh quality criteria. Copyright © 2013 Elsevier Ltd. All rights reserved.
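
    The parallel transport frame underlying the method can be sketched compactly: each frame normal is carried along the centerline by the minimal rotation mapping one tangent to the next, which avoids the twisting of Frenet frames. The numpy sketch below implements this on a helical stand-in for a vessel centerline; it is a generic construction, not the paper's full pipeline.

```python
import numpy as np

def parallel_transport_frames(points):
    """Rotation-minimizing (parallel transport) frames along a polyline."""
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Initial normal: any unit vector orthogonal to the first tangent.
    n = np.cross(tangents[0], [0.0, 0.0, 1.0])
    if np.linalg.norm(n) < 1e-8:
        n = np.cross(tangents[0], [0.0, 1.0, 0.0])
    n /= np.linalg.norm(n)
    frames = []
    for i in range(len(points)):
        if i > 0:
            t0, t1 = tangents[i - 1], tangents[i]
            axis = np.cross(t0, t1)
            s = np.linalg.norm(axis)
            if s > 1e-10:
                axis /= s
                ang = np.arctan2(s, np.dot(t0, t1))
                # Rodrigues rotation of the normal about `axis` by `ang`.
                n = (n * np.cos(ang) + np.cross(axis, n) * np.sin(ang)
                     + axis * np.dot(axis, n) * (1 - np.cos(ang)))
        b = np.cross(tangents[i], n)
        frames.append((tangents[i], n / np.linalg.norm(n), b / np.linalg.norm(b)))
    return frames

# Helical centerline as a stand-in for a curved vessel path.
s = np.linspace(0, 4 * np.pi, 100)
pts = np.column_stack([np.cos(s), np.sin(s), 0.2 * s])
frames = parallel_transport_frames(pts)
print(np.round(frames[0][1], 3), np.round(frames[-1][1], 3))
```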

  7. Supplier Selection Using Weighted Utility Additive Method

    NASA Astrophysics Data System (ADS)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. The past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of a LP problem, which has its own objective function, minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that WUTA method, having a sound mathematical background, can provide accurate ranking to the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real time examples are illustrated to prove

  8. Applications of Metal Additive Manufacturing in Veterinary Orthopedic Surgery

    NASA Astrophysics Data System (ADS)

    Harrysson, Ola L. A.; Marcellin-Little, Denis J.; Horn, Timothy J.

    2015-03-01

    Veterinary medicine has undergone a rapid increase in specialization over the last three decades. Veterinarians now routinely perform joint replacement, neurosurgery, limb-sparing surgery, interventional radiology, radiation therapy, and other complex medical procedures. Many procedures involve advanced imaging and surgical planning. Evidence-based medicine has also become part of the modus operandi of veterinary clinicians. Modeling and additive manufacturing can provide individualized or customized therapeutic solutions to support the management of companion animals with complex medical problems. The use of metal additive manufacturing is increasing in veterinary orthopedic surgery. This review describes and discusses current and potential applications of metal additive manufacturing in veterinary orthopedic surgery.

  9. Drag reduction - Jet breakup correlation with kerosene-based additives

    NASA Technical Reports Server (NTRS)

    Hoyt, J. W.; Altman, R. L.; Taylor, J. J.

    1980-01-01

    The drag-reduction effectiveness of a number of high-polymer additives dissolved in aircraft fuel has been measured in a turbulent-flow rheometer. These solutions were further subjected to high elongational stress and breakup forces in a jet discharging in air. The jet was photographed using a high-resolution camera with special lighting. The object of the work was to study the possible spray-suppression ability of high-polymer additives to aircraft fuel and to correlate this with the drag-reducing properties of the additives. It was found, in fact, that the rheometer results indicate the most effective spray-suppressing additives. Using as a measure the minimum polymer concentration to give a maximum friction-reducing effect, the order of effectiveness of eight different polymer additives as spray-suppressing agents was predicted. These results may find application in the development of antimisting additives for aircraft fuel which may increase fire safety in case of crash or accident.

  10. Advances in High Temperature Materials for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Nordin, Nurul Amira Binti; Johar, Muhammad Akmal Bin; Ibrahim, Mohd Halim Irwan Bin; Marwah, Omar Mohd Faizan bin

    2017-08-01

    In today's technology, additive manufacturing has evolved over the years into what is commonly known as 3D printing. Currently, additive manufacturing is applied in many industries, such as automotive, aerospace, medical and other commercial products. These technologies are supported by materials for the manufacturing process to produce high quality products. Additive manufacturing technologies have grown from low through moderate to high technology to fulfil the obligations of the manufacturing industries: initially simple 3D printing such as fused deposition modelling (FDM), poly-jet and inkjet printing, then selective laser sintering (SLS) and electron beam melting (EBM). However, today's high-end additive manufacturing requires substantial investment to produce fine products. There are three foremost types of material used for additive manufacturing applications, namely polymer, metal and ceramic, mostly in the form of wire feedstock or powder. It is therefore crucial to recognize the characteristics of each type of material in order to understand its behaviour in high temperature applications via additive manufacturing. This review aims to provide an extensive inquiry and gather the necessary information for further research on additive materials for high temperature applications. This paper also proposes a new material based on glass powder, which comes from recycled tempered glass from the automotive industry and has huge potential for high temperature applications. The proposed additive manufacturing technique will minimize some modelling costs while achieving product quality comparable to the other advanced technologies used for high temperature applications.

  11. Lewis base catalyzed aldol additions of chiral trichlorosilyl enolates and silyl enol ethers.

    PubMed

    Denmark, Scott E; Fujimori, Shinji; Pham, Son M

    2005-12-23

    [structures: see text] The consequences of double diastereodifferentiation in chiral Lewis base catalyzed aldol additions using chiral enoxysilanes derived from lactate, 3-hydroxyisobutyrate, and 3-hydroxybutyrate have been investigated. Trichlorosilyl enolates derived from the chiral methyl and ethyl ketones were subjected to aldolization in the presence of phosphoramides, and the intrinsic selectivity of these enolates and the external stereoinduction from chiral catalyst were studied. In the reactions with the lactate derived enolate, the strong internal stereoinduction dominated the stereochemical outcome of the aldol addition. For the 3-hydroxyisobutyrate- and 3-hydroxybutyrate derived enolates, the catalyst-controlled diastereoselectivities were observed, and the resident stereogenic centers exerted marginal influence. The corresponding trimethylsilyl enol ethers were employed in SiCl4/bisphosphoramide catalyzed aldol additions, and the effect of double diastereodifferentiation was also investigated. The overall diastereoselection of the process was again controlled by the strong external influence of the catalyst.

  12. A continuum-based structural modeling approach for cellulose nanocrystals (CNCs)

    NASA Astrophysics Data System (ADS)

    Shishehbor, Mehdi; Dri, Fernando L.; Moon, Robert J.; Zavattieri, Pablo D.

    2018-02-01

    We present a continuum-based structural model to study the mechanical behavior of cellulose nanocrystals (CNCs), and analyze the effect of bonded and non-bonded interactions on the mechanical properties under various loading conditions. In particular, this model assumes the uncoupling of the bonded and non-bonded interactions, and their behavior is obtained from atomistic simulations. Our results indicate that the major contribution to the tensile and bending stiffness comes from the cellulose chain stiffness, while the shear behavior is mainly governed by van der Waals (VdW) forces. In addition, we report a negligible torsional stiffness, which may explain the tendency of CNCs to twist easily under very small or nonexistent torques. We also investigated the sensitivity of the mechanical properties to geometrical imperfections using an analytical model of the CNC structure. Our results indicate that the presence of imperfections has a small influence on the majority of the elastic properties. Finally, it is shown that a simple homogeneous and orthotropic representation of a CNC under bending underestimates the contribution of non-bonded interactions, leading to errors of up to 60% in the calculated bending stiffness of CNCs. The proposed model, on the other hand, can lead to more accurate predictions of the elastic behavior of CNCs. This is the first step toward the development of a more efficient model that can be used to capture the inelastic behavior of single and multiple CNCs.

  13. Marker-Based Estimates Reveal Significant Non-additive Effects in Clonally Propagated Cassava (Manihot esculenta): Implications for the Prediction of Total Genetic Value and the Selection of Varieties.

    PubMed

    Wolfe, Marnin D; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc

    2016-08-31

    In clonally propagated crops, non-additive genetic effects can be effectively exploited by the identification of superior genetic individuals as varieties. Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop that feeds hundreds of millions. We quantified the amount and nature of non-additive genetic variation for three key traits in a breeding population of cassava from sub-Saharan Africa using additive and non-additive genome-wide marker-based relationship matrices. We then assessed the accuracy of genomic prediction for total (additive plus non-additive) genetic value. We confirmed previous findings based on diallel populations that non-additive genetic variation is significant for key cassava traits. Specifically, we found that dominance is particularly important for root yield and that epistasis contributes strongly to variation in CMD resistance. Further, we showed that total genetic value predicted observed phenotypes more accurately than additive-only models for root yield, but not for dry matter content, which is mostly additive, or for CMD resistance, which has high narrow-sense heritability. We address the implications of these results for cassava breeding and put our work in the context of previous results in cassava and other plant and animal species. Copyright © 2016 Wolfe et al.
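
    A compact way to see the additive vs. non-additive comparison is a GBLUP-style kernel prediction with an additive relationship matrix alone versus an additive-plus-dominance matrix. The sketch below does this on synthetic marker data with closed-form kernel ridge solutions; the codings, variances, and the single regularization constant are illustrative assumptions, not the study's mixed-model analysis.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 300, 1000

# Synthetic biallelic markers coded 0/1/2, centered (stand-ins for SNPs).
geno = rng.integers(0, 3, size=(n, p)).astype(float)
Z = geno - geno.mean(axis=0)
# Dominance coding: 1 for heterozygotes, centered.
H = (geno == 1).astype(float)
H -= H.mean(axis=0)

A = Z @ Z.T / p          # additive genomic relationship matrix
D = H @ H.T / p          # dominance relationship matrix

# Simulate a trait with both additive and dominance variance.
u = Z @ rng.normal(0, 0.05, p) + H @ rng.normal(0, 0.05, p)
y = u + rng.normal(0, u.std(), n)

def kernel_blup_accuracy(K, lam=1.0, n_train=240):
    # Kernel-ridge / GBLUP-style prediction: yhat = K_test,train (K + lam I)^-1 y.
    tr, te = slice(0, n_train), slice(n_train, None)
    alpha = np.linalg.solve(K[tr, tr] + lam * np.eye(n_train), y[tr])
    return np.corrcoef(K[te, tr] @ alpha, y[te])[0, 1]

print("additive only       :", round(kernel_blup_accuracy(A), 3))
print("additive + dominance:", round(kernel_blup_accuracy(A + D), 3))
```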

  14. Recovery Act: Web-based CO{sub 2} Subsurface Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paolini, Christopher; Castillo, Jose

    2012-11-30

    The Web-based CO₂ Subsurface Modeling project focused primarily on extending an existing text-only, command-line driven, isothermal and isobaric, geochemical reaction-transport simulation code, developed and donated by Sienna Geodynamics, into an easier-to-use Web-based application for simulating long-term storage of CO₂ in geologic reservoirs. The Web-based interface developed through this project, publicly accessible via URL http://symc.sdsu.edu/, enables rapid prototyping of CO₂ injection scenarios and allows students without advanced knowledge of geochemistry to set up a typical sequestration scenario, invoke a simulation, analyze results, and then vary one or more problem parameters and quickly re-run a simulation to answer what-if questions. symc.sdsu.edu has two 12-core AMD Opteron™ 6174 2.20 GHz processors and 16 GB RAM. The Web-based application was used to develop a new computational science course at San Diego State University, COMP 670: Numerical Simulation of CO₂ Sequestration, which was taught during the fall semester of 2012. The purpose of the class was to introduce graduate students to Carbon Capture, Use and Storage (CCUS) through numerical modeling and simulation, and to teach students how to interpret simulation results to make predictions about long-term CO₂ storage capacity in deep brine reservoirs. In addition to the training and education component of the project, significant software development efforts took place. Two computational science doctoral students and one geological science master's student, under the direction of the PIs, extended the original code developed by Sienna Geodynamics, named Sym.8. New capabilities were added to Sym.8 to simulate non-isothermal and non-isobaric flows of charged aqueous solutes in porous media, in addition to incorporating HPC support into the code for execution on many-core XSEDE clusters. A successful outcome of this project was the funding and training of three new

  15. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data with a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, provides great flexibility and easy handling of complex likelihood functions, avoiding the inaccurate statistical inferences that result from misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean Oscillation index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
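
    The delta (two-part) idea is easy to sketch outside the Bayesian setting: one model for presence/absence, one for the size of the nonzero densities, combined into an unconditional expectation. The Python sketch below uses a logistic GLM and a Gaussian model of log density, with a single covariate standing in for the spatial and temporal smooths; data and coefficients are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400

# Synthetic zero-inflated density data: depth as a single covariate standing
# in for the spatial/temporal effects of the full model.
depth = rng.uniform(100, 700, n)
p_present = 1 / (1 + np.exp(-(depth - 400) / 80))       # occurrence process
present = rng.random(n) < p_present
dens = np.where(present, np.exp(0.004 * depth + rng.normal(0, 0.3, n)), 0.0)

X = sm.add_constant(depth)

# Part 1 (delta): Bernoulli model for zero vs. nonzero.
occ = sm.GLM(present.astype(float), X, family=sm.families.Binomial()).fit()

# Part 2: Gaussian model for log density, fitted on the nonzero records only.
pos = dens > 0
size_fit = sm.OLS(np.log(dens[pos]), X[pos]).fit()

# Unconditional expectation combines both parts:
# E[dens] = Pr(present) * E[dens | present] (lognormal mean correction).
p_hat = occ.predict(X)
mu, s2 = size_fit.predict(X), size_fit.mse_resid
expected = p_hat * np.exp(mu + s2 / 2)
print(np.round(expected[:5], 2))
```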

  16. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    Treesearch

    Gretchen G. Moisen; Elizabeth A. Freeman; Jock A. Blackard; Tracey S. Frescino; Niklaus E. Zimmermann; Thomas C. Edwards

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's© See5 and Cubist (for binary and continuous responses,...

  17. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  18. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity.

  19. Eukaryotic major facilitator superfamily transporter modeling based on the prokaryotic GlpT crystal structure.

    PubMed

    Lemieux, M Joanne

    2007-01-01

    The major facilitator superfamily (MFS) of transporters represents the largest family of secondary active transporters and has a diverse range of substrates. With structural information for four MFS transporters, we can see a strong structural commonality, suggesting, as predicted, a common architecture for MFS transporters. The rate of crystal structure determination for MFS transporters is slow, making modeling of both prokaryotic and eukaryotic transporters more enticing. In this review, models of the eukaryotic transporters Glut1, G6PT, OCT1, OCT2 and Pho84, based on the crystal structures of the prokaryotic transporters GlpT and LacY, are discussed. The techniques used to generate the different models are compared. In addition, the validity of these models and the strategy of using prokaryotic crystal structures to model eukaryotic proteins are discussed. For comparison, E. coli GlpT was modeled based on the E. coli LacY structure and compared to the crystal structure of GlpT, demonstrating that experimental evidence is essential for accurate modeling of membrane proteins.

  20. Effect of Alloying Additions on Oxidation Behaviors of Ni-Fe Based Superalloy for Ultra-Supercritical Boiler Applications

    NASA Astrophysics Data System (ADS)

    Lu, Jintao; Yang, Zhen; Zhao, Xinbao; Yan, Jingbo; Gu, Y.

    A new kind of Ni-Fe-based superalloy was recently designed for 750 °C-class A-USC boiler tubes. The oxidation behavior of the designed alloys with various combinations of the anti-oxidation additions Cr, Al and Si was investigated at 750 °C and 850 °C. The results indicated that the oxidation rate of the tested alloys decreased as the sum of the additions increased. Cr addition may greatly reduce the parabolic rate constant when the temperature is raised, but the oxide scale, consisting mainly of NiCr spinel at 750 °C and NiCrMn spinel at 850 °C, was similar when the Cr content was in the range of 20-25 wt.% at the tested temperatures. Al addition, however, was the most effective in reducing the oxidation rates. Internal Al-rich oxide was observed at the scale/metal interface for alloys with high Al content, and it increased as the Al content increased. Only a very small difference between the oxide scales of the Si-added alloys was identified when the Si content varied within 0.02-0.05 wt.%. Based on these results, the optimum combination of anti-oxidation additions, as well as the oxidation mechanisms in the designed Ni-Fe-based superalloy, is discussed.

  1. Model-based position correlation between breast images

    NASA Astrophysics Data System (ADS)

    Georgii, J.; Zöhrer, F.; Hahn, H. K.

    2013-02-01

    Nowadays, breast diagnosis is based on images of different projections and modalities, such that the sensitivity and specificity of the diagnosis can be improved. However, this burdens radiologists with finding corresponding locations in these data sets, which is a time-consuming task, especially since the resolution of the images is increasing and thus more and more data have to be considered in the diagnosis. We therefore aim to support radiologists by automatically synchronizing cursor positions between different views of the breast. Specifically, we present an automatic approach to compute the spatial correlation between MLO and CC mammogram or tomosynthesis projections of the breast. It is based on pre-computed finite element simulations of generic breast models, which are adapted to the patient-specific breast using a contour mapping approach. Our approach is designed to be fully automatic and efficient, such that it can be implemented directly into existing multimodal breast workstations. It is also extendable to support other breast modalities in the future.

  2. Engineering and Modeling Carbon Nanofiller-Based Scaffolds for Tissue Regeneration

    NASA Astrophysics Data System (ADS)

    Al Habis, Nuha Hamad

    Conductive biopolymers are starting to emerge as potential scaffolds of the future. These scaffolds exhibit unique properties such as inherent conductivity and distinctive mechanical and surface properties. Traditionally, a conjugated polymer is used to constitute a conductive network; an alternative method currently being used is nanofillers as additives in the polymer. In this dissertation, we fabricated an intelligent scaffold for use in tissue engineering applications. The main idea was to enhance the mechanical and electrical properties and the cell growth of scaffolds by using distinct types of nanofillers such as graphene, carbon nanofiber and carbon black. We identified the optimal concentrations of nano-additives in both fibrous and film scaffolds to obtain the highest mechanical and electrical properties without neglecting either of them. Lastly, we investigated the performance of these scaffolds in cell biology studies. To accomplish these tasks, we first studied the mechanical properties of the scaffold as a function of the morphology, concentration and variety of carbon nanofillers. Results showed a gradual increase of the modulus and the fracture strength when using carbon black, carbon nanofiber and graphene, due to the small and strong carbon-to-carbon bonds and the length of the interlayer spacing. Moreover, regardless of the fabrication method, there was an increase in mechanical properties as the concentration of nanofillers increased, until a threshold of 7 wt% was reached for the nanofiller film scaffold and 1 wt% for the fibrous scaffold. Experimental results for carbon black showed good agreement with data obtained using numerical approaches and analytical models, especially in the case of lower carbon black fractions. Second, we examined the electrical properties of the scaffolds as a function of the concentration and geometry of the carbon nanofillers in the polymer matrix, using experimental and numerical simulation approaches. The

  3. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
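
    A minimal version of situation-dependent blending can be sketched with scikit-learn: feed the individual model forecasts together with atmospheric state features into a nonlinear regressor and compare against a simple equally weighted ensemble. Everything below is synthetic and illustrative; it mirrors the idea, not the authors' system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 2000

# Synthetic stand-ins: forecasts from three meteorological models plus
# atmospheric state features (e.g., cloud cover, humidity) that define the
# "weather situation"; the target is observed solar irradiance.
truth = rng.uniform(0, 1000, n)
cloud = rng.uniform(0, 1, n)
humid = rng.uniform(0, 1, n)
# Each model's error depends on the situation (cloudy vs. clear).
f1 = truth + rng.normal(0, 30 + 120 * cloud, n)
f2 = truth + rng.normal(0, 80 * (1 - cloud) + 40, n)
f3 = truth + rng.normal(0, 60 + 60 * humid, n)

X = np.column_stack([f1, f2, f3, cloud, humid])  # forecasts + state params
train, test = slice(0, 1500), slice(1500, None)

blend = GradientBoostingRegressor().fit(X[train], truth[train])
rmse = lambda p: np.sqrt(np.mean((p - truth[test]) ** 2))
print("equal-weight ensemble RMSE:", rmse((f1 + f2 + f3)[test] / 3))
print("situation-aware blend RMSE:", rmse(blend.predict(X[test])))
```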

  4. Patient-specific in vitro models for hemodynamic analysis of congenital heart disease - Additive manufacturing approach.

    PubMed

    Medero, Rafael; García-Rodríguez, Sylvana; François, Christopher J; Roldán-Alzate, Alejandro

    2017-03-21

    Non-invasive hemodynamic assessment of total cavopulmonary connection (TCPC) is challenging due to the complex anatomy. Additive manufacturing (AM) is a suitable alternative for creating patient-specific in vitro models for flow measurements using four-dimensional (4D) Flow MRI. These in vitro systems have the potential to serve as validation for computational fluid dynamics (CFD), simulating different physiological conditions. This study investigated three different AM technologies, stereolithography (SLA), selective laser sintering (SLS) and fused deposition modeling (FDM), to determine differences in hemodynamics when measuring flow using 4D Flow MRI. The models were created using patient-specific MRI data from an extracardiac TCPC. These models were connected to a perfusion pump circulating water at three different flow rates. Data was processed for visualization and quantification of velocity, flow distribution, vorticity and kinetic energy. These results were compared between each model. In addition, the flow distribution obtained in vitro was compared to in vivo. The results showed significant difference in velocities measured at the outlets of the models that required internal support material when printing. Furthermore, an ultrasound flow sensor was used to validate flow measurements at the inlets and outlets of the in vitro models. These results were highly correlated to those measured with 4D Flow MRI. This study showed that commercially available AM technologies can be used to create patient-specific vascular models for in vitro hemodynamic studies at reasonable costs. However, technologies that do not require internal supports during manufacturing allow smoother internal surfaces, which makes them better suited for flow analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  6. Reciprocal Peer Assessment as a Learning Tool for Secondary School Students in Modeling-Based Learning

    ERIC Educational Resources Information Center

    Tsivitanidou, Olia E.; Constantinou, Costas P.; Labudde, Peter; Rönnebeck, Silke; Ropohl, Mathias

    2018-01-01

    The aim of this study was to investigate how reciprocal peer assessment in modeling-based learning can serve as a learning tool for secondary school learners in a physics course. The participants were 22 upper secondary school students from a gymnasium in Switzerland. They were asked to model additive and subtractive color mixing in groups of two,…

  7. The effect of tailor-made additives on crystal growth of methyl paraben: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Cai, Zhihui; Liu, Yong; Song, Yang; Guan, Guoqiang; Jiang, Yanbin

    2017-03-01

    In this study, methyl paraben (MP) was selected as the model component, and acetaminophen (APAP), p-methyl acetanilide (PMAA) and acetanilide (ACET), which share the similar molecular structure as MP, were selected as the three tailor-made additives to study the effect of tailor-made additives on the crystal growth of MP. HPLC results indicated that the MP crystals induced by the three additives contained MP only. Photographs of the single crystals prepared indicated that the morphology of the MP crystals was greatly changed by the additives, but PXRD and single crystal diffraction results illustrated that the MP crystals were the same polymorph only with different crystal habits, and no new crystal form was found compared with other references. To investigate the effect of the additives on the crystal growth, the interaction between additives and facets was discussed in detail using the DFT methods and MD simulations. The results showed that APAP, PMAA and ACET would be selectively adsorbed on the growth surfaces of the crystal facets, which induced the change in MP crystal habits.

  8. Hydration of Portland cement with additions of calcium sulfoaluminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Saout, Gwenn, E-mail: gwenn.le-saout@mines-ales.fr; Lothenbach, Barbara; Hori, Akihiro

    2013-01-15

    The effect of mineral additions based on calcium aluminates on the hydration mechanism of ordinary Portland cement (OPC) was investigated using isothermal calorimetry, thermal analysis, X-ray diffraction, scanning electron microscopy, solid state nuclear magnetic resonance and pore solution analysis. Results show that the addition of a calcium sulfoaluminate cement (CSA) to the OPC does not affect the hydration mechanism of alite but controls the aluminate dissolution. In the second blend investigated, a rapid setting cement, the amorphous calcium aluminate reacts very quickly to form ettringite. The release of aluminum ions strongly retards the hydration of alite, but the C-S-H has a similar composition to that in OPC, with no additional Al-to-Si substitution. As in CSA-OPC, the aluminate hydration is controlled by the availability of sulfates. The coupling of thermodynamic modeling with kinetic equations predicts the amounts of hydrates and the pore solution compositions as a function of time and validates the model in these systems.

  9. Improving risk assessment of color additives in medical device polymers.

    PubMed

    Chandrasekar, Vaishnavi; Janes, Dustin W; Forrey, Christopher; Saylor, David M; Bajaj, Akhil; Duncan, Timothy V; Zheng, Jiwen; Riaz Ahmed, Kausar B; Casey, Brendan J

    2018-01-01

    Many polymeric medical device materials contain color additives which could lead to adverse health effects. The potential health risk of color additives may be assessed by comparing the amount of color additive released over time to levels deemed to be safe based on available toxicity data. We propose a conservative model for exposure that requires only the diffusion coefficient of the additive in the polymer matrix, D, to be specified. The model is applied here using a model polymer (poly(ether-block-amide), PEBAX 2533) and color additive (quinizarin blue) system. Sorption experiments performed on neat PEBAX in an aqueous dispersion of quinizarin blue (QB) yielded a diffusivity D = 4.8 × 10⁻¹⁰ cm² s⁻¹ and solubility S = 0.32 wt%. On the basis of these measurements, we validated the model by comparing predictions to the leaching profile of QB from a PEBAX matrix into physiologically representative media. Toxicity data are not available to estimate a safe level of exposure to QB; as a result, we used a Threshold of Toxicological Concern (TTC) value for QB of 90 µg/adult/day. Because only 30% of the QB is released in the first day of leaching for our film thickness and calculated D, we demonstrate that a device may contain significantly more color additive than the TTC value without giving rise to a toxicological concern. The findings suggest that an initial screening-level risk assessment of color additives and other potentially toxic compounds found in device polymers can be improved. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 310-319, 2018.
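
    The conservative exposure logic can be reproduced from the standard Fickian series solution for release from a plane sheet. In the sketch below, D is the measured value quoted above, while the film thickness is an assumed placeholder (the abstract does not give it); with these numbers, roughly 30% of the additive is released on day one, consistent with the text.

```python
import numpy as np

def fraction_released(D, L, t, n_terms=200):
    """Fraction of additive released from a polymer sheet of thickness L (cm)
    with both faces exposed, from the standard Fickian series solution:
    M_t/M_inf = 1 - sum 8/((2n+1)^2 pi^2) exp(-D (2n+1)^2 pi^2 t / L^2)."""
    n = np.arange(n_terms)
    k = (2 * n + 1) ** 2 * np.pi ** 2
    return 1 - np.sum(8.0 / k * np.exp(-D * k * t / L ** 2))

D = 4.8e-10          # cm^2/s, measured for quinizarin blue in PEBAX (abstract)
L = 0.05             # cm; an assumed film thickness, not given in the abstract
t_day = 86400.0      # one day in seconds

frac = fraction_released(D, L, t_day)
print(f"fraction released in day 1: {frac:.2f}")   # ~0.3, as in the text

# Conservative sizing: additive load whose day-1 release equals the TTC.
ttc = 90e-6          # g/day threshold of toxicological concern cited for QB
m_total_max = ttc / frac
print(f"additive load giving TTC-level day-1 dose: {m_total_max * 1e6:.0f} ug")
```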

  10. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    PubMed

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, the model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction error) and model-based reward-related input. Using the concept of a reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turn to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. By means of the default network, the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation, by compiling the stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based reward expectations into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved.
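
    For readers who want the model-free signal in concrete form, the sketch below implements tabular TD(0) on a five-state chain: the reward prediction error delta drives all learning, mirroring the scalar signal the abstract attributes to the model-free system. The environment and parameters are, of course, illustrative.

```python
import numpy as np

# Minimal model-free TD(0) sketch: the scalar reward prediction error
# delta = r + gamma*V(s') - V(s) is the quantity the abstract assigns to the
# model-free (dopaminergic) signal integrated by the ventral striatum.
gamma, alpha = 0.9, 0.1
V = np.zeros(5)                       # values of 5 states in a chain
for episode in range(500):
    s = 0
    while s < 4:                      # state 4 is terminal and rewarded
        s_next = s + 1
        r = 1.0 if s_next == 4 else 0.0
        delta = r + gamma * (V[s_next] if s_next < 4 else 0.0) - V[s]
        V[s] += alpha * delta         # learning only when expectations are violated
        s = s_next
print(np.round(V, 3))   # approaches [gamma^3, gamma^2, gamma, 1, 0]
```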

  11. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  12. Online decision support based on modeling with the aim of increased irrigation efficiency

    NASA Astrophysics Data System (ADS)

    Dövényi-Nagy, Tamás; Bakó, Károly; Molnár, Krisztina; Rácz, Csaba; Vasvári, Gyula; Nagy, János; Dobos, Attila

    2015-04-01

    The significant changes in the structure of ownership and control of irrigation infrastructure in the past decades resulted in a decrease of the total irrigable and irrigated area (Szilárd, 1999). In this paper, the development of a model-based online service is described whose aim is to aid reasonable irrigation practice and increase water use efficiency. In order to establish a scientific background for irrigation, an agrometeorological station network has been built up by the Agrometeorological and Agroecological Monitoring Centre. A website has been launched in order to provide direct access for local agricultural producers to both the measured weather parameters and the results of model-based calculations. The public site provides information for general use; registered partners get a handy model-based toolkit for decision support at the plot level concerning irrigation, plant protection or frost forecasting. The agrometeorological reference station network was established in recent years by the Agrometeorological and Agroecological Monitoring Centre and is distributed to cover most of the irrigated cropland areas of Hungary. Spatially, the stations have been deployed mainly in Eastern Hungary, where irrigation infrastructure is concentrated. The meteorological stations' locations have been carefully chosen to represent their environment in terms of soil, climatic and topographic factors, thereby assuring relevant and up-to-date input data for the models. The measured parameters range from classic meteorological data (air temperature, relative humidity, solar irradiation, wind speed, etc.) to specific data which are not available from other services in the region, such as soil temperature, soil water content at multiple depths and leaf wetness. In addition to the basic grid of reference stations, specific stations under irrigated conditions have been deployed to calibrate and validate the models. A specific modeling framework (MetAgro) has been developed

  13. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.
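
    The core of the variable-capacitance idea can be sketched in a few lines: with C(v) piecewise linear in voltage, the stored charge is the integral of C(v) dv, and state of charge follows from the ratio of stored charge to the charge at full voltage. The breakpoint and slopes below are invented placeholders, not the paper's identified parameters.

```python
import numpy as np

def capacitance(v):
    # Piecewise linear C(v) in farads; breakpoint and slopes are assumptions.
    return np.where(v < 1.35, 3000.0 + 400.0 * v, 3540.0 + 650.0 * (v - 1.35))

def stored_charge(v, v_min=0.0, n=2000):
    # Q(v) = integral of C(u) du from v_min to v (trapezoidal quadrature).
    u = np.linspace(v_min, v, n)
    return np.trapz(capacitance(u), u)

v_max, v_now = 2.7, 1.8
soc = stored_charge(v_now) / stored_charge(v_max)   # charge-based SOC
print(f"SOC at {v_now} V: {soc:.1%}")
```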

  14. The development, evaluation, and application of O3 flux and flux-response models for additional agricultural crops

    Treesearch

    L. D. Emberson; W. J. Massman; P. Buker; G. Soja; I. Van De Sand; G. Mills; C. Jacobs

    2006-01-01

    Currently, stomatal O3 flux and flux-response models only exist for wheat and potato (LRTAP Convention, 2004), as such there is a need to extend these models to include additional crop types. The possibility of establishing robust stomatal flux models for five agricultural crops (tomato, grapevine, sugar beet, maize and sunflower) was investigated. These crops were...

  15. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The
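
    The error-injection experiment can be mimicked on synthetic data: shift each inventory point by a random bearing and distance, refit the susceptibility model, and track the validation AUC. The sketch below does this with a single slope-like predictor and logistic regression; the terrain, sample size, and error distribution are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic terrain: a slope-like raster drives landslide susceptibility.
n = 3000
xy = rng.uniform(0, 1000, (n, 2))                    # candidate locations (m)
slope = lambda p: 20 + 10 * np.sin(p[:, 0] / 150) + 8 * np.cos(p[:, 1] / 200)
p_slide = 1 / (1 + np.exp(-(slope(xy) - 25) / 3))
y = (rng.random(n) < p_slide).astype(int)

def auc_with_positional_error(mean_shift_m):
    # Shift each "inventory point" by a random bearing and ~mean_shift_m
    # distance, mimicking the artificial error introduction in the study.
    theta = rng.uniform(0, 2 * np.pi, n)
    d = rng.exponential(mean_shift_m, n) if mean_shift_m > 0 else np.zeros(n)
    shifted = xy + np.column_stack([d * np.cos(theta), d * np.sin(theta)])
    X = slope(shifted).reshape(-1, 1)                # predictor at shifted point
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(Xtr, ytr)
    return roc_auc_score(yte, model.predict_proba(Xte)[:, 1])

for shift in [0, 5, 10, 20, 50, 120]:                # mean errors from the study
    print(f"mean positional error {shift:>3} m -> AUC {auc_with_positional_error(shift):.3f}")
```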

  16. Ionic micelles and aromatic additives: a closer look at the molecular packing parameter.

    PubMed

    Lutz-Bueno, Viviane; Isabettini, Stéphane; Walker, Franziska; Kuster, Simon; Liebi, Marianne; Fischer, Peter

    2017-08-16

    Wormlike micellar aggregates formed from the mixture of ionic surfactants with aromatic additives result in solutions with impressive viscoelastic properties. These properties are of high interest for numerous industrial applications and are often used as model systems for soft matter physics. However, robust and simple models for tailoring the viscoelastic response of the solution based on the molecular structure of the employed additive are required to fully exploit the potential of these systems. We address this shortcoming with a modified packing parameter based model, considering the additive-surfactant pair. The role of charge neutralization on anisotropic micellar growth was investigated with derivatives of sodium salicylate. The impact of the additives on the morphology of the micellar aggregates is explained from the molecular level to the macroscopic viscoelasticity. Changes in the micelle's volume, headgroup area and additive structure are explored to redefine the packing parameter. Uncharged additives penetrated deeper into the hydrophobic region of the micelle, whilst charged additives remained trapped in the polar region, as revealed by a combination of 1 H-NMR, SAXS and rheological measurements. A deeper penetration of the additives densified the hydrophobic core of the micelle and induced anisotropic growth by increasing the effective volume of the additive-surfactant pair. This phenomenon largely influenced the viscosity of the solutions. Partially penetrating additives reduced the electrostatic repulsions between surfactant headgroups and neighboring micelles. The resulting increased network density governed the elasticity of the solutions. Considering a packing parameter composed of the additive-surfactant pair proved to be a facile means of engineering the viscoelastic response of surfactant solutions. The self-assembly of the wormlike micellar aggregates could be tailored to desired morphologies resulting in a specific and predictable
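
    For reference, the classical molecular packing parameter that the authors modify is given below, with v the hydrophobic chain volume, a₀ the effective headgroup area and ℓ_c the critical chain length; the paper's redefinition uses the effective volume and headgroup area of the additive-surfactant pair.

```latex
p \;=\; \frac{v}{a_0\,\ell_c},
\qquad
\begin{cases}
p \le \tfrac{1}{3} & \text{spherical micelles,}\\[2pt]
\tfrac{1}{3} < p \le \tfrac{1}{2} & \text{cylindrical (wormlike) micelles,}\\[2pt]
\tfrac{1}{2} < p \le 1 & \text{flexible bilayers and vesicles.}
\end{cases}
```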

  17. Automated visualization of rule-based models

    PubMed Central

    Tapia, Jose-Juan; Faeder, James R.

    2017-01-01

    Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816

  18. Firm performance model in small and medium enterprises (SMEs) based on learning orientation and innovation

    NASA Astrophysics Data System (ADS)

    Lestari, E. R.; Ardianti, F. L.; Rachmawati, L.

    2018-03-01

    This study investigated the relationship between learning orientation, innovation, and firm performance. A conceptual model and hypothesis were empirically examined using structural equation modelling. The study involved a questionnaire-based survey of owners of small and medium enterprises (SMEs) operating in Batu City, Indonesia. The results showed that both variables of learning orientation and innovation effect positively on firm performance. Additionally, learning orientation has positive effect innovation. This study has implication for SMEs aiming at increasing their firm performance based on learning orientation and innovation capability.

  19. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP)- and Nonlinear Programming (NLP)-based formulations, to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space to be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally more demanding than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close, so we recommend the SDP formulation in practice. PMID:26949279

  20. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP)- and Nonlinear Programming (NLP)-based formulations, to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space to be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally more demanding than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close, so we recommend the SDP formulation in practice.

  1. Niche modelling of marsh plants based on occurrence and abundance data.

    PubMed

    Lou, Yanjing; Gao, Chuanyu; Pan, Yanwen; Xue, Zhenshan; Liu, Ying; Tang, Zhanhui; Jiang, Ming; Lu, Xianguo; Rydin, Håkan

    2018-03-01

    Information on species' responses (optima or critical limits along environmental gradients) is key to understanding ecological questions and to designing management plans. A large number of plots (762) from 70 transects at 13 wetland sites in Northeast China were sampled along a flooding gradient from marsh to wet meadow. Species responses (abundance and occurrence) to flooding were modelled with Generalized Additive Models for 21 dominant plant species. We found that 20 of the 21 species showed a significant response to flooding in both the occurrence and abundance models, and four types of response were found: monotonically increasing, monotonically decreasing, skewed unimodal and symmetric unimodal. The species with monotonically increasing responses had the deepest flooding optima and widest niche widths, followed by those with unimodal curves, while the monotonically decreasing ones had the smallest values. The optima and niche widths (whether based on occurrence or abundance models) both correlated significantly with frequency, but not with mean abundance. Abundance models outperformed occurrence models based on goodness of fit. The abundance models predicted a rather sharp shift from dominance of helophytes (Carex pseudo-curaica and C. lasiocarpa) to wet meadow species (Calamagrostis angustifolia and Carex appendiculata) if water levels drop from about 10 cm above the soil surface to below the surface. The optima and niche widths defined from the abundance models can be applied to better guide restoration management. Given the time required to collect abundance data, an efficient strategy could be to monitor occurrence in many plots and abundance in a subset of these. Copyright © 2017 Elsevier B.V. All rights reserved.
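
    As a toy illustration of the modelling step, the sketch below fits a Generalized Additive Model to synthetic presence/absence data along a flooding gradient and reads off the niche optimum. It assumes the pygam package and invents all data, so it mirrors the abstract's approach only in outline.

    ```python
    # Sketch: fit a GAM to synthetic presence/absence data along a flooding
    # gradient and locate the niche optimum. Assumes the pygam package; the
    # unimodal "true" response and all data are invented.
    import numpy as np
    from pygam import LogisticGAM, s

    rng = np.random.default_rng(0)
    water_depth = rng.uniform(-30, 40, 500)                    # cm above soil surface
    p_true = np.exp(-0.5 * ((water_depth - 10.0) / 8.0) ** 2)  # unimodal response
    occurrence = (rng.random(500) < p_true).astype(int)        # presence/absence

    gam = LogisticGAM(s(0)).fit(water_depth.reshape(-1, 1), occurrence)

    grid = np.linspace(-30, 40, 200)
    prob = gam.predict_mu(grid.reshape(-1, 1))                 # fitted response curve
    print(f"estimated flooding optimum: {grid[np.argmax(prob)]:.1f} cm")
    ```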

  2. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
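
    To make the genetic-algorithm idea concrete, here is a deliberately simplified sketch: a binary-chromosome GA that searches over covariate subsets of a linear model with BIC as the fitness, standing in for the far larger model-feature space of population PK/PD selection. The data and GA settings are invented for illustration.

    ```python
    # Toy genetic algorithm for covariate selection: binary chromosomes encode
    # which covariates enter a linear model; fitness is BIC from least squares.
    # Data and GA settings are synthetic assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # truth uses cols 0, 3

    def bic(mask: np.ndarray) -> float:
        k = int(mask.sum())
        if k == 0:
            return np.inf
        Xs = X[:, mask.astype(bool)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        return n * np.log(rss / n) + k * np.log(n)

    pop = rng.integers(0, 2, size=(30, p))                   # random initial models
    for _ in range(40):
        fitness = np.array([bic(m) for m in pop])
        parents = pop[np.argsort(fitness)[:10]]              # truncation selection
        cuts = rng.integers(1, p, size=30)                   # one-point crossover
        kids = np.array([np.r_[parents[rng.integers(10), :c],
                               parents[rng.integers(10), c:]] for c in cuts])
        flip = rng.random(kids.shape) < 0.05                 # mutation
        pop = np.where(flip, 1 - kids, kids)

    best = pop[np.argmin([bic(m) for m in pop])]
    print("selected covariates:", np.flatnonzero(best))      # expect [0 3]
    ```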

  3. A Micro-Level Data-Calibrated Agent-Based Model: The Synergy between Microsimulation and Agent-Based Modeling.

    PubMed

    Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee

    2018-01-01

    Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and present a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrate how ALife techniques can be used by researchers in relation to social issues and policies.

  4. Implementation of the Realized Genomic Relationship Matrix to Open-Pollinated White Spruce Family Testing for Disentangling Additive from Nonadditive Genetic Effects

    PubMed Central

    Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.

    2016-01-01

    Open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses, as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned, as the assumption of “half-sibling” in OP families may often be violated. We compared pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of the marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647

  5. Knowledge-Based Environmental Context Modeling

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient

  6. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    PubMed

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
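
    The physically-based data model named in the abstract, the Ogata-Banks solution, is compact enough to state directly. The sketch below evaluates it for a continuous source at x = 0 under assumed (not site-specific) parameter values.

    ```python
    # Sketch of the Ogata-Banks (1961) solution for 1-D advection-dispersion
    # under steady flow with a continuous source c0 at x = 0:
    #   c(x,t) = (c0/2) * [erfc((x - v t)/(2 sqrt(D t)))
    #                      + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    # Parameter values are illustrative assumptions, not site data.
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D, c0):
        a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
        b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
        return 0.5 * c0 * (a + b)

    x = np.linspace(0.1, 50.0, 6)          # observation distances (m), assumed
    print(ogata_banks(x, t=30.0, v=0.5, D=1.0, c0=100.0))
    ```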

  7. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.

  8. Micromechanics based phenomenological damage modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muju, S.; Anderson, P.M.; Popelar, C.H.

    A model is developed for the study of process zone effects on dominant cracks. The model proposed here is intended to bridge the gap between micromechanics-based and phenomenological models for the class of problems involving microcracking, transforming inclusions, etc. It is based on a representation of localized eigenstrains using dislocation dipoles. The eigenstrain (fitting strain) is represented as the strength (Burgers vector) of the dipole, which obeys a certain phenomenological constitutive relation.

  9. Design and modeling of an additive manufactured thin shell for x-ray astronomy

    NASA Astrophysics Data System (ADS)

    Feldman, Charlotte; Atkins, Carolyn; Brooks, David; Watson, Stephen; Cochrane, William; Roulet, Melanie; Willingale, Richard; Doel, Peter

    2017-09-01

    Future X-ray astronomy missions require light-weight thin shells to provide large collecting areas within the weight limits of launch vehicles, whilst still delivering angular resolutions close to that of Chandra (0.5 arc seconds). Additive manufacturing (AM), also known as 3D printing, is a well-established technology with the ability to construct or 'print' intricate support structures, which can be both integral and light-weight, and is therefore a candidate technique for producing shells for space-based X-ray telescopes. The work described here is a feasibility study into this technology for precision X-ray optics for astronomy and has been sponsored by the UK Space Agency's National Space Technology Programme. The goal of the project is to use a series of test samples to trial different materials and processes with the aim of developing a viable path for the production of an X-ray reflecting prototype for astronomical applications. The initial design of an AM prototype X-ray shell is presented with ray-trace modelling and analysis of the X-ray performance. The polishing process may cause print-through from the light-weight support structure on to the reflecting surface. Investigations into the effect of the print-through on the X-ray performance of the shell are also presented.

  10. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial-based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical studies of cell development related diseases.
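
    The degree-comparison experiment can be mimicked in a few lines. The sketch below cross-validates logistic models built on polynomial feature expansions of degrees 1-3; the five synthetic "gene" features and binary fates are placeholders, not the paper's pancreatic-cell data.

    ```python
    # Sketch: compare classifiers on polynomial feature expansions of degrees
    # 1-3 with 10-fold cross-validation. Data are synthetic placeholders.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 5))                       # five selected "genes"
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)) > 0

    for degree in (1, 2, 3):
        model = make_pipeline(PolynomialFeatures(degree),
                              StandardScaler(),
                              LogisticRegression(max_iter=1000))
        acc = cross_val_score(model, X, y, cv=10).mean()
        print(f"degree {degree}: mean 10-fold accuracy = {acc:.3f}")
    ```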

  11. Estimating interaction on an additive scale between continuous determinants in a logistic regression model.

    PubMed

    Knol, Mirjam J; van der Tweel, Ingeborg; Grobbee, Diederick E; Numans, Mattijs E; Geerlings, Mirjam I

    2007-10-01

    To determine the presence of interaction in epidemiologic research, typically a product term is added to the regression model. In linear regression, the regression coefficient of the product term reflects interaction as departure from additivity. However, in logistic regression it refers to interaction as departure from multiplicativity. Rothman has argued that interaction estimated as departure from additivity better reflects biologic interaction. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on dichotomous determinants. The objective of the present study was to provide methods to estimate interaction between continuous determinants and to illustrate these methods with a clinical example. From the existing literature we derived the formulas to quantify interaction as departure from additivity between one continuous and one dichotomous determinant and between two continuous determinants using logistic regression. Bootstrapping was used to calculate the corresponding confidence intervals. To illustrate the theory with an empirical example, data from the Utrecht Health Project were used, with age and body mass index as risk factors for elevated diastolic blood pressure. The methods and formulas presented in this article are intended to assist epidemiologists in calculating interaction on an additive scale between two variables on a certain outcome. The proposed methods are included in a spreadsheet which is freely available at: http://www.juliuscenter.nl/additive-interaction.xls.
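
    For two continuous determinants with a product term, additive-scale interaction is commonly summarized by the relative excess risk due to interaction, RERI = OR11 - OR10 - OR01 + 1. The sketch below estimates it from a fitted logistic model for assumed increments in age and BMI; the data, variable names, and reference values are all invented for illustration, and in practice bootstrap confidence intervals would be added as the abstract describes.

    ```python
    # Sketch: additive-scale interaction (RERI) between two continuous
    # determinants from a logistic model with a product term. All data and
    # reference values are invented; bootstrap CIs are omitted for brevity.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2000
    age = rng.normal(50, 10, n)
    bmi = rng.normal(26, 4, n)
    logit = -12 + 0.08 * age + 0.15 * bmi + 0.002 * age * bmi
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

    X = sm.add_constant(np.column_stack([age, bmi, age * bmi]))
    b1, b2, b3 = sm.Logit(y, X).fit(disp=False).params[1:]

    # ORs for increases of da years and db BMI units from reference point (a0, b0)
    a0, b0, da, db = 50.0, 26.0, 10.0, 5.0
    or11 = np.exp(b1 * da + b2 * db + b3 * ((a0 + da) * (b0 + db) - a0 * b0))
    or10 = np.exp(b1 * da + b3 * da * b0)    # age increase only
    or01 = np.exp(b2 * db + b3 * a0 * db)    # BMI increase only
    print(f"RERI = {or11 - or10 - or01 + 1:.2f}  (> 0 suggests additive interaction)")
    ```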

  12. Entropic elasticity based coarse-grained model of lipid membranes

    NASA Astrophysics Data System (ADS)

    Feng, Shuo; Hu, Yucai; Liang, Haiyi

    2018-04-01

    Various models for lipid bilayer membranes have been presented to investigate their morphologies. Among them, highly coarse-grained models, where the membrane is represented by a single layer of particles, are computationally efficient and of practical importance for simulating membrane dynamics at the microscopic scale. In these models, soft potentials between particle pairs are used to maintain the fluidity of membranes, but the underlying mechanism of the softening requires further clarification. We have analyzed the membrane area decrease due to thermal fluctuations, and the results demonstrate that the intraparticle part of entropic elasticity is responsible for the softening of the potential. Based on the stretching response of the membrane, a bottom-up model is developed with an entropic effect explicitly involved. The model reproduces several essential properties of the lipid membrane, including the fluid state and a plateau in the stretching curve. In addition, the area compressibility modulus, bending rigidity, and spontaneous curvature display linear dependence on model parameters. As a demonstration, we have investigated the closure and morphology evolution of membrane systems driven by spontaneous curvature, and vesicle shapes observed experimentally are faithfully reproduced.

  13. Empirical validation of an agent-based model of wood markets in Switzerland

    PubMed Central

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  14. Mechanical properties of multifunctional structure with viscoelastic components based on FVE model

    NASA Astrophysics Data System (ADS)

    Hao, Dong; Zhang, Lin; Yu, Jing; Mao, Daiyong

    2018-02-01

    Based on the models of Lion and Kardelky (2004) and Hofer and Lion (2009), a finite viscoelastic (FVE) constitutive model, considering predeformation-, frequency- and amplitude-dependent properties, was proposed in our earlier paper [1]. The FVE model is applied to investigating the dynamic characteristics of a multifunctional structure with viscoelastic components. Combining the FVE model with finite element theory, the dynamic model of the multifunctional structure can be obtained. Additionally, parametric identification and experimental verification are given via frequency-sweep tests. The results show that the computational data agree well with the experimental data. The FVE model successfully expresses the dynamic characteristics of the viscoelastic materials used in the multifunctional structure. The multifunctional structure technology has been verified by in-orbit experiments.

  15. Biobased lubricant additives

    USDA-ARS?s Scientific Manuscript database

    Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...

  16. Food additives.

    PubMed

    Berglund, F

    1978-01-01

    The use of additives in food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI

  17. [Development method of healthcare information system integration based on business collaboration model].

    PubMed

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration is proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into an integration configuration. In addition, an integration case in a radiology scenario was used to verify the method.

  18. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species-specific and assemblage-based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species.

  19. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model is assured; therefore, the consistency and integrity of the various specification documents is ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how needs are being addressed by international standards writing teams.

  20. Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters

    NASA Astrophysics Data System (ADS)

    Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.

    2016-12-01

    Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. Spatial resolution and temporal averaging of environmental data layers is a key aspect of model construction. The utility of surfaces contemporaneous to animal movement (e.g. daily, weekly), versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modeled (Regional Oceanographic Modeling System, ROMS) and remotely-sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3 km to 1 degree, daily to climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the 'best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced built-in environmental preferences. These findings have important implications for conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.
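
    The simulation idea translates into very little code. Below is a toy correlated random walk with a built-in environmental preference: the turning-angle variance is high where a synthetic suitability field is high (area-restricted search) and low elsewhere (near-ballistic transit). The field, parameters, and preference rule are invented; the study's actual simulations also used hidden-Markov-type movement models.

    ```python
    # Toy correlated random walk with a built-in environmental preference.
    # All quantities are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(7)

    def suitability(x, y):
        """Toy environmental field peaking at (50, 50)."""
        return np.exp(-((x - 50.0) ** 2 + (y - 50.0) ** 2) / 800.0)

    x, y, heading = 0.0, 0.0, 0.0
    track = []
    for _ in range(500):
        s = suitability(x, y)
        heading += rng.normal(scale=np.pi * (0.2 + 0.8 * s))   # turn more in habitat
        x += np.cos(heading)
        y += np.sin(heading)
        track.append((x, y))

    track = np.array(track)
    print(f"mean distance from habitat centre: "
          f"{np.hypot(track[:, 0] - 50.0, track[:, 1] - 50.0).mean():.1f}")
    ```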

  1. Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters

    NASA Astrophysics Data System (ADS)

    Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.

    2016-02-01

    Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. Spatial resolution and temporal averaging of environmental data layers is a key aspect of model construction. The utility of surfaces contemporaneous to animal movement (e.g. daily, weekly), versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modeled (Regional Oceanographic Modeling System, ROMS) and remotely-sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3 km to 1 degree, daily to climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the 'best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced built-in environmental preferences. These findings have important implications for conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.

  2. SAR image filtering based on the heavy-tailed Rayleigh model.

    PubMed

    Achim, Alin; Kuruoğlu, Ercan E; Zerubia, Josiane

    2006-09-01

    Synthetic aperture radar (SAR) images are inherently affected by a signal-dependent noise known as speckle, which is due to the radar wave coherence. In this paper, we propose a novel adaptive despeckling filter and derive a maximum a posteriori (MAP) estimator for the radar cross section (RCS). We first employ a logarithmic transformation to change the multiplicative speckle into additive noise. We model the RCS using the recently introduced heavy-tailed Rayleigh density function, which was derived based on the assumption that the real and imaginary parts of the received complex signal are best described by the alpha-stable family of distributions. We estimate model parameters from noisy observations by means of second-kind statistics theory, which relies on the Mellin transform. Finally, we compare the proposed algorithm with several classical speckle filters applied to actual SAR images. Experimental results show that the homomorphic MAP filter based on the heavy-tailed Rayleigh prior for the RCS is among the best for speckle removal.
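
    The homomorphic step the abstract describes is easy to sketch: take logs so multiplicative speckle becomes additive, denoise in the log domain, and exponentiate back. In the toy below a Gaussian filter stands in for the paper's heavy-tailed Rayleigh MAP estimator, which is considerably more involved, and the scene and speckle statistics are invented.

    ```python
    # Homomorphic despeckling sketch: log turns multiplicative speckle into
    # additive noise; a stand-in denoiser runs in the log domain.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(4)
    rcs = np.ones((128, 128))
    rcs[32:96, 32:96] = 4.0                                    # toy radar cross section
    speckle = rng.gamma(shape=1.0, scale=1.0, size=rcs.shape)  # 1-look intensity speckle
    observed = rcs * speckle                                   # multiplicative noise

    log_img = np.log(observed + 1e-12)                         # multiplicative -> additive
    estimate = np.exp(gaussian_filter(log_img, sigma=2.0))     # denoise, map back
    print(f"noisy MSE: {np.mean((observed - rcs) ** 2):.2f}, "
          f"filtered MSE: {np.mean((estimate - rcs) ** 2):.2f}")
    ```

    Note that exponentiating a smoothed log image biases the result toward the geometric mean, one reason principled MAP estimators like the paper's are preferred over this naive stand-in.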

  3. Ionic polymer-metal composite torsional sensor: physics-based modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Aidi Sharif, Montassar; Lei, Hong; Khalid Al-Rubaiai, Mohammed; Tan, Xiaobo

    2018-07-01

    Ionic polymer-metal composites (IPMCs) have intrinsic sensing and actuation properties. Typical IPMC sensors are in the shape of beams and only respond to stimuli acting along beam-bending directions. Rod or tube-shaped IPMCs have been explored as omnidirectional bending actuators or sensors. In this paper, physics-based modeling is studied for a tubular IPMC sensor under pure torsional stimulus. The Poisson–Nernst–Planck model is used to describe the fundamental physics within the IPMC, where it is hypothesized that the anion concentration is coupled to the sum of shear strains induced by the torsional stimulus. Finite element simulation is conducted to solve for the torsional sensing response, where some of the key parameters are identified based on experimental measurements using an artificial neural network. Additional experimental results suggest that the proposed model is able to capture the torsional sensing dynamics for different amplitudes and rates of the torsional stimulus.

  4. Health behavior change in advance care planning: an agent-based model.

    PubMed

    Ernecoff, Natalie C; Keane, Christopher R; Albert, Steven M

    2016-02-29

    A practical and ethical challenge in advance care planning research is controlling and intervening on human behavior. Additionally, observing dynamic changes in advance care planning (ACP) behavior proves difficult, though tracking changes over time is important for intervention development. Agent-based modeling (ABM) allows researchers to integrate complex behavioral data about advance care planning behaviors and thought processes into a controlled environment that is more easily alterable and observable. Literature to date has not addressed how best to motivate individuals, increase facilitators and reduce barriers associated with ACP. We aimed to build an ABM that applies the Transtheoretical Model of behavior change to ACP as a health behavior and accurately reflects: 1) the rates at which individuals complete the process, 2) how individuals respond to barriers, facilitators, and behavioral variables, and 3) the interactions between these variables. We developed a dynamic ABM of the ACP decision making process based on the stages of change posited by the Transtheoretical Model. We integrated barriers, facilitators, and other behavioral variables that agents encounter as they move through the process. We successfully incorporated ACP barriers, facilitators, and other behavioral variables into our ABM, forming a plausible representation of ACP behavior and decision-making. The resulting distributions across the stages of change replicated those found in the literature, with approximately half of participants in the action-maintenance stage in both the model and the literature. Our ABM is a useful method for representing dynamic social and experiential influences on the ACP decision making process. This model suggests structural interventions, e.g. increasing access to ACP materials in primary care clinics, in addition to improved methods of data collection for behavioral studies, e.g. incorporating longitudinal data to capture behavioral dynamics.

  5. Learning-based stochastic object models for use in optimizing imaging systems

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.

  6. Cloud-Based Automated Design and Additive Manufacturing: A Usage Data-Enabled Paradigm Shift

    PubMed Central

    Lehmhus, Dirk; Wuest, Thorsten; Wellsandt, Stefan; Bosse, Stefan; Kaihara, Toshiya; Thoben, Klaus-Dieter; Busse, Matthias

    2015-01-01

    Integration of sensors into various kinds of products and machines provides access to in-depth usage information as basis for product optimization. Presently, this large potential for more user-friendly and efficient products is not being realized because (a) sensor integration and thus usage information is not available on a large scale and (b) product optimization requires considerable efforts in terms of manpower and adaptation of production equipment. However, with the advent of cloud-based services and highly flexible additive manufacturing techniques, these obstacles are currently crumbling away at rapid pace. The present study explores the state of the art in gathering and evaluating product usage and life cycle data, additive manufacturing and sensor integration, automated design and cloud-based services in manufacturing. By joining and extrapolating development trends in these areas, it delimits the foundations of a manufacturing concept that will allow continuous and economically viable product optimization on a general, user group or individual user level. This projection is checked against three different application scenarios, each of which stresses different aspects of the underlying holistic concept. The following discussion identifies critical issues and research needs by adopting the relevant stakeholder perspectives. PMID:26703606

  7. Cloud-Based Automated Design and Additive Manufacturing: A Usage Data-Enabled Paradigm Shift.

    PubMed

    Lehmhus, Dirk; Wuest, Thorsten; Wellsandt, Stefan; Bosse, Stefan; Kaihara, Toshiya; Thoben, Klaus-Dieter; Busse, Matthias

    2015-12-19

    Integration of sensors into various kinds of products and machines provides access to in-depth usage information as basis for product optimization. Presently, this large potential for more user-friendly and efficient products is not being realized because (a) sensor integration and thus usage information is not available on a large scale and (b) product optimization requires considerable efforts in terms of manpower and adaptation of production equipment. However, with the advent of cloud-based services and highly flexible additive manufacturing techniques, these obstacles are currently crumbling away at rapid pace. The present study explores the state of the art in gathering and evaluating product usage and life cycle data, additive manufacturing and sensor integration, automated design and cloud-based services in manufacturing. By joining and extrapolating development trends in these areas, it delimits the foundations of a manufacturing concept that will allow continuous and economically viable product optimization on a general, user group or individual user level. This projection is checked against three different application scenarios, each of which stresses different aspects of the underlying holistic concept. The following discussion identifies critical issues and research needs by adopting the relevant stakeholder perspectives.

  8. Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.

    PubMed

    Yu, Zhiwen; Liu, Jiming; Wang, Xiaowei; Zhu, Xianjun; Wang, Daxing; Han, Guoqiang

    2016-01-01

    To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has been focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies such as different kinds of vaccine coverage, different vaccine releasing times and different vaccine deployment methods are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
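
    For orientation, a single-population SEIR model with a constant vaccination rate captures the compartmental core that HSEIR-V extends with age structure and geography. All rates below are illustrative assumptions, not values from the paper.

    ```python
    # Minimal single-population SEIR model with a constant vaccination rate nu.
    import numpy as np
    from scipy.integrate import odeint

    def seir_v(state, t, beta, sigma, gamma, nu):
        S, E, I, R = state
        N = S + E + I + R
        dS = -beta * S * I / N - nu * S          # infection + vaccination
        dE = beta * S * I / N - sigma * E        # end of latency
        dI = sigma * E - gamma * I               # recovery
        dR = gamma * I + nu * S
        return [dS, dE, dI, dR]

    t = np.linspace(0.0, 200.0, 2001)
    sol = odeint(seir_v, [9990.0, 0.0, 10.0, 0.0], t,
                 args=(0.35, 1.0 / 3.0, 1.0 / 7.0, 0.002))  # beta, sigma, gamma, nu
    peak = sol[:, 2].argmax()
    print(f"peak infectious: {sol[peak, 2]:.0f} at day {t[peak]:.0f}")
    ```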

  9. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction to a more process-robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  10. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim, and demonstrate plasma temperature equilibration in two-temperature MHD simulations.

  11. Additive effect of mesenchymal stem cells and defibrotide in an arterial rat thrombosis model.

    PubMed

    Dilli, Dilek; Kılıç, Emine; Yumuşak, Nihat; Beken, Serdar; Uçkan Çetinkaya, Duygu; Karabulut, Ramazan; Zenciroğlu, Ayşegül

    2017-06-01

    In this study, we aimed to investigate the additive effect of mesenchymal stem cells (MSC) and defibrotide (DFT) in a rat model of femoral arterial thrombosis. Thirty Sprague Dawley rats were included. An arterial thrombosis model using ferric chloride (FeCl3) was developed in the left femoral artery. The rats were equally assigned to 5 groups: Group 1, sham-operated (without arterial injury); Group 2, phosphate-buffered saline (PBS) injected; Group 3, MSC; Group 4, DFT; Group 5, MSC + DFT. All had two intraperitoneal injections of 0.5 ml: the 1st injection was given 4 h after the procedure and the 2nd one 48 h after the 1st injection. The rats were sacrificed 7 days after the 2nd injection. Although the use of human bone marrow-derived MSC (hBM-MSC) or DFT alone enabled partial resolution of the thrombus, combining them resulted in near-complete resolution. Neovascularization was two-fold better in hBM-MSC + DFT treated rats (11.6 ± 2.4 channels) compared with the hBM-MSC (3.8 ± 2.7 channels) and DFT groups (5.5 ± 1.8 channels) (P < 0.0001 and P = 0.002, respectively). The combined use of hBM-MSC and DFT in a rat model of arterial thrombosis showed an additive effect, resulting in near-complete resolution of the thrombus.

  12. Culturicon model: A new model for cultural-based emoticon

    NASA Astrophysics Data System (ADS)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention and improve communication among speakers of different native languages. However, despite the benefits that emoticons can provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall culture model, the Trompenaars and Hampden-Turner culture model, and the Hofstede culture model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  13. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
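
    The solver the abstract reduces to, weighted singular-value thresholding, is essentially a one-liner around an SVD. The sketch below applies one WSVT step to a synthetic noisy low-rank matrix; the weighting rule (heavier shrinkage of small singular values, in the spirit of weighted nuclear norm methods) and all data are assumptions, since the paper derives its weights from the graph nuclear norm construction.

    ```python
    # One weighted singular-value thresholding (WSVT) step on a synthetic
    # noisy low-rank matrix. Weights and data are illustrative assumptions.
    import numpy as np

    def weighted_svt(Y: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """Soft-threshold each singular value of Y by its own weight."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - weights, 0.0)) @ Vt

    rng = np.random.default_rng(5)
    low_rank = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 64))
    noisy = low_rank + 0.3 * rng.normal(size=(64, 64))

    s = np.linalg.svd(noisy, compute_uv=False)
    weights = 10.0 / (s + 1.0)               # small singular values shrink hardest
    denoised = weighted_svt(noisy, weights)
    print(f"error before: {np.linalg.norm(noisy - low_rank):.1f}, "
          f"after: {np.linalg.norm(denoised - low_rank):.1f}")
    ```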

  14. Knowledge-based modelling of historical surfaces using lidar data

    NASA Astrophysics Data System (ADS)

    Höfler, Veit; Wessollek, Christine; Karrasch, Pierre

    2016-10-01

    Currently, digital elevation models are mainly used in archaeological studies in the form of shaded reliefs for the prospection of archaeological sites. Hesse (2010) provides a supporting software tool for the determination of local relief models during prospection using LiDAR scans. Furthermore, the search for relicts from WW2 is also a focus of his research. In James et al. (2006), determined contour lines were used to reconstruct the locations of archaeological artefacts such as buildings. This study goes further and presents an innovative workflow for determining historical high-resolution terrain surfaces using recent high-resolution terrain models and sedimentological expert knowledge. Based on archaeological field studies (Franconian Saale near Bad Neustadt in Germany), the sedimentological analyses show that archaeologically interesting horizons and geomorphological expert knowledge, in combination with particle size analyses (Koehn, DIN ISO 11277), are useful components for reconstructing surfaces of the early Middle Ages. Furthermore, the paper traces how it is possible to use additional information (extracted from a recent digital terrain model) to support the process of determining historical surfaces. Conceptually, this research is based on the methodology of geomorphometry and geostatistics. The basic idea is that the working procedure branches according to the different input data: one strand handles the quantitative data and the other processes the qualitative data. The quantitative data thus become available for further processing and are later combined with the qualitative data to convert them to historical heights. In the final stage of the workflow, all gathered information is stored in a large data matrix for spatial interpolation using the geostatistical method of kriging. Besides the historical surface, the algorithm also provides a first estimate of the accuracy of the modelling. The presented workflow is characterized by a high
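
    The final interpolation step can be sketched with a Gaussian-process regressor, the standard machine-learning formulation of kriging. Coordinates, heights, and the kernel below are synthetic assumptions standing in for the reconstructed historical heights; the predictive standard deviation plays the role of the accuracy estimate mentioned in the abstract.

    ```python
    # Kriging-style interpolation of sparse "historical" heights via
    # Gaussian-process regression. All data and the kernel are assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(6)
    xy = rng.uniform(0.0, 100.0, size=(40, 2))               # sample coordinates (m)
    z = 200.0 + 0.05 * xy[:, 0] + np.sin(xy[:, 1] / 15.0)    # reconstructed heights (m)

    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01),
                                  normalize_y=True).fit(xy, z)
    grid = np.array([[gx, gy] for gx in range(0, 101, 10) for gy in range(0, 101, 10)])
    z_hat, z_std = gp.predict(grid, return_std=True)         # surface + uncertainty
    print(f"mean predictive std on grid: {z_std.mean():.3f} m")
    ```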

  15. Optimal Chemotherapy for Leukemia: A Model-Based Strategy for Individualized Treatment

    PubMed Central

    Jayachandran, Devaraj; Rundell, Ann E.; Hannemann, Robert E.; Vik, Terry A.; Ramkrishna, Doraiswami

    2014-01-01

    Acute Lymphoblastic Leukemia, commonly known as ALL, is a predominant form of cancer during childhood. With the advent of modern healthcare support, the 5-year survival rate has been impressive in the recent past. However, long-term ALL survivors battle several treatment-related medical and socio-economic complications due to the excessive and inordinate chemotherapy doses received during treatment. In this work, we present a model-based approach to personalize 6-Mercaptopurine (6-MP) treatment for childhood ALL, with a provision for incorporating pharmacogenomic variations among patients. Semi-mechanistic mathematical models were developed and validated for i) 6-MP metabolism, ii) red blood cell mean corpuscular volume (MCV) dynamics, a surrogate marker for treatment efficacy, and iii) leukopenia, a major side effect. Given the constraint of limited clinical data, a model reduction technique based on global sensitivity analysis was employed to reduce the parameter space arising from the semi-mechanistic models. The reduced, sensitive parameters were used to individualize the average patient model to a specific patient so as to minimize model uncertainty. The models fit the data well and mimic the diverse behavior observed among patients with a minimal number of parameters. The models were validated with real patient data obtained from the literature and from Riley Hospital for Children in Indianapolis. Patient models were then used to optimize the dose for an individual patient through nonlinear model predictive control. Implementing our approach in clinical practice is realizable with routinely measured complete blood counts (CBC) and a few additional metabolite measurements. The proposed approach promises to achieve model-based treatment individualized to a specific patient, as opposed to a standard dose for all, and to prescribe an optimal dose for a desired outcome with minimal side effects. PMID:25310465
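
    The dose-optimization step described here, nonlinear model predictive control over a patient model, can be illustrated with a toy receding-horizon loop. Everything below, including the one-compartment surrogate dynamics in simulate() and its rate constants, is a hypothetical stand-in for the paper's semi-mechanistic 6-MP models, not a reproduction of them.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(doses, k_elim=0.5, k_kill=0.02, k_rec=0.1, c0=0.0, w0=100.0):
    """Toy daily dynamics: drug level c accumulates and is eliminated;
    cell count w recovers toward baseline 100 and is suppressed by drug."""
    c, w, traj = c0, w0, []
    for d in doses:
        c = c + d - k_elim * c
        w = w + k_rec * (100.0 - w) - k_kill * c * w
        traj.append(w)
    return np.array(traj)

def mpc_cost(doses, target=80.0):
    """Track a target cell count while penalizing total dose."""
    w = simulate(doses)
    return np.sum((w - target) ** 2) + 0.01 * np.sum(doses ** 2)

horizon = 14                                   # two-week planning horizon
res = minimize(mpc_cost, x0=np.full(horizon, 1.0),
               bounds=[(0.0, 5.0)] * horizon, method="L-BFGS-B")
# Receding horizon: apply only the first dose, then re-measure and re-solve.
next_dose = res.x[0]
```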

  16. Evaluation of soy-based surface active copolymers as surfactant ingredients in model shampoo formulations.

    PubMed

    Popadyuk, A; Kalita, H; Chisholm, B J; Voronov, A

    2014-12-01

    A new non-toxic soybean oil-based polymeric surfactant (SBPS) for personal-care products was developed and extensively characterized, including an evaluation of its performance in model shampoo formulations. To experimentally establish the applicability of the soy-based macromolecules in shampoos, either in combination with common anionic surfactants (in this study, sodium lauryl sulfate, SLS) or as the sole surface-active ingredient, the physicochemical properties and performance of the SBPS were tested and a visual assessment of SBPS-based model shampoos was carried out. The results, including the foaming and cleaning ability of the model formulations, were compared both to formulations with only SLS as a surfactant and to SLS-free shampoos. Overall, the results show that the presence of SBPS improves the cleaning, foaming, and conditioning of the model formulations. SBPS-based formulations meet the major requirements of multifunctional shampoos - mild detergency, foaming, good conditioning, and aesthetic appeal - comparable to commercially available shampoos. In addition, examination of SBPS/SLS mixtures in model shampoos showed that the presence of the SBPS enables the concentration of SLS to be reduced significantly without sacrificing shampoo performance. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  17. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model of visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
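
    In two dimensions the least-squares solution is found iteratively, as the abstract notes. A minimal sketch of such an iteration is given below, using a Gaussian low-pass filter as a stand-in eye model and an ideal printer (both assumptions; the paper's calibrated visual and dot-overlap printer models are omitted), with brute-force pixel toggling that is practical only for tiny images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceived(img, sigma=1.5):
    """Gaussian low-pass as a stand-in for the eye model (assumption)."""
    return gaussian_filter(img, sigma)

def ls_halftone(gray, sigma=1.5, sweeps=3):
    """Coordinate-descent on the squared perceptual error: for each pixel,
    try both binary values and keep the one with the lower error."""
    b = (gray > 0.5).astype(float)            # threshold as initial guess
    target = perceived(gray, sigma)           # perceived original
    for _ in range(sweeps):
        for idx in np.ndindex(b.shape):
            errs = []
            for v in (0.0, 1.0):
                b[idx] = v
                errs.append(np.sum((perceived(b, sigma) - target) ** 2))
            b[idx] = float(np.argmin(errs))
    return b

gray = np.linspace(0, 1, 16).reshape(1, -1).repeat(16, axis=0)  # test ramp
halftone = ls_halftone(gray)
```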

  18. Mechanical characterization of structurally porous biomaterials built via additive manufacturing: experiments, predictive models, and design maps for load-bearing bone replacement implants.

    PubMed

    Melancon, D; Bagheri, Z S; Johnston, R B; Liu, L; Tanzer, M; Pasini, D

    2017-11-01

    Porous biomaterials can be additively manufactured with a micro-architecture tailored to satisfy the stringent mechano-biological requirements imposed by bone replacement implants. In a previous investigation, we introduced structurally porous biomaterials featuring strength five times higher than that of commercially available porous materials, and confirmed their bone ingrowth capability in an in vivo canine model. While encouraging, the manufactured biomaterials showed geometric mismatches between their internal porous architecture and that of their as-designed counterparts, as well as discrepancies between predicted and tested mechanical properties, issues not fully elucidated. In this work, we propose a systematic approach integrating computed tomography, mechanical testing, and statistical analysis of geometric imperfections to generate statistics-based numerical models of high-strength additively manufactured porous biomaterials. The method is used to develop morphology and mechanical maps that illustrate the roles played by pore size, porosity, strut thickness, and topology in the relations governing elastic modulus and compressive yield strength. Overall, there are mismatches between the mechanical properties of ideal-geometry models and as-manufactured porous biomaterials, with average errors of 49% and 41% for compressive elastic modulus and yield strength, respectively. The proposed methodology gives more accurate predictions of the compressive stiffness and compressive strength, reducing the average errors to 11% and 7.6%, respectively. The implications of the results and of the methodology introduced here are discussed in the relevant biomechanical and clinical context, with insight that highlights the promises and limitations of additively manufactured porous biomaterials for load-bearing bone replacement implants. In this work, we perform mechanical characterization of load-bearing porous biomaterials for bone replacement over their entire design
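
    One plausible way to turn such measurements into a design-map relation is to fit a Gibson-Ashby-type power law between CT-measured relative density and compressive modulus. The sketch below does this by least squares in log-log space; the data points are invented and the power-law form is an assumption, not the paper's statistics-based numerical model.

```python
import numpy as np

# Invented as-manufactured data: relative density from CT segmentation
# and compressive elastic modulus from mechanical testing.
rel_density = np.array([0.15, 0.22, 0.30, 0.38, 0.45])
modulus_gpa = np.array([0.9, 2.1, 4.0, 6.8, 9.9])

# Fit E = C * rho^n by linear regression in log-log space.
n, logC = np.polyfit(np.log(rel_density), np.log(modulus_gpa), 1)
C = np.exp(logC)

def predicted_modulus(rho):
    """Design-map relation: modulus (GPa) vs relative density."""
    return C * rho ** n

print(predicted_modulus(0.25))   # query the fitted map at a new density
```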

  19. Model-based redesign of global transcription regulation

    PubMed Central

    Carrera, Javier; Rodrigo, Guillermo; Jaramillo, Alfonso

    2009-01-01

    Synthetic biology aims at the design or redesign of biological systems. In particular, one possible goal is the rewiring of the transcription regulation network by exchanging the endogenous promoters. To achieve this objective, we have adapted current methods to infer a model based on ordinary differential equations that is able to predict the network response after a major change in its topology. Our procedure uses microarray data for training. We have experimentally validated our inferred global regulatory model in Escherichia coli by predicting transcriptomic profiles under new perturbations. We have also tested our methodology in silico by providing accurate predictions of the underlying networks from expression data generated with artificial genomes. In addition, we have shown the predictive power of our methodology by obtaining the gene expression profile in experimental redesigns of the E. coli genome, in which the transcriptional network was rewired by knocking out master regulators or by upregulating transcription factors controlled by different promoters. Our approach is compatible with most network inference methods, allowing computational exploration of future genome-wide redesign experiments in synthetic biology. PMID:19188257
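
    A stripped-down version of ODE-based network inference from expression time series can be written in a few lines: approximate the derivatives, then solve dX/dt ≈ AX + b for the interaction matrix by least squares. The toy data, the linear ODE form, and the knockout emulation below are illustrative assumptions; the paper's method additionally handles promoter exchange and uses more elaborate fitting.

```python
import numpy as np

def infer_network(X, t):
    """Fit dX/dt ≈ A X + b from an expression time series X
    (genes x time points) by ordinary least squares."""
    dXdt = np.gradient(X, t, axis=1)            # finite-difference derivatives
    Xa = np.vstack([X, np.ones(X.shape[1])])    # augment with bias row for b
    coeffs, *_ = np.linalg.lstsq(Xa.T, dXdt.T, rcond=None)
    A, b = coeffs[:-1].T, coeffs[-1]
    return A, b

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
X = np.abs(rng.normal(size=(5, 50)))            # toy 5-gene expression series
A, b = infer_network(X, t)

# Crude emulation of a master-regulator knockout: remove gene 0's
# regulatory influence by zeroing its row and column, then re-simulate.
A_ko = A.copy()
A_ko[0, :] = 0.0
A_ko[:, 0] = 0.0
```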

  20. Multipole-Based Cable Braid Electromagnetic Penetration Model: Electric Penetration Case

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Langston, William L.; ...

    2017-07-11

    In this paper, we investigate the electric penetration case of the first-principles multipole-based cable braid electromagnetic penetration model reported in Progress in Electromagnetics Research B 66, 63–89 (2016). We first analyze the case of a 1-D array of wires: this is a problem that is interesting in its own right, and we model it using a multipole-conformal mapping expansion extended by means of Laplace solutions in bipolar coordinates. We then compare the elastance (inverse of capacitance) results from our first-principles cable braid electromagnetic penetration model to those obtained using the multipole-conformal mapping bipolar solution. These results are found to be in good agreement up to a radius-to-half-spacing ratio of 0.6, demonstrating the robustness needed for many commercial cables. We then analyze realistic cable implementations without dielectrics and compare the results from our first-principles braid electromagnetic penetration model to the semiempirical results reported by Kley in IEEE Transactions on Electromagnetic Compatibility 35, 1–9 (1993). Finally, although we find results on the same order of magnitude as Kley's, only our proposed multipole model accounts for the full dependence on the actual cable geometry and, in addition, enables us to treat perturbations of the measured commercial cables.
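
    The bipolar-coordinate machinery used for the 1-D wire array has a classical special case, the two-wire capacitance formula, which suffices to illustrate elastance as a function of the radius-to-half-spacing ratio quoted above. The sketch below evaluates only that two-wire result; it is not the paper's multipole braid model, and the spacing value is illustrative.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def elastance_two_wires(a, d):
    """Elastance per unit length (m/F) of two parallel wires of radius a
    and center-to-center spacing d, from the exact bipolar-coordinate
    solution C' = pi*eps0 / arccosh(d / 2a)."""
    return np.arccosh(d / (2.0 * a)) / (np.pi * EPS0)

half_spacing = 1e-3                        # 1 mm half spacing (assumed)
for ratio in (0.2, 0.4, 0.6):              # radius / half-spacing
    a = ratio * half_spacing
    print(ratio, elastance_two_wires(a, 2.0 * half_spacing))
```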