Sample records for generate predictive models

  1. Developing models for the prediction of hospital healthcare waste generation rate.

    PubMed

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to an increasing healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate, and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital type (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors affecting the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.

  2. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate constant (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase FOD model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC. This indicates that decreasing the uncertainty of the input parameters, rather than adding multiple phases or input parameters, will make the model more accurate.
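
    To make the FOD mechanics concrete, the following minimal Python sketch implements a single-phase model with weighted-average parameters; the waste tonnages, mass fractions, L0 and k values are illustrative assumptions, not figures from the study.

```python
import numpy as np

def fod_methane(tonnage_by_year, L0, k, horizon):
    """Single-phase first-order decay (FOD) estimate of annual methane
    generation: each year's deposit M contributes k * L0 * M * exp(-k * age).
    Units: Mg of waste, m^3 CH4 per Mg."""
    years = np.arange(horizon)
    q = np.zeros(horizon)
    for t_dep, m in enumerate(tonnage_by_year):
        age = years - t_dep
        active = age >= 1  # generation starts the year after deposit
        q[active] += k * L0 * m * np.exp(-k * age[active])
    return q

# Weighted-average parameters for a mixed waste stream (illustrative values):
streams = [(0.4, 90.0, 0.20), (0.6, 170.0, 0.05)]  # (mass fraction, L0, k)
L0_avg = sum(f * L0 for f, L0, _ in streams)
k_avg = sum(f * k for f, _, k in streams)
annual_ch4 = fod_methane([50_000] * 10, L0_avg, k_avg, horizon=40)
```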

  3. An empirical model for prediction of household solid waste generation rate - A case study of Dhanbad, India.

    PubMed

    Kumar, Atul; Samadder, S R

    2017-10-01

    Accurate prediction of the quantity of household solid waste generated is essential for effective management of municipal solid waste (MSW). In practice, modelling methods are often found useful for precise prediction of the MSW generation rate. In this study, two models are proposed that establish relationships between the household solid waste generation rate and socioeconomic parameters such as household size, total family income, education, occupation and fuel used in the kitchen. The multiple linear regression technique was applied to develop the two models, one for the prediction of the biodegradable MSW generation rate and the other for the non-biodegradable MSW generation rate, for individual households of the city of Dhanbad, India. The results of the two models showed that the coefficients of determination (R²) were 0.782 for the biodegradable waste generation rate and 0.676 for the non-biodegradable waste generation rate using the selected independent variables. The accuracy tests of the developed models showed convincing results, as the predicted values were very close to the observed values. Validation of the developed models with a new set of data indicated a good fit for actual prediction purposes, with predicted R² values of 0.76 and 0.64 for the biodegradable and non-biodegradable MSW generation rates respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
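
    As an illustration of this kind of regression setup (not the study's data or exact variable coding), a minimal ordinary-least-squares sketch with hypothetical household predictors:

```python
import numpy as np

# Hypothetical household data: columns are household size, monthly income,
# and years of education of the head of household.
X = np.array([[4, 250.0, 10],
              [2, 180.0, 12],
              [6, 320.0,  8],
              [3, 400.0, 16],
              [5, 150.0,  6]])
y = np.array([1.10, 0.65, 1.60, 0.95, 1.30])  # kg/household/day (biodegradable)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination, as reported in the study.
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```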

  4. Gray correlation analysis and prediction models of living refuse generation in Shanghai city.

    PubMed

    Liu, Gousheng; Yu, Jianguo

    2007-01-01

    A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on a rough prediction of MLF without any actual data support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation were quantitatively studied using the gray correlation coefficient method. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), were studied, and the predicted results were verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water and electricity are the three largest factors affecting MLF generation, and that GLPM(1) is the optimal model for predicting MLF generation. According to this model, the predicted MLF generation in 2010 in Shanghai will be 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
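
    A compact sketch of the standard GM(1,1) gray-model algorithm referenced above; the input series is invented for illustration and the Shanghai data are not reproduced here.

```python
import numpy as np

def gm11(x0, n_forecast):
    """Gray model GM(1,1): fit on the accumulated series, then difference
    the fitted accumulation back to the original scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing coefficients
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # fitted + forecast values

# Illustrative annual MLF tonnages (million tons), not the Shanghai series:
waste = [5.2, 5.5, 5.9, 6.1, 6.4, 6.8]
print(gm11(waste, n_forecast=7)[-1])  # a 2010-style projection
```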

  5. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    To select effective samples from the large volume of historical PV power generation data and to improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Using three types of weather (sunny, cloudy and rainy days), the clustering analysis method is applied to screen samples of historical data. After screening, BP neural network prediction models are trained on the screened data. The six photovoltaic power generation prediction models obtained before and after data screening are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective way to improve the precision of photovoltaic power generation forecasting.
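
    A minimal sketch of the screen-then-train idea, with scikit-learn as a stand-in implementation (KMeans for the weather-type screening, MLPRegressor as the BP-style network); the data are synthetic and the feature set is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical records: [irradiance, temperature, cloud cover] -> PV output
X = rng.random((500, 3))
y = 5.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.standard_normal(500)

# Step 1: cluster days into three weather types (sunny/cloudy/rainy analogue).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: train one BP-style network per weather type on the screened samples.
models = {}
for c in range(3):
    idx = labels == c
    models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X[idx], y[idx])
```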

  6. Mathematical modeling to predict residential solid waste generation.

    PubMed

    Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de

    2008-01-01

    One of the challenges faced by waste management authorities is determining the amount of waste generated by households, in order to establish waste management systems, charge rates compatible with the principles applied worldwide, and design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a three-stage study on the generation, quantification and composition of residential waste in a Mexican city. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Models exploring combinations of the included variables were then established, and those showing a higher R² were selected. The tests applied were normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total variation.

  7. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  8. Technical note: A linear model for predicting δ13Cprotein.

    PubMed

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) - (0.58 × Δ13Cap-co) - 4.7, possessing a high R-value of 0.93 (r² = 0.86, P < 0.01), with experimentally generated error terms of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
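
    The published two-term model translates directly into a small helper; the example inputs below are illustrative, not values from the Atacama dataset.

```python
def d13c_protein(d13c_co, delta13c_ap_co):
    """Two-term linear model from the abstract:
    d13C_protein (per mil) = 0.78 * d13C_co - 0.58 * D13C_ap-co - 4.7,
    with a reported prediction error of about +/- 1.9 per mil."""
    estimate = 0.78 * d13c_co - 0.58 * delta13c_ap_co - 4.7
    return estimate, (estimate - 1.9, estimate + 1.9)

# Illustrative collagen and apatite-collagen spacing values:
value, interval = d13c_protein(-19.0, 4.5)
```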

  9. Verifying the performance of artificial neural network and multiple linear regression in predicting the mean seasonal municipal solid waste generation rate: A case study of Fars province, Iran.

    PubMed

    Azadi, Sama; Karimi-Jashni, Ayoub

    2016-02-01

    Predicting the mass of solid waste generated plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, an Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures (MAE, MAPE, RMSE and R) were used to evaluate the performance of the models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has a higher predictive accuracy for the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
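
    For reference, the four performance measures can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def scores(y_true, y_pred):
    """MAE, MAPE (%), RMSE and Pearson R, the four measures used above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    rmse = np.sqrt(np.mean(err ** 2))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return mae, mape, rmse, r
```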

  10. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for the prediction of emergency or non-periodic events usually cannot generate satisfying results or fulfil their prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model: an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model that combines RNN-LSTM with a priori information on public events. In prediction tasks, the model is capable of determining trends, and its accuracy was validated. The model delivers better performance and prediction results than its predecessor. Using a priori information increases prediction accuracy; LSTM adapts better to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task and to other prediction tasks related to time sequences.
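
    A minimal Keras sketch of an LSTM regressor over windows that append an a priori information feature to the raw series; the architecture, window length and data are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: predict the next value of an event-intensity series
# from a window of past values plus one a priori information signal.
timesteps, n_features = 30, 2   # [series value, a priori signal] per step
X = np.random.rand(1000, timesteps, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```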

  11. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2015-09-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data, and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models capture the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the choice between low and high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than constrained multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic precursors (alkanes, aromatics) and sesquiterpenes, and decreased SOA contributions from isoprene and monoterpenes, relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model, in which "ageing" reactions are added on top of the existing two-product parameterization, is considered. This hybrid scheme formed at least three times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these "hybrid" multi-generational schemes should be used with great caution in regional models.

  12. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
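
    Simulating such a process is straightforward; a small sketch with hypothetical transition probabilities, including the near-IID regime where the paper finds the prediction/generation gap largest:

```python
import numpy as np

def generate_binary_markov(p01, p10, n, rng=None):
    """Sample a binary Markov chain with transition probabilities
    P(0 -> 1) = p01 and P(1 -> 0) = p10."""
    rng = rng or np.random.default_rng()
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):
        flip = p01 if x[t - 1] == 0 else p10
        x[t] = 1 - x[t - 1] if rng.random() < flip else x[t - 1]
    return x

# Near-IID regime (p01 + p10 close to 1):
sample = generate_binary_markov(0.49, 0.51, 10_000)
```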

  13. Model and Scenario Variations in Predicted Number of Generations of Spodoptera litura Fab. on Peanut during Future Climate Change Scenario

    PubMed Central

    Srinivasa Rao, Mathukumalli; Swathi, Pettem; Rama Rao, Chitiprolu Anantha; Rao, K. V.; Raju, B. M. K.; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka

    2015-01-01

    The present study features the estimation of the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on the peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of future daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz., BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, under three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four climate periods, viz., Baseline (1975), Near Future (NF, 2020), Distant Future (DF, 2050) and Very Distant Future (VDF, 2080). It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1-2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18-22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and reduced generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase due to the projected increase in temperatures in future climate change periods. PMID:25671564
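
    A minimal sketch of the growing-degree-days bookkeeping behind such estimates; the base temperature, thermal requirement per generation and weather series are hypothetical placeholders, not the study's values.

```python
import numpy as np

def generations_per_season(t_max, t_min, t_base, gdd_per_generation):
    """Growing-degree-day estimate of insect generations: accumulate
    max(0, (Tmax + Tmin)/2 - Tbase) over the season and divide by the
    thermal requirement of one generation."""
    daily = np.maximum(0.0, (np.asarray(t_max) + np.asarray(t_min)) / 2.0 - t_base)
    return daily.sum() / gdd_per_generation

# Illustrative 120-day crop season and thresholds:
rng = np.random.default_rng(1)
t_min = 22 + 3 * rng.standard_normal(120)
t_max = t_min + 10
print(generations_per_season(t_max, t_min, t_base=10.0, gdd_per_generation=500.0))
```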

  14. Numerical modeling of particle generation from ozone reactions with human-worn clothing in indoor environments

    NASA Astrophysics Data System (ADS)

    Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan

    2015-02-01

    Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model that was developed was used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.

  15. Effects of number of training generations on genomic prediction for various traits in a layer chicken population.

    PubMed

    Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J

    2016-03-19

    Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals in the training set, the higher the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait, and the relationship between this optimum and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were added. The optimal number of training generations was 4 or more for high-heritability traits but fewer for low-heritability traits. For less heritable traits, limiting the training dataset to individuals closely related to the validation population resulted in the best predictions. The effect of adding distant ancestral generations to the training set on prediction accuracy differed between traits, and the optimal number of training generations is associated with the heritability of the trait.
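
    The study used BayesB; as a simpler stand-in, a ridge-regression (SNP-BLUP) sketch shows the shape of training on earlier generations and scoring candidates. The sizes, heritability value, shrinkage formula and simulated genotypes are illustrative assumptions.

```python
import numpy as np

def snp_blup(Z_train, y_train, Z_cand, h2):
    """Ridge-regression (SNP-BLUP) analogue of genomic prediction: shrink
    marker effects with a lambda derived from heritability, then score
    candidates.  Z holds centered SNP genotype codes."""
    n, m = Z_train.shape
    lam = m * (1.0 - h2) / h2          # common simplification, assumed here
    lhs = Z_train.T @ Z_train + lam * np.eye(m)
    beta = np.linalg.solve(lhs, Z_train.T @ (y_train - y_train.mean()))
    return Z_cand @ beta               # GEBV of selection candidates

# Hypothetical training set drawn from the most recent generations:
rng = np.random.default_rng(2)
Z = rng.integers(0, 3, size=(600, 1000)).astype(float) - 1.0
y = Z[:, :50].sum(axis=1) * 0.05 + rng.standard_normal(600)
gebv = snp_blup(Z[:500], y[:500], Z[500:], h2=0.3)
```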

  16. Literature mining supports a next-generation modeling approach to predict cellular byproduct secretion.

    PubMed

    King, Zachary A; O'Brien, Edward J; Feist, Adam M; Palsson, Bernhard O

    2017-01-01

    The metabolic byproducts secreted by growing cells can be easily measured and provide a window into the state of a cell; they have been essential to the development of microbiology, cancer biology, and biotechnology. Progress in computational modeling of cells has made it possible to predict metabolic byproduct secretion with bottom-up reconstructions of metabolic networks. However, owing to a lack of data, it has not been possible to validate these predictions across a wide range of strains and conditions. Through literature mining, we were able to generate a database of Escherichia coli strains and their experimentally measured byproduct secretions. We simulated these strains in six historical genome-scale models of E. coli, and we report that the predictive power of the models has increased as they have expanded in size and scope. The latest genome-scale model of metabolism correctly predicts byproduct secretion for 35/89 (39%) of designs. The next-generation genome-scale model of metabolism and gene expression (ME-model) correctly predicts byproduct secretion for 40/89 (45%) of designs, and we show that ME-model predictions could be further improved through kinetic parameterization. We analyze the failure modes of these simulations and discuss opportunities to improve prediction of byproduct secretion. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  17. Virtual reality and consciousness inference in dreaming

    PubMed Central

    Hobson, J. Allan; Hong, Charles C.-H.; Friston, Karl J.

    2014-01-01

    This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that – through experience-dependent plasticity – becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep – and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain’s generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis – evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research. PMID:25346710

  18. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. They are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a large ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes from adding any member or from the multi-model member itself.
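
    The PIT histogram itself is simple to construct; a generic sketch (not the authors' code) with synthetic forecasts that shows how under-dispersion appears:

```python
import numpy as np

def pit_values(observations, ensembles):
    """Probability Integral Transform: for each forecast, the empirical
    percentile of the observation within the ensemble members."""
    obs = np.asarray(observations, float)
    ens = np.asarray(ensembles, float)       # shape (n_forecasts, n_members)
    return (ens < obs[:, None]).mean(axis=1)

# A reliable ensemble yields a flat PIT histogram; an under-dispersed
# one (too-narrow spread, as below) piles mass at 0 and 1.
rng = np.random.default_rng(4)
truth = rng.standard_normal(2000)
members = truth[:, None] + 0.5 * rng.standard_normal((2000, 40))
hist, _ = np.histogram(pit_values(truth, members), bins=10, range=(0, 1))
```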

  19. Sociality influences cultural complexity.

    PubMed

    Muthukrishna, Michael; Shulman, Ben W; Vasilescu, Vlad; Henrich, Joseph

    2014-01-07

    Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of a population can contribute to the complexity of its cultural repertoire. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models integrated this information and generated increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model showed no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrated superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution.

  20. Chain pooling to minimize prediction error in subset regression [Monte Carlo studies using population models].

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
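
    A minimal sketch of this Monte Carlo recipe, with a hypothetical response surface standing in for the population models used in the report:

```python
import numpy as np

rng = np.random.default_rng(5)

def population(x1, x2):
    """Hypothetical response surface standing in for a population model."""
    return 2.0 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1 * x2

# Simulated experiment: population values plus pseudo-random normal errors.
x1, x2 = rng.uniform(-1, 1, (2, 40))
y = population(x1, x2) + 0.3 * rng.standard_normal(40)

# Fit the full model and a reduced model (interaction term deleted), then
# compare predictions against the true population values.
full = np.column_stack([np.ones(40), x1, x2, x1 * x2])
reduced = full[:, :3]
for name, design in (("full", full), ("reduced", reduced)):
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    g1, g2 = rng.uniform(-1, 1, (2, 500))
    G = np.column_stack([np.ones(500), g1, g2, g1 * g2])[:, :design.shape[1]]
    print(name, np.mean((G @ coef - population(g1, g2)) ** 2))
```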

  21. Next-generation genome-scale models for metabolic engineering.

    PubMed

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.

  22. Influence of thermodynamic properties of a thermo-acoustic emitter on the efficiency of thermal airborne ultrasound generation.

    PubMed

    Daschewski, M; Kreutzbruck, M; Prager, J

    2015-12-01

    In this work we experimentally verify the theoretical prediction of the recently published Energy Density Fluctuation Model (EDF-model) of thermo-acoustic sound generation. In particular, we investigate experimentally the influence of the thermal inertia of an electrically conductive film on the efficiency of thermal airborne ultrasound generation predicted by the EDF-model. Unlike widely used theories, the EDF-model predicts that the thermal inertia of the electrically conductive film is a frequency-dependent parameter. Its influence grows non-linearly with increasing excitation frequency and reduces the efficiency of ultrasound generation. Thus, this parameter is the major limiting factor for efficient thermal airborne ultrasound generation in the MHz range. To verify this theoretical prediction experimentally, five thermo-acoustic emitter samples consisting of Indium-Tin-Oxide (ITO) coatings of different thicknesses (from 65 nm to 1.44 μm) on quartz glass substrates were tested for airborne ultrasound generation in a frequency range from 10 kHz to 800 kHz. For the measurement of thermally generated sound pressures, a laser Doppler vibrometer combined with a 12 μm thin polyethylene foil was used as the sound pressure detector. All tested thermo-acoustic emitter samples showed a resonance-free frequency response over the entire tested frequency range. The thermal inertia of the heat-producing film acts as a low-pass filter and reduces the generated sound pressure with increasing excitation frequency and ITO film thickness. The difference in generated sound pressure levels between the 65 nm and 1.44 μm samples is on the order of 6 dB at 50 kHz and 12 dB at 500 kHz. A comparison of the sound pressure levels measured experimentally with those predicted by the EDF-model shows a relative error of less than ±6% for all tested emitter samples. Thus, the experimental results confirm the prediction of the EDF-model and show that the model can be applied to the design and optimization of thermo-acoustic airborne ultrasound emitters. Copyright © 2015 Elsevier B.V. All rights reserved.

  23. CLIGEN: Addressing deficiencies in the generator and its databases

    USDA-ARS's Scientific Manuscript database

    CLIGEN is a stochastic generator that estimates daily temperatures, precipitation and other weather-related phenomena. It is an intermediate model used by the Water Erosion Prediction Project (WEPP), the Wind Erosion Prediction System (WEPS), and other models that require daily weather observations....

  24. Performance of genomic prediction within and across generations in maritime pine.

    PubMed

    Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent

    2016-08-11

    Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.

  25. RS-predictor: a new tool for predicting sites of cytochrome P450-mediated metabolism applied to CYP 3A4.

    PubMed

    Zaretzki, Jed; Bergeron, Charles; Rydberg, Patrik; Huang, Tao-wei; Bennett, Kristin P; Breneman, Curt M

    2011-07-25

    This article describes RegioSelectivity-Predictor (RS-Predictor), a new in silico method for generating predictive models of P450-mediated metabolism for drug-like compounds. Within this method, potential sites of metabolism (SOMs) are represented as "metabolophores": a concept that describes the hierarchical combination of topological and quantum chemical descriptors needed to represent the reactivity of potential metabolic reaction sites. RS-Predictor modeling involves the use of metabolophore descriptors together with multiple-instance ranking (MIRank) to generate an optimized descriptor weight vector that encodes regioselectivity trends across all cases in a training set. The resulting pathway-independent (O-dealkylation vs N-oxidation vs Csp3 hydroxylation, etc.), isozyme-specific regioselectivity model may be used to predict potential metabolic liabilities. In the present work, cross-validated RS-Predictor models were generated for a set of 394 substrates of CYP 3A4 as a proof-of-principle for the method. Rank aggregation was then employed to merge independently generated predictions for each substrate into a single consensus prediction. The resulting consensus RS-Predictor models were shown to reliably identify at least one observed site of metabolism in the top two rank-positions on 78% of the substrates. Comparisons between RS-Predictor and previously described regioselectivity prediction methods reveal new insights into how in silico metabolite prediction methods should be compared.

  26. Genomic prediction in a nuclear population of layers using single-step models.

    PubMed

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information on both genotyped and ungenotyped animals. The objective of this study was to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped with a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), single-step GBLUP (SSGBLUP) and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than those of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate predictions of genetic merit and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.

  27. EOID System Model Validation, Metrics, and Synthetic Clutter Generation

    DTIC Science & Technology

    2003-09-30

    Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The models will predict the impact of

  28. Why is past depression the best predictor of future depression? Stress generation as a mechanism of depression continuity in girls.

    PubMed

    Rudolph, Karen D; Flynn, Megan; Abaied, Jamie L; Groot, Alison; Thompson, Renee

    2009-07-01

    This study examined whether a transactional interpersonal life stress model helps to explain the continuity in depression over time in girls. Youth (86 girls, 81 boys; M age = 12.41, SD = 1.19) and their caregivers participated in a three-wave longitudinal study. Depression and episodic life stress were assessed with semistructured interviews. Path analysis provided support for a transactional interpersonal life stress model in girls but not in boys, wherein depression predicted the generation of interpersonal stress, which predicted subsequent depression. Moreover, self-generated interpersonal stress partially accounted for the continuity of depression over time. Although depression predicted noninterpersonal stress generation in girls (but not in boys), noninterpersonal stress did not predict subsequent depression.

  29. Predictive model for CO2 generation and decay in building envelopes

    NASA Astrophysics Data System (ADS)

    Aglan, Heshmat A.

    2003-01-01

    Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for characterizing their indoor air quality, air change rates, percent fresh air makeup and occupancy patterns, and for determining how a variable air volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy-efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
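
    The standard well-mixed mass balance behind such generation/decay models is easy to state in code; this sketch assumes illustrative room, ventilation and occupancy numbers rather than the paper's experimental values.

```python
import numpy as np

def co2_ppm(t_hours, c_out_ppm, c0_ppm, gen_m3h, q_m3h, vol_m3):
    """Well-mixed mass balance  V dC/dt = G + Q (C_out - C): the solution
    relaxes exponentially toward steady state at air-change rate Q/V."""
    c_ss = c_out_ppm + 1e6 * gen_m3h / q_m3h   # steady state in ppm
    lam = q_m3h / vol_m3                       # air changes per hour
    return c_ss + (c0_ppm - c_ss) * np.exp(-lam * np.asarray(t_hours))

# Illustrative episode: 25 occupants at ~0.018 m3 CO2/h each,
# 2 air changes per hour in a 200 m3 room.
t = np.linspace(0.0, 2.0, 50)
generation = co2_ppm(t, 400, 400, gen_m3h=25 * 0.018, q_m3h=400, vol_m3=200)
decay = co2_ppm(t, 400, generation[-1], gen_m3h=0.0, q_m3h=400, vol_m3=200)
```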

  30. QSAR models for predicting octanol/water and organic carbon/water partition coefficients of polychlorinated biphenyls.

    PubMed

    Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J

    2016-04-01

    Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints, such as the octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs), are comparatively easy to predict, and various models have already been developed. In this paper, two different methods, multiple linear regression based on descriptors generated using the Dragon software and hologram quantitative structure-activity relationships, were employed to predict the suspended particulate matter (SPM) derived log KOC values and the generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performance of all these models was compared with the EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and the generator column, shake flask and slow stirring method derived log KOW values of PCBs.

  31. Action perception as hypothesis testing.

    PubMed

    Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni

    2017-04-01

    We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Feedbacks Between Shallow Groundwater Dynamics and Surface Topography on Runoff Generation in Flat Fields

    NASA Astrophysics Data System (ADS)

    Appels, Willemijn M.; Bogaart, Patrick W.; van der Zee, Sjoerd E. A. T. M.

    2017-12-01

    In winter, saturation excess (SE) ponding is observed regularly in temperate lowland regions. Surface runoff dynamics are controlled by small topographical features that are unaccounted for in hydrological models. To better understand storage and routing effects of small-scale topography and their interaction with shallow groundwater under SE conditions, we developed a model of reduced complexity to investigate SE runoff generation, emphasizing feedbacks between shallow groundwater dynamics and mesotopography. The dynamic specific yield affected unsaturated zone water storage, causing rapid switches between negative and positive head and a flatter groundwater mound than predicted by analytical agrohydrological models. Accordingly, saturated areas were larger and local groundwater fluxes smaller than predicted, leading to surface runoff generation. Mesotopographic features routed water over larger distances, providing a feedback mechanism that amplified changes to the shape of the groundwater mound. This in turn enhanced runoff generation, but whether it also resulted in runoff events depended on the geometry and location of the depressions. Whereas conditions favorable to runoff generation may abound during winter, these feedbacks profoundly reduce the predictability of SE runoff: statistically identical rainfall series may result in completely different runoff generation. The model results indicate that waterlogged areas in any given rainfall event are larger than those predicted by current analytical groundwater models used for drainage design. This change in the groundwater mound extent has implications for crop growth and damage assessments.

  17. Aggregation Trade Offs in Family Based Recommendations

    NASA Astrophysics Data System (ADS)

    Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac

    Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fall back when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
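
    A toy sketch of the two aggregation strategies compared, with a deliberately trivial stand-in recommender (the study used collaborative filtering on real eHealth browsing logs): aggregate member profiles into one group model and predict once, versus predict per member and aggregate the predictions.

    ```python
    import numpy as np

    # Toy ratings matrix: rows = family members, cols = items; NaN = unrated.
    R = np.array([
        [5.0, 3.0, np.nan, 1.0],
        [4.0, np.nan, 4.0, 1.0],
        [np.nan, 2.0, 5.0, np.nan],
    ])

    def predict(profile, item, item_means):
        """Trivial recommender: blend the profile's mean rating with the item's
        population mean. A stand-in for a real collaborative filtering model."""
        rated = profile[~np.isnan(profile)]
        return 0.5 * rated.mean() + 0.5 * item_means[item]

    item_means = np.nanmean(R, axis=0)
    target_item = 2

    # Strategy 1: aggregate individual profiles into one group model, predict once.
    group_profile = np.nanmean(R, axis=0)
    p_group_model = predict(group_profile, target_item, item_means)

    # Strategy 2: predict per member, then aggregate the predictions.
    p_agg_pred = np.mean([predict(row, target_item, item_means) for row in R])

    print(p_group_model, p_agg_pred)
    ```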

  18. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Forecasting municipal solid waste generation using artificial intelligence modelling approaches.

    PubMed

    Abbasi, Maryam; El Hanandeh, Ali

    2016-10-01

    Municipal solid waste (MSW) management is a major concern to local governments to protect human health, the environment and to preserve natural resources. The design and operation of an effective MSW management system requires accurate estimation of future waste generation quantities. The main objective of this study was to develop a model for accurate forecasting of MSW generation that helps waste related organizations to better design and operate effective MSW management systems. Four intelligent system algorithms including support vector machine (SVM), adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN) and k-nearest neighbours (kNN) were tested for their ability to predict monthly waste generation in the Logan City Council region in Queensland, Australia. Results showed artificial intelligence models have good prediction performance and could be successfully applied to establish municipal solid waste forecasting models. Using machine learning algorithms can reliably predict monthly MSW generation by training with waste generation time series. In addition, results suggest that the ANFIS system produced the most accurate forecasts of the peaks while kNN was successful in predicting the monthly averages of waste quantities. Based on the results, the total annual MSW generated in Logan City will reach 9.4 × 10^7 kg by 2020 while the peak monthly waste will reach 9.37 × 10^6 kg. Copyright © 2016 Elsevier Ltd. All rights reserved.
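
    As a rough illustration of the kNN variant on this task, the sketch below builds lagged features from a synthetic monthly series (standing in for the Logan City Council records) and reports standard error metrics; the paper's inputs and tuning will differ.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    rng = np.random.default_rng(1)

    # Synthetic monthly waste series (kg) with trend + seasonality.
    months = np.arange(120)
    waste = 7e6 + 8e3 * months + 5e5 * np.sin(2 * np.pi * months / 12) \
            + rng.normal(scale=1e5, size=120)

    def lagged(series, n_lags=12):
        """Turn a time series into (12-lag feature rows, next-month target)."""
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    X, y = lagged(waste)
    split = len(y) - 24                                  # last 24 months held out
    knn = KNeighborsRegressor(n_neighbors=5).fit(X[:split], y[:split])
    pred = knn.predict(X[split:])

    print("RMSE:", mean_squared_error(y[split:], pred) ** 0.5)
    print("MAE :", mean_absolute_error(y[split:], pred))
    print("R^2 :", r2_score(y[split:], pred))
    ```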

  20. Simulated Annealing Based Hybrid Forecast for Improving Daily Municipal Solid Waste Generation Prediction

    PubMed Central

    Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei

    2014-01-01

    A simulated annealing (SA) based variable-weight forecast model is proposed that combines and weights a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model proved to produce more accurate and reliable results and to degrade less in longer predictions than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average error was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
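
    A minimal sketch of the core idea: weights over the base-model forecasts are chosen by simulated annealing to minimize validation error. The base forecasts below are synthetic stand-ins, not the chaotic/ANN/PLS-SVM models of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in validation forecasts from three hypothetical base models,
    # plus the observed series they try to match.
    actual = rng.normal(100, 10, size=60)
    forecasts = np.stack([actual + rng.normal(0, s, size=60) for s in (12, 13, 13)])

    def mape(w):
        combo = w @ forecasts
        return np.mean(np.abs((combo - actual) / actual))

    def anneal(n_iter=5000, T0=1.0):
        """Search the weight simplex with simulated annealing."""
        w = np.ones(3) / 3
        best_w, best_e = w, mape(w)
        e = best_e
        for k in range(n_iter):
            T = T0 * (1 - k / n_iter) + 1e-6
            cand = np.abs(w + rng.normal(0, 0.05, size=3))
            cand /= cand.sum()                                 # stay on the simplex
            ce = mape(cand)
            if ce < e or rng.random() < np.exp((e - ce) / T):  # Metropolis rule
                w, e = cand, ce
                if e < best_e:
                    best_w, best_e = w, e
        return best_w, best_e

    w, err = anneal()
    print("weights:", w.round(3), "MAPE:", round(err, 4))
    ```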

  1. Predicting network modules of cell cycle regulators using relative protein abundance statistics.

    PubMed

    Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J

    2017-02-28

    Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data are not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs. Predictive performance is assessed by the area under the receiver operating characteristic curve (AUC). Our models generate an AUC range of 0.83-0.87, as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
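
    A schematic version of the final step: a random forest classifying network module from relative-abundance summary statistics, scored by AUC. Features and labels below are synthetic stand-ins for the ensemble statistics described in the abstract.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Stand-in data: one row per cell cycle protein; features are summary
    # statistics of its relative-abundance predictions across the feasible
    # parameter ensemble (e.g. mean, variance); label = network module.
    n = 60
    X = rng.normal(size=(n, 3))
    module = (X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)  # e.g. START vs EXIT

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(rf, X, module, cv=5, scoring="roc_auc")
    print("AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
    ```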

  2. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
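
    A compact sketch of the routing-and-blending logic described: the full-range model picks the composition range, and predictions from overlapping submodels are linearly blended. Synthetic spectra and scikit-learn's PLS stand in for the ChemCam calibration; the thresholds mirror the example SiO2 ranges in the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)

    # Synthetic "spectra" (200 channels) with a nonlinear dependence on
    # composition, mimicking matrix effects; y = SiO2 wt.%.
    y = rng.uniform(0, 100, size=300)
    X = np.outer(y, rng.normal(size=200)) + np.outer(np.sqrt(y), rng.normal(size=200)) \
        + rng.normal(scale=5.0, size=(300, 200))

    ranges = {"full": (0, 100), "low": (0, 50), "mid": (30, 70), "high": (60, 100)}
    models = {}
    for name, (lo, hi) in ranges.items():
        mask = (y >= lo) & (y <= hi)
        models[name] = PLSRegression(n_components=5).fit(X[mask], y[mask])

    def submodel_predict(x):
        """Route on the full-model prediction; blend linearly in overlaps."""
        x = x.reshape(1, -1)
        full = models["full"].predict(x).item()
        if full < 30:
            return models["low"].predict(x).item()
        if full < 50:                              # low/mid overlap
            w = (full - 30) / 20
            return (1 - w) * models["low"].predict(x).item() + w * models["mid"].predict(x).item()
        if full < 60:
            return models["mid"].predict(x).item()
        if full < 70:                              # mid/high overlap
            w = (full - 60) / 10
            return (1 - w) * models["mid"].predict(x).item() + w * models["high"].predict(x).item()
        return models["high"].predict(x).item()

    print(submodel_predict(X[0]), y[0])
    ```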

  3. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  4. Handling a Small Dataset Problem in Prediction Model by Employing Artificial Data Generation Approach: A Review

    NASA Astrophysics Data System (ADS)

    Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd

    2017-09-01

    The emerging era of big data has led to large and complex datasets that demand faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decisions hard to reach. Building a prediction model requires a large training sample; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
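
    One concrete member of the family of approaches reviewed is virtual-sample generation by noise injection; the sketch below works under that assumption (the review covers other generation schemes as well).

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(5)

    def true_fn(x):
        return 3.0 * x + np.sin(5 * x)

    # A deliberately small training set.
    X_small = rng.uniform(-1, 1, size=(12, 1))
    y_small = true_fn(X_small[:, 0]) + rng.normal(scale=0.1, size=12)

    def augment(X, y, n_copies=20, noise=0.05):
        """Virtual-sample generation: jitter observed points with small noise."""
        Xa = np.vstack([X + rng.normal(scale=noise, size=X.shape) for _ in range(n_copies)])
        ya = np.concatenate([y + rng.normal(scale=noise, size=y.shape) for _ in range(n_copies)])
        return np.vstack([X, Xa]), np.concatenate([y, ya])

    X_test = np.linspace(-1, 1, 200).reshape(-1, 1)
    y_test = true_fn(X_test[:, 0])

    for label, (X, y) in {"small": (X_small, y_small),
                          "augmented": augment(X_small, y_small)}.items():
        model = Ridge(alpha=1.0).fit(X, y)
        print(label, mean_squared_error(y_test, model.predict(X_test)))
    ```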

  5. A theoretical model of the application of RF energy to the airway wall and its experimental validation.

    PubMed

    Jarrard, Jerry; Wizeman, Bill; Brown, Robert H; Mitzner, Wayne

    2010-11-27

    Bronchial thermoplasty is a novel technique designed to reduce an airway's ability to contract by reducing the amount of airway smooth muscle through controlled heating of the airway wall. This method has been examined in animal models and as a treatment for asthma in human subjects. At the present time, little research has been published about how radiofrequency (RF) energy and heat are transferred to the airways of the lung during bronchial thermoplasty procedures. In this manuscript we describe a computational, theoretical model of the delivery of RF energy to the airway wall. An electro-thermal finite-element-analysis model was designed to simulate the delivery of temperature-controlled RF energy to airway walls of the in vivo lung. The model includes predictions of heat generation due to RF joule heating and transfer of heat within an airway wall due to thermal conduction. To implement the model, we use known physical characteristics and dimensions of the airway and lung tissues. The model predictions were tested with measurements of temperature, impedance, energy, and power in an experimental canine model. Model predictions of electrode temperature, voltage, and current, along with tissue impedance and delivered energy, were compared to experimental measurements and were within ± 5% of experimental averages taken over 157 sample activations. The experimental results show remarkable agreement with the model predictions, and thus validate the use of this model to predict the heat generation and transfer within the airway wall following bronchial thermoplasty. The model also demonstrated the importance of evaporation as a loss term that affected both electrical measurements and heat distribution. The model predictions showed excellent agreement with the empirical results, and thus support using the model to develop the next generation of devices for bronchial thermoplasty. Our results suggest that comparing model results to RF generator electrical measurements may be a useful tool in the early evaluation of a model.
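
    The paper's finite-element model is not reproduced here. A minimal 1D explicit finite-difference sketch shows the two ingredients the abstract names, a volumetric Joule source plus thermal conduction, with generic soft-tissue properties and a hypothetical source profile.

    ```python
    import numpy as np

    # 1D explicit finite-difference slab of airway-wall tissue heated by a
    # volumetric Joule source concentrated near the electrode face (x = 0).
    L, nx = 0.005, 51                 # 5 mm slab, grid points
    dx = L / (nx - 1)
    k, rho, cp = 0.5, 1060.0, 3600.0  # W/m/K, kg/m^3, J/kg/K (soft-tissue-like)
    alpha = k / (rho * cp)
    dt = 0.4 * dx**2 / alpha          # stable explicit step (< dx^2 / (2 alpha))

    x = np.linspace(0, L, nx)
    T = np.full(nx, 37.0)             # body temperature
    q = 5e6 * np.exp(-x / 0.001)      # Joule heating [W/m^3], decaying with depth

    t = 0.0
    while t < 10.0:                   # 10 s activation
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * (alpha * lap[1:-1] + q[1:-1] / (rho * cp))
        T[0] = T[1]                   # insulated electrode face (no-flux)
        T[-1] = 37.0                  # deep tissue held at body temperature
        t += dt

    print("peak tissue temperature: %.1f C" % T.max())
    ```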

  6. QSPR models for predicting generator-column-derived octanol/water and octanol/air partition coefficients of polychlorinated biphenyls.

    PubMed

    Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-06-01

    Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative method of replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Statistical Analysis of Complexity Generators for Cost Estimation

    NASA Technical Reports Server (NTRS)

    Rowell, Ginger Holmes

    1999-01-01

    Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.

  8. Several examples where turbulence models fail in inlet flow field analysis

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    1993-01-01

    Computational uncertainties in turbulence modeling for three-dimensional inlet flow fields include flows approaching separation, the strength of the secondary flow field, three-dimensional predictions of vortex liftoff, and the influence of vortex-boundary layer interactions; computational uncertainties in vortex generator modeling include the representation of the generator vorticity field and the relationship between the generator and its vorticity field. The objectives of the inlet flow field studies presented in this document are to advance the understanding, prediction, and control of intake distortion and to study the basic interactions that influence this design problem.

  9. The Role of Visuospatial Resources in Generating Predictive and Bridging Inferences

    ERIC Educational Resources Information Center

    Fincher-Kiefer, Rebecca; D'Agostino, Paul R.

    2004-01-01

    It has been suggested that predictive and bridging inferences are generated at different levels of text representation: predictive inferences at a reader's situation model and bridging inferences at a reader's propositional textbase (Fincher-Kiefer, 1993, 1996; McDaniel, Schmalhofer, & Keefe, 2001; Schmalhofer, McDaniel, & Keefe, 2002). Recently,…

  10. An epidemiological modeling and data integration framework.

    PubMed

    Pfeifer, B; Wurz, M; Hanser, F; Seger, M; Netzer, M; Osl, M; Modre-Osprian, R; Schreier, G; Baumgartner, C

    2010-01-01

    In this work, a cellular automaton software package for simulating different infectious diseases, storing the simulation results in a data warehouse system and analyzing the obtained results to generate prediction models as well as contingency plans is proposed. The Brisbane H3N2 flu virus, which had been spreading during the 2009 winter season, was used for simulation in the federal state of Tyrol, Austria. The simulation-modeling framework consists of an underlying cellular automaton. The cellular automaton model is parameterized by known disease parameters, and geographical as well as demographic conditions are included for simulating the spreading. The data generated by simulation are stored in the back room of the data warehouse using the Talend Open Studio software package, and subsequent statistical and data mining tasks are performed using the tool termed Knowledge Discovery in Database Designer (KD3). The obtained simulation results were used for generating prediction models for all nine federal states of Austria. The proposed framework provides a powerful and easy-to-handle interface for parameterizing and simulating different infectious diseases in order to generate prediction models and improve contingency plans for future events.
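
    A minimal epidemic cellular automaton in the same spirit (SIR states on a grid with neighbour-driven infection); the actual framework parameterizes its automaton with disease, geographic and demographic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # SIR cellular automaton: 0 = susceptible, 1 = infected, 2 = recovered.
    # Each step, susceptibles are infected with probability p_inf per infected
    # neighbour; infected cells recover after tau steps.
    n, p_inf, tau, steps = 100, 0.12, 5, 60
    state = np.zeros((n, n), dtype=int)
    clock = np.zeros((n, n), dtype=int)
    state[n // 2, n // 2] = 1          # index case in the centre

    for _ in range(steps):
        infected = (state == 1).astype(int)
        # count infected 4-neighbours (periodic boundaries via roll)
        nbrs = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0)
                + np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
        p = 1 - (1 - p_inf) ** nbrs
        new_inf = (state == 0) & (rng.random((n, n)) < p)
        clock[state == 1] += 1
        state[(state == 1) & (clock >= tau)] = 2
        state[new_inf] = 1

    print("S/I/R counts:", [(state == s).sum() for s in (0, 1, 2)])
    ```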

  11. Social cognitive predictors of first- and non-first-generation college students' academic and life satisfaction.

    PubMed

    Garriott, Patton O; Hudyma, Aaron; Keene, Chesleigh; Santiago, Dana

    2015-04-01

    The present study tested Lent's (2004) social-cognitive model of normative well-being in a sample (N = 414) of first- and non-first-generation college students. A model depicting relationships among positive affect, environmental supports, college self-efficacy, college outcome expectations, academic progress, academic satisfaction, and life satisfaction was examined using structural equation modeling. The moderating roles of perceived importance of attending college and intrinsic goal motivation were also explored. Results suggested the hypothesized model provided an adequate fit to the data while hypothesized relationships in the model were partially supported. Environmental supports predicted college self-efficacy, college outcome expectations, and academic satisfaction. Furthermore, college self-efficacy predicted academic progress while college outcome expectations predicted academic satisfaction. Academic satisfaction, but not academic progress, predicted life satisfaction. The structural model explained 44% of the variance in academic progress, 56% of the variance in academic satisfaction, and 28% of the variance in life satisfaction. Mediation analyses indicated several significant indirect effects between variables in the model while moderation analyses revealed a 3-way interaction between academic satisfaction, intrinsic motivation for attending college, and first-generation college student status on life satisfaction. Results are discussed in terms of applying the normative model of well-being to promote first- and non-first-generation college students' academic and life satisfaction. (c) 2015 APA, all rights reserved.

  12. Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marre, O.; El Boustani, S.; Fregnac, Y.

    We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models only based on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.
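
    For reference, the spatial part of such a pairwise maximum-entropy model assigns P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j). The sketch below enumerates pattern probabilities for a small population with hypothetical parameters; the paper's Markovian extension adds temporal couplings between time bins, which are omitted here.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(7)

    # Pairwise maximum-entropy (Ising) distribution over N binary neurons,
    # s_i in {0, 1}, with hypothetical biases h and couplings J.
    N = 5
    h = rng.normal(-1.0, 0.5, size=N)
    J = np.triu(rng.normal(0.0, 0.3, size=(N, N)), 1)   # upper triangle: i < j

    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    energies = patterns @ h + np.einsum("ki,ij,kj->k", patterns, J, patterns)
    P = np.exp(energies)
    P /= P.sum()                                         # normalize over all 2^N patterns

    # Probability of, e.g., the all-silent and all-active patterns:
    print("P(00000) =", P[0], " P(11111) =", P[-1])
    ```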

  13. The applicability of a computer model for predicting head injury incurred during actual motor vehicle collisions.

    PubMed

    Moran, Stephan G; Key, Jason S; McGwin, Gerald; Keeley, Jason W; Davidson, James S; Rue, Loring W

    2004-07-01

    Head injury is a significant cause of both morbidity and mortality. Motor vehicle collisions (MVCs) are the most common source of head injury in the United States. No studies have conclusively determined the applicability of computer models for accurate prediction of head injuries sustained in actual MVCs. This study sought to determine the applicability of such models for predicting head injuries sustained by MVC occupants. The Crash Injury Research and Engineering Network (CIREN) database was queried for restrained drivers who sustained a head injury. These collisions were modeled using occupant dynamic modeling (MADYMO) software, and head injury scores were generated. The computer-generated head injury scores then were evaluated with respect to the actual head injuries sustained by the occupants to determine the applicability of MADYMO computer modeling for predicting head injury. Five occupants meeting the selection criteria for the study were selected from the CIREN database. The head injury scores generated by MADYMO were lower than expected given the actual injuries sustained. In only one case did the computer analysis predict a head injury of a severity similar to that actually sustained by the occupant. Although computer modeling accurately simulates experimental crash tests, it may not be applicable for predicting head injury in actual MVCs. Many complicating factors surrounding actual MVCs make accurate computer modeling difficult. Future modeling efforts should consider variables such as age of the occupant and should account for a wider variety of crash scenarios.

  14. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other's Actions by Humans.

    PubMed

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar

    2017-01-01

    The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants' ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert's abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert's self-estimation is explained only by considering a change in the individual's forward model, showing that an improvement in an expert's ability to predict outcomes of observed actions affects the individual's forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions.

  15. [An ADAA model and its analysis method for agronomic traits based on the double-cross mating design].

    PubMed

    Xu, Z C; Zhu, J

    2000-01-01

    According to the double-cross mating design and using principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulation was conducted for analyzing the ADAA model and its reduced AD model by using different generations. It was indicated that genetic variance components could be estimated without bias by the MINQUE(1) method and genetic effects could be predicted effectively by the AUP method; at least three generations (including parent, F1 of single cross and F1 of double-cross) were necessary for analyzing the ADAA model, and only two generations (including parent and F1 of double-cross) were enough for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double-crosses was given on the basis of unbiased prediction of genotypic merits of parents and their crosses. In addition, genotype x environment interaction effects and interaction heterosis due to G x E interaction were discussed briefly.

  16. Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Machine learning (ML) is one of the fastest-developing techniques for the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust and authenticity in ML approaches. A series of prediction models has been generated and used over the last decade for rapid screening of compounds on the basis of absorption. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative to predict absorption. However, these prediction models still have far to go to earn confidence comparable to that in conventional experimental methods for estimation of drug absorption. Some of the general concerns are the selection of appropriate ML methods and validation techniques, in addition to selecting relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published ML models for the prediction of absorption using physicochemical properties as descriptors, and their important conclusions. In addition, some critical challenges in the acceptance of ML models for absorption are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  17. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
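
    A common minimal form of such a correspondence model regresses internal tumor position on the surrogate signal and its time derivative (the derivative term captures hysteresis). The sketch below uses that assumption with synthetic breathing traces; the paper's exact parameterization may differ.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(8)

    # Build a patient-specific model from ten 4DCT phases: regress the
    # superior-inferior (SI) tumor position on the external chest-marker
    # amplitude and its derivative. All traces here are synthetic.
    t_phases = np.linspace(0, 4, 10, endpoint=False)     # one 4 s breathing cycle
    surrogate = np.sin(2 * np.pi * t_phases / 4)         # marker amplitude
    surr_vel = np.gradient(surrogate, t_phases)
    tumor_si = 8.0 * surrogate + 1.5 * surr_vel          # SI motion [mm], with hysteresis

    X = np.column_stack([surrogate, surr_vel])
    model = LinearRegression().fit(X, tumor_si)

    # During treatment, only the surrogate trace is observed:
    t = np.linspace(0, 30, 300)
    s = np.sin(2 * np.pi * t / 4) * (1 + 0.1 * rng.normal(size=t.size))  # irregular breathing
    X_live = np.column_stack([s, np.gradient(s, t)])
    tumor_pred = model.predict(X_live)
    print("predicted SI excursion: %.1f mm" % (tumor_pred.max() - tumor_pred.min()))
    ```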

  18. The Cerebellum Generates Motor-to-Auditory Predictions: ERP Lesion Evidence

    ERIC Educational Resources Information Center

    Knolle, Franziska; Schroger, Erich; Baess, Pamela; Kotz, Sonja A.

    2012-01-01

    Forward predictions are crucial in motor action (e.g., catching a ball, or being tickled) but may also apply to sensory or cognitive processes (e.g., listening to distorted speech or to a foreign accent). According to the "internal forward model," the cerebellum generates predictions about somatosensory consequences of movements. These predictions…

  19. Solid waste forecasting using modified ANFIS modeling.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-10-01

    Solid waste prediction is crucial for sustainable solid waste management. Accurate waste generation records are often lacking in developing countries, which complicates the modelling process. Solid waste generation is related to demographic, economic, and social factors. However, these factors are highly varied due to population and economic growth. The objective of this research is to determine the most influential demographic and economic factors that affect solid waste generation using a systematic approach, and then develop a model to forecast solid waste generation using a modified adaptive neuro-fuzzy inference system (MANFIS). The model evaluation was performed using root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R²). The results show that the best input variables are the population age groups 0-14, 15-64, and above 65 years, and the best model structure is 3 triangular fuzzy membership functions and 27 fuzzy rules. The model has been validated using testing data; the resulting training RMSE, MAE and R² were 0.2678, 0.045 and 0.99, respectively, while for the testing phase RMSE = 3.986, MAE = 0.673 and R² = 0.98. To date, few attempts have been made to predict annual solid waste generation in developing countries. This paper presents modelling of annual solid waste generation using modified ANFIS: a systematic approach is used to search for the most influential factors, and the ANFIS structure is then modified to simplify the model. The proposed method can be used to forecast waste generation in developing countries where accurate, reliable data are not always available. Moreover, annual solid waste prediction is essential for sustainable planning.

  20. Product component genealogy modeling and field-failure prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Caleb; Hong, Yili; Meeker, William Q.

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  1. Product component genealogy modeling and field-failure prediction

    DOE PAGES

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  2. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    PubMed

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether more frequent monitoring of absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management compared with the limited clinical monitoring typically applied today. Daily ANC values in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with a reduced number of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be forecasted well 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
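
    The published population model referenced is commonly the Friberg transit-compartment myelosuppression model (proliferating pool, three transit compartments, circulating neutrophils with feedback); a sketch under that assumption, with illustrative rather than published parameter values:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Friberg-style semi-mechanistic myelosuppression model driven by a simple
    # mono-exponential drug exposure. All parameter values are illustrative.
    circ0, mtt, gamma, slope = 5.0, 90.0, 0.17, 8.0   # 1e9/L, h, -, per conc unit
    ktr = 4.0 / mtt                                    # prol + 3 transit compartments
    kel = 0.1                                          # drug elimination [1/h]

    def conc(t, dose=1.0):
        return dose * np.exp(-kel * t)                 # drug concentration proxy

    def rhs(t, y):
        prol, t1, t2, t3, circ = y
        edrug = slope * conc(t)
        feedback = (circ0 / max(circ, 1e-6)) ** gamma  # (Circ0/Circ)^gamma
        return [ktr * prol * ((1 - edrug) * feedback - 1),
                ktr * (prol - t1),
                ktr * (t1 - t2),
                ktr * (t2 - t3),
                ktr * t3 - ktr * circ]

    y0 = [circ0] * 5
    sol = solve_ivp(rhs, (0, 21 * 24), y0, max_step=1.0)   # one 21-day cycle
    anc = sol.y[4]
    print("nadir ANC %.2f at day %.1f" % (anc.min(), sol.t[anc.argmin()] / 24))
    ```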

  3. Genetic Programming as Alternative for Predicting Development Effort of Individual Software Projects

    PubMed Central

    Chavoya, Arturo; Lopez-Martin, Cuauhtemoc; Andalon-Garcia, Irma R.; Meda-Campaña, M. E.

    2012-01-01

    Statistical and genetic programming techniques have been used to predict the software development effort of large software projects. In this paper, a genetic programming model was used for predicting the effort required in individually developed projects. Accuracy obtained from a genetic programming model was compared against one generated from the application of a statistical regression model. A sample of 219 projects developed by 71 practitioners was used for generating the two models, whereas another sample of 130 projects developed by 38 practitioners was used for validating them. The models used two kinds of lines of code as well as programming language experience as independent variables. Accuracy results from the model obtained with genetic programming suggest that it could be used to predict the software development effort of individual projects when these projects have been developed in a disciplined manner within a development-controlled environment. PMID:23226305

  4. Predicting the velocity and azimuth of fragments generated by the range destruction or random failure of rocket casings and tankage

    NASA Technical Reports Server (NTRS)

    Eck, Marshall; Mukunda, Meera

    1988-01-01

    A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations are obtained for five specific events.

  5. The impacts of renewable energy policies on renewable energy sources for electricity generating capacity

    NASA Astrophysics Data System (ADS)

    Koo, Bryan Bonsuk

    Electricity generation from non-hydro renewable sources has increased rapidly in the last decade. For example, Renewable Energy Sources for Electricity (RES-E) generating capacity in the U.S. almost doubled over the three years from 2009 to 2012. Multiple papers point out that RES-E policies implemented by state governments play a crucial role in increasing RES-E generation or capacity. This study examines the effects of state RES-E policies on state RES-E generating capacity using a fixed effects model. The research employs panel data from the 50 states and the District of Columbia for the period 1990 to 2011, and uses a two-stage approach to control for the endogeneity embedded in the policies adopted by state governments, and a Prais-Winsten estimator to correct for autocorrelation in the panel data. The analysis finds that Renewable Portfolio Standards (RPS) and Net-metering are significantly and positively associated with RES-E generating capacity, but neither Public Benefit Funds nor the Mandatory Green Power Option has a statistically significant relation to RES-E generating capacity. Results of the two-stage model are quite different from models that do not employ predicted policy variables. Analysis using non-predicted variables finds that RPS and Net-metering policy are statistically insignificant and negatively associated with RES-E generating capacity. On the other hand, Green Energy Purchasing policy is insignificant in the two-stage model, but significant in the model without predicted values.

  6. Modeling and Predicting Cancer from ToxCast Phase I Data

    EPA Science Inventory

    The ToxCast program is generating a diverse collection of in vitro cell free and cell based HTS data to be used for predictive modeling of in vivo toxicity. We are using this in vitro data, plus corresponding in vivo data from ToxRefDB, to develop models for prediction and priori...

  7. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.

  8. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  9. Using next generation transcriptome sequencing to predict an ectomycorrhizal metabolome.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, P. E.; Sreedasyam, A.; Trivedi, G

    Mycorrhizae, symbiotic interactions between soil fungi and tree roots, are ubiquitous in terrestrial ecosystems. The fungi contribute phosphorus, nitrogen and mobilized nutrients from organic matter in the soil and in return the fungus receives photosynthetically-derived carbohydrates. This union of plant and fungal metabolisms is the mycorrhizal metabolome. Understanding this symbiotic relationship at a molecular level provides important contributions to the understanding of forest ecosystems and global carbon cycling. We generated next generation short-read transcriptomic sequencing data from fully-formed ectomycorrhizae between Laccaria bicolor and aspen (Populus tremuloides) roots. The transcriptomic data was used to identify statistically significantly expressed gene models using a bootstrap-style approach, and these expressed genes were mapped to specific metabolic pathways. Integration of expressed genes that code for metabolic enzymes and the set of expressed membrane transporters generates a predictive model of the ectomycorrhizal metabolome. The generated model of the mycorrhizal metabolome predicts that the specific compounds glycine, glutamate, and allantoin are synthesized by L. bicolor and that these compounds or their metabolites may be used for the benefit of aspen in exchange for the photosynthetically-derived sugars fructose and glucose. The analysis illustrates an approach to generate testable biological hypotheses to investigate the complex molecular interactions that drive ectomycorrhizal symbiosis. These models are consistent with experimental environmental data and provide insight into the molecular exchange processes for organisms in this complex ecosystem. The method used here for predicting metabolomic models of mycorrhizal systems from deep RNA sequencing data can be generalized and is broadly applicable to transcriptomic data derived from complex systems.

  10. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other’s Actions by Humans

    PubMed Central

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar

    2017-01-01

    The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants’ ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert’s abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert’s self-estimation is explained only by considering a change in the individual’s forward model, showing that an improvement in an expert’s ability to predict outcomes of observed actions affects the individual’s forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions. PMID:29340300

  11. Predictive models of radiative neutrino masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julio, J., E-mail: julio@lipi.go.id

    2016-06-21

    We discuss two models of radiative neutrino mass generation. The first model features a one-loop Zee model with Z(4) symmetry. The second model is a two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.

  12. Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)

    2001-01-01

    Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
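
    A linear damage accumulation rule of the simplest kind sums the fraction of life consumed at each stress level and predicts failure when the sum reaches one. In the sketch below, the power-law life curve is a hypothetical stand-in for a fit to short-term failure data, not the paper's fitted model.

    ```python
    # Linear (Miner-type) cumulative damage under a sequence of constant-stress
    # dwells: damage accrued in each dwell is hours / t_f(stress), where t_f is
    # the time-to-failure at that stress; failure is predicted when the sum >= 1.

    def time_to_failure(stress_mpa, A=1e6, n=4.0):
        """Hypothetical power-law life curve: t_f = A * stress^-n (hours)."""
        return A * stress_mpa ** -n

    def cumulative_damage(load_history):
        """load_history: list of (stress_mpa, hours) dwells."""
        return sum(hours / time_to_failure(s) for s, hours in load_history)

    # Example: a long residual-stress dwell plus a short proof load.
    history = [(5.0, 2000.0), (12.0, 50.0), (5.0, 8000.0)]
    D = cumulative_damage(history)
    print("damage index D = %.3f -> %s" % (D, "fails" if D >= 1 else "survives"))
    ```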

  13. Deriving Points of Departure and Performance Baselines for Predictive Modeling of Systemic Toxicity using ToxRefDB (SOT)

    EPA Science Inventory

    A primary goal of computational toxicology is to generate predictive models of toxicity. An elusive target of alternative test methods and models has been the accurate prediction of systemic toxicity points of departure (PoD). We aim not only to provide a large and valuable resou...

  14. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Treesearch

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
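
    A minimal k-nn prediction setup of the kind these estimators target, with synthetic stand-ins for the field plots and remotely sensed pixels:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(9)

    # k-nn prediction of a forest attribute Y (e.g. stem volume) for each pixel
    # from remotely sensed ancillary data X (e.g. spectral bands); field plots
    # with known Y serve as the reference set. Data here are synthetic.
    X_plots = rng.uniform(0, 1, size=(200, 3))            # plot-level bands
    y_plots = 150 * X_plots[:, 0] + 40 * X_plots[:, 1] + rng.normal(0, 10, 200)

    knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_plots, y_plots)

    X_pixels = rng.uniform(0, 1, size=(10_000, 3))        # map pixels
    y_map = knn.predict(X_pixels)                         # pixel-level predictions
    print("areal mean prediction:", y_map.mean().round(1))
    ```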

  15. Natural analogs in the petroleum industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, J.R.

    1995-09-01

    This article describes the use of natural analogues in petroleum exploration and includes numerous geologic model descriptions which have historically been used in the prediction of geometries and location of oil and gas accumulations. These geologic models have been passed down to and used by succeeding generations of petroleum geologists. Some examples of these geologic models include the Allan fault-plane model, porosity prediction, basin modelling, prediction of basin compartmentalization, and diagenesis.

  16. Silent Expectations: Dynamic Causal Modeling of Cortical Prediction and Attention to Sounds That Weren't.

    PubMed

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Shtyrov, Yury; Bekinschtein, Tristan A; Henson, Richard

    2016-08-10

    There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction. Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known "mismatch response." But what happens when a stimulus is unexpectedly omitted altogether? By measuring the brain's electromagnetic activity, we show that it also generates an "omission response" that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain's prediction of the future. Copyright © 2016 Chennu et al.

  17. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this code, and good agreement is achieved between the simulation results and the measurements. Furthermore, this fast calculation model can easily be applied, for design purposes, to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections.
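
    The circuit idea in this abstract can be sketched compactly: a collapsing inductance L(t) drives current gain, and a flux conservation coefficient k < 1 on the driving term stands in for intrinsic flux losses. The Python sketch below is illustrative only; every parameter value is invented, and the paper's zero-dimensional inductance model is replaced by a simple linear ramp.

```python
# Toy flux compression generator circuit: d(LI)/dt = -R*I, with a flux
# conservation coefficient k applied to the driving term dL/dt to model losses.
# All values are illustrative, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

L0, L_load, R = 10e-6, 0.1e-6, 0.5e-3   # initial/load inductance (H), resistance (ohm)
t_burn, I0, k = 100e-6, 10e3, 0.9       # burn time (s), seed current (A), flux coefficient

def L(t):                                # generator inductance collapses linearly
    return L_load + (L0 - L_load) * max(0.0, 1.0 - t / t_burn)

def dLdt(t):
    return -(L0 - L_load) / t_burn if t < t_burn else 0.0

def rhs(t, y):                           # L dI/dt = -(k*dL/dt + R) I
    return [-(k * dLdt(t) + R) * y[0] / L(t)]

sol = solve_ivp(rhs, [0.0, t_burn], [I0], max_step=1e-7)
print(f"current gain: {sol.y[0, -1] / I0:.1f}x")
```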

  18. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps in properly managing the generated solid waste and in developing future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that an ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  19. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps in properly managing the generated solid waste and in developing future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that an ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
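
    Records 18 and 19 describe the same ARIMA (6,1,0) model. As a rough illustration of the workflow, a minimal sketch fitting that model order to a synthetic monthly series with statsmodels and scoring a 12-month forecast is shown below; the data and resulting error value are not the paper's.

```python
# Minimal sketch: fit ARIMA(6,1,0) to a synthetic monthly waste series and
# evaluate a 12-month-ahead forecast by RMSE. Illustration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("2005-01", periods=120, freq="MS")
# synthetic monthly generation rate with a mild trend plus noise
waste = pd.Series(0.8 + 0.002 * np.arange(120) + rng.normal(0, 0.05, 120), index=months)

train, test = waste[:-12], waste[-12:]
result = ARIMA(train, order=(6, 1, 0)).fit()
forecast = result.forecast(steps=12)

rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(f"12-month forecast RMSE: {rmse:.4f}")
```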

  20. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).
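
    The iterative electro-thermal coupling described above can be illustrated with a toy loop: a voltage model and a lumped thermal model exchange a temperature-dependent resistance at each time step. This is not the authors' model; every parameter below is invented for illustration.

```python
# Toy coupled electro-thermal cell model: the electrical side sees the current
# temperature through R(T); the thermal side is driven by Joule heating.
import numpy as np

def resistance(T):           # temperature-dependent internal resistance (ohm), invented
    return 0.002 * np.exp(500.0 * (1.0 / (T + 273.15) - 1.0 / 298.15))

dt, I = 1.0, 50.0            # time step (s), discharge current (A)
m_cp, hA = 900.0, 1.5        # heat capacity (J/K), heat transfer coefficient (W/K)
T_cell, T_amb, ocv = 25.0, 25.0, 3.7

for step in range(3600):     # one hour of constant-current discharge
    R = resistance(T_cell)            # electrical sub-model uses current temperature
    v_term = ocv - I * R              # terminal voltage
    q_gen = I**2 * R                  # irreversible (Joule) heat generation (W)
    # thermal sub-model updates the temperature fed back on the next iteration
    T_cell += dt * (q_gen - hA * (T_cell - T_amb)) / m_cp

print(f"terminal voltage {v_term:.3f} V, core temperature {T_cell:.1f} degC")
```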

  1. Human and Server Docking Prediction for CAPRI Round 30–35 Using LZerD with Combined Scoring Functions

    PubMed Central

    Peterson, Lenna X.; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2016-01-01

    We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues’ spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, i.e. whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. PMID:27654025
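
    One takeaway above is that agreement among multiple scoring functions signals whether a docking pool contains near-native models. A minimal sketch of that idea, with random stand-in scores in place of real scoring functions:

```python
# If independent scoring functions rank a pool of docking models consistently
# (high rank correlation), the pool is more likely to contain near-native models.
# Scores here are synthetic stand-ins for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)
n_models = 200
quality = rng.random(n_models)                     # latent quality (unknown in practice)
score_a = quality + rng.normal(0, 0.2, n_models)   # two noisy scoring functions
score_b = quality + rng.normal(0, 0.2, n_models)

rho, _ = spearmanr(score_a, score_b)
print(f"inter-score rank correlation: {rho:.2f} (higher suggests a better pool)")
```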

  2. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.

  3. Wind farms production: Control and prediction

    NASA Astrophysics Data System (ADS)

    El-Fouly, Tarek Hussein Mostafa

    Wind energy resources, unlike dispatchable central station generation, produce power that depends on an external, irregular source: the incident wind, which does not always blow when electricity is needed. This results in the variability, unpredictability, and uncertainty of wind resources. Therefore, the integration of wind facilities into the utility electrical grid presents a major challenge to power system operators. Such integration has a significant impact on optimum power flow, transmission congestion, power quality issues, system stability, load dispatch, and economic analysis. Due to the irregular nature of wind power production, accurate prediction represents the major challenge to power system operators. Therefore, in this thesis two novel models are proposed for wind speed and wind power prediction: one dedicated to short-term prediction (one hour ahead) and the other to medium-term prediction (one day ahead). The accuracy of the proposed models is assessed by comparing their results with the corresponding values of a reference prediction model referred to as the persistent model. Utility grid operation is impacted not only by the uncertainty of the future production of wind farms, but also by the variability of their current production and how the active and reactive power exchange with the grid is controlled. To address this particular task, a control technique for wind turbines driven by doubly-fed induction generators (DFIGs) is developed to regulate the terminal voltage by equally sharing the generated/absorbed reactive power between the rotor-side and the grid-side converters. To highlight the impact of the newly developed technique in reducing power loss in the generator set, an economic analysis is carried out. Moreover, a new aggregated model for wind farms is proposed that accounts for the irregularity of the incident wind distribution throughout the farm layout. Specifically, this model includes the wake effect and the time delay of the incident wind speed of the different turbines on the farm, in order to simulate the fluctuation in the generated power more accurately and closer to real-time operation. Recently, wind farms with considerable output power ratings have been installed, and their integration into the utility grid will substantially affect electricity markets. This thesis investigates the possible impact of wind power variability, wind farm control strategy, wind energy penetration level, wind farm location, and wind power prediction accuracy on total generation costs and close-to-real-time electricity market prices. These issues are addressed by developing a single auction market model for determining real-time electricity market prices.
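
    The persistent model used as the reference above is simple enough to state in a few lines: the forecast for the next hour is the current observation. A sketch with synthetic wind speeds (values are illustrative):

```python
# Persistence baseline for one-hour-ahead wind forecasting: the forecast at
# hour t+1 equals the observation at hour t. Any proposed model should beat
# this baseline's error. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
wind = 8 + 2 * np.sin(np.arange(500) / 24) + rng.normal(0, 0.8, 500)  # m/s

persistence_forecast = wind[:-1]          # forecast for hour t+1 is the value at t
actual = wind[1:]
rmse = np.sqrt(np.mean((persistence_forecast - actual) ** 2))
print(f"persistence RMSE: {rmse:.2f} m/s")
```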

  4. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    NASA Astrophysics Data System (ADS)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the shape and arrival time of the measured puff fairly well, up to 60 h after the start of the release, although the modeled puff remains too narrow in the advection direction.

  5. Building a Better Fragment Library for De Novo Protein Structure Prediction

    PubMed Central

    de Oliveira, Saulo H. P.; Shi, Jiye; Deane, Charlotte M.

    2015-01-01

    Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated, in particular the exclusion of homologs to the target from the libraries to correctly simulate a de novo protein structure prediction scenario, something which surprisingly is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and higher coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. Flib is available for download at http://www.stats.ox.ac.uk/research/proteins/resources. PMID:25901595
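
    Precision and coverage, the two quantities used above to compare fragment libraries, can be defined generically as follows; the RMSD cutoff and the exact definitions here are common conventions, not necessarily Flib's:

```python
# Generic fragment-library quality metrics: a candidate fragment "hits" if its
# RMSD to the native fragment is below a cutoff. Data are random stand-ins.
import numpy as np

def library_quality(rmsd, cutoff=1.5):
    """rmsd: (n_positions, n_fragments) array of RMSDs of each candidate
    fragment to the native structure at each target position."""
    hits = rmsd < cutoff
    precision = hits.mean()                 # fraction of fragments that are good
    coverage = hits.any(axis=1).mean()      # fraction of positions with >= 1 good fragment
    return precision, coverage

rng = np.random.default_rng(2)
p, c = library_quality(rng.gamma(2.0, 1.0, size=(100, 20)))
print(f"precision {p:.2f}, coverage {c:.2f}")
```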

  6. SU-E-J-234: Application of a Breathing Motion Model to ViewRay Cine MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connell, D. P.; Thomas, D. H.; Dou, T. H.

    2015-06-15

    Purpose: A respiratory motion model previously used to generate breathing-gated CT images was used with cine MR images. The accuracy and predictive ability of the in-plane models were evaluated. Methods: Sagittal-plane cine MR images of a patient undergoing treatment on a ViewRay MRI/radiotherapy system were acquired before and during treatment. Images were acquired at 4 frames/second with 3.5 × 3.5 mm resolution and a slice thickness of 5 mm. The first cine frame was deformably registered to the following frames. The superior/inferior component of the tumor centroid position was used as a breathing surrogate. Deformation vectors and surrogate measurements were used to determine motion model parameters. Model error was evaluated, and subsequent treatment cines were predicted from breathing surrogate data. A simulated CT cine was created by generating breathing-gated volumetric images at 0.25 second intervals along the measured breathing trace, selecting a sagittal slice and downsampling to the resolution of the MR cines. A motion model was built using the first half of the simulated cine data. Model accuracy and error in predicting the remaining frames of the cine were evaluated. Results: The mean difference between model-predicted and deformably registered lung tissue positions for the 28 second preview MR cine acquired before treatment was 0.81 ± 0.30 mm. The model was used to predict two minutes of the subsequent treatment cine with a mean accuracy of 1.59 ± 0.63 mm. Conclusion: In-plane motion models were built using MR cine images and evaluated for accuracy and ability to predict future respiratory motion from breathing surrogate measurements. Examination of long-term predictive ability is ongoing. The technique was applied to simulated CT cines for further validation, and the authors are currently investigating the use of in-plane models to update pre-existing volumetric motion models used for generation of breathing-gated CT planning images.

  7. Network approaches for expert decisions in sports.

    PubMed

    Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus

    2012-04-01

    This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neural network model and an accumulator model, in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models, including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall the classes of models performed about equally well. Early fixations were particularly predictive of choices. We conclude that the analysis of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provides perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Genomic Prediction Accounting for Residual Heteroskedasticity

    PubMed Central

    Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.

    2015-01-01

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
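
    The paper defines predictive ability as the correlation between predicted and observed phenotypes in five-fold cross-validation. A generic sketch of that metric follows, using ridge regression on simulated markers as a placeholder for the WGP models above:

```python
# Predictive ability = correlation between cross-validated predictions and
# observed phenotypes. Ridge regression stands in for a WGP model; data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(400, 1000)).astype(float)        # SNP genotypes coded 0/1/2
beta = rng.normal(0, 0.1, 1000) * (rng.random(1000) < 0.05)   # sparse marker effects
y = X @ beta + rng.normal(0, 1.0, 400)                        # simulated phenotypes

preds = np.empty_like(y)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    preds[test] = Ridge(alpha=100.0).fit(X[train], y[train]).predict(X[test])

print(f"predictive ability: {np.corrcoef(preds, y)[0, 1]:.2f}")
```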

  9. Prospects for Genomic Selection in Cassava Breeding.

    PubMed

    Wolfe, Marnin D; Del Carpio, Dunia Pino; Alabi, Olumide; Ezenwaka, Lydia C; Ikeogu, Ugochukwu N; Kayondo, Ismail S; Lozano, Roberto; Okeke, Uche G; Ozimati, Alfred A; Williams, Esuma; Egesi, Chiedozie; Kawuki, Robert S; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc

    2017-11-01

    Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop in the tropics. Genomic selection (GS) has been implemented at three breeding institutions in Africa to reduce cycle times. Initial studies provided promising estimates of predictive abilities. Here, we expand on previous analyses by assessing the accuracy of seven prediction models for seven traits in three prediction scenarios: cross-validation within populations, cross-population prediction and cross-generation prediction. We also evaluated the impact of increasing the training population (TP) size by phenotyping progenies selected either at random or with a genetic algorithm. Cross-validation results were mostly consistent across programs, with nonadditive models predicting about 10% better on average. Cross-population accuracy was generally low (mean = 0.18), but prediction of cassava mosaic disease increased by up to 57% in one Nigerian population when data from another related population were combined. Accuracy across generations was poorer than within-generation accuracy, as expected, but accuracy for dry matter content and mosaic disease severity should be sufficient for rapid-cycling GS. The choice of prediction model made some difference across generations, but increasing TP size was more important. With a genetic algorithm, selection of one-third of progeny could achieve an accuracy equivalent to phenotyping all progeny. We are in the early stages of GS for this crop, but the results are promising for some traits. The general guidelines that are emerging are that TPs need to continue to grow, but phenotyping can be done on a cleverly selected subset of individuals, reducing the overall phenotyping burden. Copyright © 2017 Crop Science Society of America.

  10. Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes.

    PubMed

    Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan

    2005-08-01

    This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).

  11. Predicting intensity ranks of peptide fragment ions.

    PubMed

    Frank, Ari M

    2009-05-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
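
    As a rough illustration of the boosting approach described above (not PepNovo+'s actual model or features), the sketch below trains a gradient-boosting regressor on invented sequence-derived features and uses it to order fragment ions by predicted intensity:

```python
# Illustrative ranking-by-regression sketch: boost simple per-ion features
# into a score, then sort ions by predicted score. Features are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
# toy per-ion features: relative position in peptide, basic-residue flag, peptide length
X = np.column_stack([rng.random(2000),
                     rng.integers(0, 2, 2000),
                     rng.integers(7, 25, 2000)])
rank_score = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.5, 2000)  # latent score

model = GradientBoostingRegressor().fit(X[:1500], rank_score[:1500])
predicted_order = np.argsort(-model.predict(X[1500:]))   # ions ordered by predicted intensity
print(predicted_order[:10])
```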

  12. Predicting Intensity Ranks of Peptide Fragment Ions

    PubMed Central

    Frank, Ari M.

    2009-01-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal MRM transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html. PMID:19256476

  13. The Motivational Determinants of Task Performance in a Non-Industrial Milieu: A Modification and Extension of Vroom's Model.

    ERIC Educational Resources Information Center

    Mendel, Raymond M.; Dickinson, Terry L.

    Vroom's cognitive model, which proposes to both explain and predict an individual's level of work productivity by drawing on the construct motivation, is discussed and three hypotheses generated: (1) that Vroom's model does predict performance in a non-industrial setting; (2) that it predicts self-perceived performance better than measures…

  14. Madden–Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer

    PubMed Central

    Miyakawa, Tomoki; Satoh, Masaki; Miura, Hiroaki; Tomita, Hirofumi; Yashiro, Hisashi; Noda, Akira T.; Yamada, Yohei; Kodama, Chihiro; Kimoto, Masahide; Yoneyama, Kunio

    2014-01-01

    Global cloud/cloud system-resolving models are perceived to perform well in the prediction of the Madden–Julian Oscillation (MJO), a huge eastward-propagating atmospheric pulse that dominates intraseasonal variation of the tropics and affects the entire globe. However, owing to model complexity, detailed analysis is limited by computational power. Here we carry out a simulation series using a recently developed supercomputer, which enables the statistical evaluation of the MJO prediction skill of a costly new-generation model in a manner similar to operational forecast models. We estimate the current MJO predictability of the model as 27 days by conducting simulations including all winter MJO cases identified during 2003–2012. The simulated precipitation patterns associated with different MJO phases compare well with observations. An MJO case captured in a recent intensive observation is also well reproduced. Our results reveal that the global cloud-resolving approach is effective in understanding the MJO and in providing month-long tropical forecasts. PMID:24801254

  15. Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers

    NASA Astrophysics Data System (ADS)

    Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan

    2007-09-01

    Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-ageing stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall ageing performance. To model and extrapolate the ageing data, a relaxation correlation function attributed to A. K. Jonscher was compared to the well-established stretched exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate in later years. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. Thus, from a practical standpoint, time-temperature superposition must be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the ageing temperatures.

  16. Madden-Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer.

    PubMed

    Miyakawa, Tomoki; Satoh, Masaki; Miura, Hiroaki; Tomita, Hirofumi; Yashiro, Hisashi; Noda, Akira T; Yamada, Yohei; Kodama, Chihiro; Kimoto, Masahide; Yoneyama, Kunio

    2014-05-06

    Global cloud/cloud system-resolving models are perceived to perform well in the prediction of the Madden-Julian Oscillation (MJO), a huge eastward-propagating atmospheric pulse that dominates intraseasonal variation of the tropics and affects the entire globe. However, owing to model complexity, detailed analysis is limited by computational power. Here we carry out a simulation series using a recently developed supercomputer, which enables the statistical evaluation of the MJO prediction skill of a costly new-generation model in a manner similar to operational forecast models. We estimate the current MJO predictability of the model as 27 days by conducting simulations including all winter MJO cases identified during 2003-2012. The simulated precipitation patterns associated with different MJO phases compare well with observations. An MJO case captured in a recent intensive observation is also well reproduced. Our results reveal that the global cloud-resolving approach is effective in understanding the MJO and in providing month-long tropical forecasts.

  17. Assessment of a remote sensing-based model for predicting malaria transmission risk in villages of Chiapas, Mexico

    NASA Technical Reports Server (NTRS)

    Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.

    1997-01-01

    A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.

  18. Forecasting municipal solid waste generation using prognostic tools and regression analysis.

    PubMed

    Ghinea, Cristina; Drăgoi, Elena Niculina; Comăniţă, Elena-Diana; Gavrilescu, Marius; Câmpean, Teofil; Curteanu, Silvia; Gavrilescu, Maria

    2016-11-01

    For adequate planning of waste management systems, accurate forecasting of waste generation is an essential step, since various factors can affect waste trends. Predictive and prognostic models are useful tools that provide reliable support for decision-making processes. In this paper, indicators such as the number of residents, population age, urban life expectancy and total municipal solid waste were used as input variables in prognostic models in order to predict the amount of solid waste fractions. We applied the Waste Prognostic Tool, regression analysis and time series analysis to forecast municipal solid waste generation and composition, considering Iasi, Romania as a case study. Regression equations were determined for six solid waste fractions (paper, plastic, metal, glass, biodegradable and other waste). Accuracy measures were calculated, and the results showed that the S-curve trend model is the most suitable for municipal solid waste (MSW) prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Use of mobile and passive badge air monitoring data for NOX and ozone air pollution spatial exposure prediction models.

    PubMed

    Xu, Wei; Riley, Erin A; Austin, Elena; Sasakura, Miyoko; Schaal, Lanae; Gould, Timothy R; Hartin, Kris; Simpson, Christopher D; Sampson, Paul D; Yost, Michael G; Larson, Timothy V; Xiu, Guangli; Vedal, Sverre

    2017-03-01

    Air pollution exposure prediction models can make use of many types of air monitoring data. Fixed-location passive samplers typically measure concentrations averaged over several days to weeks, whereas mobile monitoring can generate near-continuous concentration measurements. It is not known whether mobile monitoring data are suitable for generating well-performing exposure prediction models, or how they compare with other types of monitoring data in generating exposure models. Measurements from fixed-site passive samplers and a mobile monitoring platform were made over a 2-week period in Baltimore in the summer and winter months of 2012. The performance of exposure prediction models for long-term nitrogen oxides (NOx) and ozone (O3) concentrations was compared using a state-of-the-art approach for model development based on land use regression (LUR) and geostatistical smoothing. Model performance was evaluated using leave-one-out cross-validation (LOOCV). Models performed well using the mobile peak-traffic monitoring data for both NOx and O3, with LOOCV R² values of 0.70 and 0.71, respectively, in the summer, and 0.90 and 0.58, respectively, in the winter. Models using 2-week passive samples for NOx had LOOCV R² values of 0.60 and 0.65 in the summer and winter months, respectively. The passive badge sampling data were not adequate for developing models for O3. Mobile air monitoring data can be used to successfully build well-performing LUR exposure prediction models for NOx and O3, and are a better source of data for these models than 2-week passive badge data.
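
    Leave-one-out cross-validation, the evaluation used above, refits the model once per monitoring site and predicts the held-out site. A sketch with stand-in covariates in place of real land-use variables (road length, distance to source, and the like):

```python
# LOOCV R^2 for a simple land-use-regression-style model: refit with one site
# held out each time, then score the pooled out-of-sample predictions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 3))                        # stand-in land-use covariates at 60 sites
nox = 30 + X @ np.array([8.0, -3.0, 2.0]) + rng.normal(0, 4.0, 60)

pred = np.empty_like(nox)
for train, test in LeaveOneOut().split(X):
    pred[test] = LinearRegression().fit(X[train], nox[train]).predict(X[test])

r2 = 1 - np.sum((nox - pred) ** 2) / np.sum((nox - nox.mean()) ** 2)
print(f"LOOCV R^2: {r2:.2f}")
```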

  20. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  1. Virulo

    EPA Science Inventory

    Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree of attenuation.

  2. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; the multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
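
    The pipeline described above, principal components of correlated predictors feeding a multilayer perceptron trained by backpropagation, can be sketched with scikit-learn; the synthetic data below merely mimic a multicollinear predictor block and an ozone anomaly target:

```python
# PCA-then-MLP pipeline: components decorrelate the predictors before the
# network is trained by backpropagation. Data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 8))
X[:, 4:] = X[:, :4] + rng.normal(0, 0.1, size=(500, 4))   # multicollinear block
ozone = 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 5, 500)  # anomaly target

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
model.fit(X[:400], ozone[:400])
print(f"holdout correlation: {np.corrcoef(model.predict(X[400:]), ozone[400:])[0, 1]:.2f}")
```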

  3. A novel methodology to estimate the evolution of construction waste in construction sites.

    PubMed

    Katz, Amnon; Baum, Hadassa

    2011-02-01

    This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that predicts the flow of construction waste. The amount of waste and its constituents produced on 10 relatively large construction sites (7000-32,000 m² of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m³ per 1 m² of floor area. A good correlation was found between the model predictions and actual data from the field survey. Copyright © 2010 Elsevier Ltd. All rights reserved.
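
    The exponential accumulation pattern can be illustrated by fitting a simple saturating-exponential curve to cumulative waste observations; the functional form and every number below are invented for illustration and are not the paper's fitted model:

```python
# Fit an exponential accumulation curve to cumulative construction waste:
# slow growth early in the project, fast growth toward completion.
import numpy as np
from scipy.optimize import curve_fit

progress = np.linspace(0.1, 1.0, 10)   # fraction of project completed
# synthetic observations: 0.2 m^3/m^2 on a hypothetical 9000 m^2 building
observed = 0.2 * 9000 * (np.exp(2.5 * progress) - 1) / (np.exp(2.5) - 1)

def cumulative_waste(p, total, rate):
    return total * (np.exp(rate * p) - 1) / (np.exp(rate) - 1)

(total, rate), _ = curve_fit(cumulative_waste, progress, observed, p0=(5000, 1.0))
print(f"projected total waste: {total:.0f} m^3, growth rate: {rate:.2f}")
```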

  4. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict the future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model: reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.

  5. Selective interference with image retention and generation: evidence for the workspace model.

    PubMed

    van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio

    2009-08-01

    We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.

  6. A PROBABILISTIC POPULATION EXPOSURE MODEL FOR PM10 AND PM 2.5

    EPA Science Inventory

    A first-generation probabilistic population exposure model for particulate matter (PM), specifically for predicting PM10 and PM2.5 exposures of an urban population, has been developed. This model is intended to be used to predict exposure (magnitude, frequency, and duration) ...

  7. A novel method for predicting the power outputs of wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, Yingguang

    2018-03-01

    This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model in this article can be utilized to generate irregular waves with realistic crest-trough asymmetries, and consequently more accurate generated power can be predicted by subsequently solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can be utilized as a robust tool by ocean engineers in the design, analysis and optimization of wave energy converters.

  8. Numerical Simulations of Vortex Generator Vanes and Jets on a Flat Plate

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Yao, Chung-Sheng; Lin, John C.

    2002-01-01

    Numerical simulations of a single low-profile vortex generator vane, which is only a small fraction of the boundary-layer thickness, and a vortex generating jet have been performed for flows over a flat plate. The numerical simulations were computed by solving the steady-state solution to the Reynolds-averaged Navier-Stokes equations. The vortex generating vane results were evaluated by comparing the strength and trajectory of the streamwise vortex to experimental particle image velocimetry measurements. From the numerical simulations of the vane case, it was observed that the Shear-Stress Transport (SST) turbulence model resulted in a better prediction of the streamwise peak vorticity and trajectory when compared to the Spalart-Allmaras (SA) turbulence model. It is shown in this investigation that the estimated turbulent eddy viscosity near the vortex core, for both the vane and jet simulations, was higher for the SA model than for the SST model. Even though the numerical simulations of the vortex generating vane were able to predict the trajectory of the streamwise vortex, the initial magnitude and decay of the peak streamwise vorticity were significantly underpredicted. A comparison of the positive circulation associated with the streamwise vortex showed that while the numerical simulations produced a more diffused vortex, the vortex strength compared very well to the experimental observations. A grid resolution study for the vortex generating vane was also performed, showing that the diffusion of the vortex was not a result of insufficient grid resolution. Comparisons were also made between a fully modeled trapezoidal vane with finite thickness and a simply modeled rectangular thin vane. The comparisons showed that the simply modeled rectangular vane produced a streamwise vortex with a strength and trajectory very similar to those of the fully modeled trapezoidal vane.

  9. Human and server docking prediction for CAPRI round 30-35 using LZerD with combined scoring functions.

    PubMed

    Peterson, Lenna X; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2017-03-01

    We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues' spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, that is, whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. Proteins 2017; 85:513-527. © 2016 Wiley Periodicals, Inc.

  10. The association between early generative concern and caregiving with friends from early to middle adolescence.

    PubMed

    Lawford, Heather L; Doyle, Anna-Beth; Markiewicz, Dorothy

    2013-12-01

    Generativity, defined as concern for future generations, is theorized to become a priority in midlife, preceded by a stage in which intimacy is the central issue. Recent research, however, has found evidence of generativity even in adolescence. This longitudinal study explored the associations between caregiving in friendships, closely related to intimacy, and early generative concern in a young adolescent sample. Given the importance of close friendships in adolescence, it was hypothesized that responsive caregiving in early adolescent friendships would predict later generative concern. Approximately 140 adolescents (56 % female, aged 14 at Time 1) completed questionnaires regarding generative concern and responsive caregiving with friends yearly across 2 years. Structural equation modeling revealed that caregiving predicted generative concern 1 year later but generative concern did not predict later caregiving. These results suggest that caregiving in close friendships plays an important role in the development of adolescents' motivation to contribute to future generations.

  11. Genetic determinants of freckle occurrence in the Spanish population: Towards ephelides prediction from human DNA samples.

    PubMed

    Hernando, Barbara; Ibañez, Maria Victoria; Deserio-Cuesta, Julio Alberto; Soria-Navarro, Raquel; Vilar-Sastre, Inca; Martinez-Cadenas, Conrado

    2018-03-01

    Prediction of human pigmentation traits, one of the most differentiable externally visible characteristics among individuals, from biological samples represents a useful tool in the field of forensic DNA phenotyping. In spite of freckling being a relatively common pigmentation characteristic in Europeans, little is known about the genetic basis of this largely genetically determined phenotype in southern European populations. In this work, we explored the predictive capacity of eight freckle and sunlight sensitivity-related genes in 458 individuals (266 non-freckled controls and 192 freckled cases) from Spain. Four loci were associated with freckling (MC1R, IRF4, ASIP and BNC2), and female sex was also found to be a predictive factor for having a freckling phenotype in our population. After identifying the most informative genetic variants responsible for human ephelides occurrence in our sample set, we developed a DNA-based freckle prediction model using a multivariate regression approach. Once developed, the capabilities of the prediction model were tested by a repeated 10-fold cross-validation approach. The proportion of correctly predicted individuals using the DNA-based freckle prediction model was 74.13%. The implementation of sex into the DNA-based freckle prediction model slightly improved the overall prediction accuracy by 2.19% (76.32%). Further evaluation of the newly-generated prediction model was performed by assessing the model's performance in a new cohort of 212 Spanish individuals, reaching a classification success rate of 74.61%. Validation of this prediction model may be carried out in larger populations, including samples from different European populations. Further research to validate and improve this newly-generated freckle prediction model will be needed before its forensic application. Together with DNA tests already validated for eye and hair colour prediction, this freckle prediction model may lead to a substantially more detailed physical description of unknown individuals from DNA found at the crime scene. Copyright © 2017 Elsevier B.V. All rights reserved.
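
    A generic sketch of this kind of DNA-based binary phenotype predictor, logistic regression on a few SNP genotypes plus sex evaluated by repeated 10-fold cross-validation, is shown below; the loci, effect sizes and sample are invented, not those of the paper:

```python
# Binary phenotype prediction from SNP genotypes plus sex, scored by repeated
# stratified 10-fold cross-validation. All data and effects are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
snps = rng.integers(0, 3, size=(458, 8)).astype(float)   # 8 candidate loci, coded 0/1/2
sex = rng.integers(0, 2, size=(458, 1)).astype(float)    # 1 = female
X = np.hstack([snps, sex])
logit = -1.0 + 0.9 * snps[:, 0] + 0.5 * snps[:, 1] + 0.4 * sex[:, 0]
freckled = (rng.random(458) < 1 / (1 + np.exp(-logit))).astype(int)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, freckled, cv=cv).mean()
print(f"repeated 10-fold CV accuracy: {acc:.2%}")
```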

  12. Software Tools for Weed Seed Germination Modeling

    USDA-ARS?s Scientific Manuscript database

    The next generation of weed seed germination models will need to account for variable soil microclimate conditions. In order to predict this microclimate environment we have developed a suite of individual tools (models) that can be used in conjunction with the next generation of weed seed germinati...

  13. Using Toxicological Evidence from QSAR Models in Practice

    EPA Science Inventory

    The new generation of QSAR models provides supporting documentation in addition to the predicted toxicological value. Such information enables the toxicologist to explore the properties of chemical substances and to review and increase the reliability of toxicity predictions. Thi...

  14. Seven lessons from manyfield inflation in random potentials

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2018-01-01

    We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaptation of the 'transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, an explicit study of models with up to 100 interacting fields supporting a period of 'approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the 'generic predictions' of single-field inflation can be emergent features of complex inflation models.

  15. gCUP: rapid GPU-based HIV-1 co-receptor usage prediction for next-generation sequencing.

    PubMed

    Olejnik, Michael; Steuwer, Michel; Gorlatch, Sergei; Heider, Dominik

    2014-11-15

    Next-generation sequencing (NGS) has a large potential in HIV diagnostics, and genotypic prediction models have been developed and successfully tested in recent years. However, albeit being highly accurate, these computational models lack the computational efficiency to reach their full potential. In this study, we demonstrate the use of graphics processing units (GPUs) in combination with a computational prediction model for HIV tropism. Our new model named gCUP, parallelized and optimized for GPU, is highly accurate and can classify >175 000 sequences per second on an NVIDIA GeForce GTX 460. The computational efficiency of our new model is the next step to enable NGS technologies to reach clinical significance in HIV diagnostics. Moreover, our approach is not limited to HIV tropism prediction, but can also be easily adapted to other settings, e.g. drug resistance prediction. The source code can be downloaded at http://www.heiderlab.de. Contact: d.heider@wz-straubing.de. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. A MECHANISTIC MODEL FOR MERCURY CAPTURE WITH IN-SITU GENERATED TITANIA PARTICLES: ROLE OF WATER VAPOR

    EPA Science Inventory

    A mechanistic model to predict the capture of gas-phase mercury species using in-situ generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model [1] for photochemical reactions that accounts for the rates of...

  17. A sampling-based method for ranking protein structural models by integrating multiple scores and features.

    PubMed

    Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong

    2011-09-01

    One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e. Opus-CA, Opus-PSP, DFIRE, RAPDF and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to the features of the sampling distribution. We tested the proposed method using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms any individual scoring function in both best-model selection and overall correlation between the predicted ranking and the actual ranking of structural quality.
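
    The score-integration idea can be illustrated with a generic RBF-kernel regressor that maps several per-model scores to a single predicted quality, which is then used for ranking. This is a hedged sketch of the general technique on synthetic data, not the authors' exact two-stage RBF network or their sampling procedure.

```python
# Combine multiple per-model scores/features with an RBF-kernel regressor,
# then rank candidate structures by the predicted quality.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_models, n_feat = 200, 7          # e.g. seven scores per candidate structure
features = rng.normal(size=(n_models, n_feat))
true_quality = features @ rng.normal(size=n_feat) + 0.1 * rng.normal(size=n_models)

train, test = slice(0, 150), slice(150, None)
ranker = SVR(kernel="rbf", C=10.0, gamma="scale")
ranker.fit(features[train], true_quality[train])

pred = ranker.predict(features[test])
ranking = np.argsort(-pred)        # best predicted candidate first
print("top-5 candidates:", ranking[:5])
print("rank agreement (corr):", np.corrcoef(pred, true_quality[test])[0, 1])
```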

  18. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
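
    The computational building block here — estimating ridge-type (SNP-BLUP) marker effects directly from the SVD of the genotype matrix — can be sketched as follows. The simulation and the ridge parameter are illustrative assumptions, and the BayesC-specific posterior reweighting described above is omitted.

```python
# SNP-BLUP (ridge) marker effects via SVD: beta = V diag(d/(d^2+lam)) U^T y.
import numpy as np

rng = np.random.default_rng(7)
n, m = 500, 5000                          # individuals, markers
Z = rng.integers(0, 3, size=(n, m)).astype(float)
Z -= Z.mean(axis=0)                       # centre genotype codes

beta_true = np.zeros(m)                   # sparse effects, BayesC-like prior
qtl = rng.choice(m, 50, replace=False)
beta_true[qtl] = rng.normal(0, 0.5, 50)
y = Z @ beta_true + rng.normal(0, 1.0, n)

lam = 100.0                               # ridge parameter ~ sigma_e^2/sigma_b^2
U, d, Vt = np.linalg.svd(Z, full_matrices=False)
beta_hat = Vt.T @ ((d / (d**2 + lam)) * (U.T @ y))

print("corr(beta_hat, beta_true):",
      round(np.corrcoef(beta_hat, beta_true)[0, 1], 3))
```

    Because U, d and Vt are computed once, re-solving with marker-specific variances (as in the BayesC approximation) reuses the same factors, which is the source of the computational saving the abstract describes.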

  19. Heat generation in Aircraft tires under yawed rolling conditions

    NASA Technical Reports Server (NTRS)

    Dodge, Richard N.; Clark, Samuel K.

    1987-01-01

    An analytical model was developed for approximating the internal temperature distribution in an aircraft tire operating under conditions of yawed rolling. The model employs an assembly of elements to represent the tire cross section and treats the heat generated within the tire as a function of the change in strain energy associated with predicted tire flexure. Special contact scrubbing terms are superimposed on the symmetrical free rolling model to account for the slip during yawed rolling. An extensive experimental program was conducted to verify temperatures predicted from the analytical model. Data from this program were compared with calculations over a range of operating conditions, namely, vertical deflection, inflation pressure, yaw angle, and direction of yaw. Generally, the analytical model predicted overall trends well and correlated reasonably well with individual measurements at locations throughout the cross section.

  20. Genomic Prediction Accounting for Residual Heteroskedasticity.

    PubMed

    Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M

    2015-11-12

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.

  1. Super short term forecasting of photovoltaic power generation output in micro grid

    NASA Astrophysics Data System (ADS)

    Gong, Cheng; Ma, Longfei; Chi, Zhongjun; Zhang, Baoqun; Jiao, Ran; Yang, Bing; Chen, Jianshu; Zeng, Shuang

    2017-01-01

    A prediction model combining data mining and support vector machines (SVM) was built to provide information on photovoltaic (PV) power generation output for the economic operation and optimal control of a micro grid, and to reduce the influence of PV output fluctuations on the power system. Because PV output depends on radiation intensity, ambient temperature, cloudiness and similar factors, data mining was brought in: it can process large amounts of historical data and eliminate superfluous records, using a fuzzy classifier of daily weather type and grey relational degree. An SVM model was then built that takes the output of the data mining stage as its input. The prediction model was tested on measured data from a small PV station; the numerical example shows that the model is fast and accurate.
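
    A minimal sketch of the SVM regression stage is given below, assuming the data-mining step has already selected similar historical days. The feature set, the toy output equation and all parameter values are assumptions for illustration.

```python
# SVR regression of PV output on weather features (synthetic data).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 300
irradiance = rng.uniform(0, 1000, n)      # W/m^2
temperature = rng.uniform(-5, 35, n)      # degrees C
cloudiness = rng.uniform(0, 1, n)         # fraction of sky covered
# Toy PV output with a mild temperature derating (illustrative physics only)
power = 0.15 * irradiance * (1 - 0.004 * (temperature - 25)) * (1 - 0.7 * cloudiness)
power += rng.normal(0, 5, n)

X = np.column_stack([irradiance, temperature, cloudiness])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(X[:250], power[:250])
print("predicted output for 5 held-out cases:", model.predict(X[250:255]).round(1))
```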

  2. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  3. The use of artificial neural networks and multiple linear regression to predict rate of medical waste generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari

    2009-11-15

    Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal stages of hospital waste management. Based on this fact, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharps, infectious and general waste). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R², were used to evaluate the performance of the models. MLR, as a conventional model, obtained poor prediction performance values. However, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, as a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance values, especially an R² of 0.99 confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results show that our ANN-based modelling approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
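
    The MLR-versus-ANN comparison with 5-fold cross-validated R² can be sketched as below. The predictors (beds, occupancy), the synthetic waste equation and the network size are assumptions, not the study's data or architecture.

```python
# Compare multiple linear regression with a small neural network for
# waste-generation prediction, scoring both by 5-fold cross-validated R^2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 50                                       # 50 hospitals, as in the study
beds = rng.integers(50, 800, n)
occupancy = rng.uniform(0.4, 1.0, n)
waste = 0.002 * beds * occupancy + 0.05 * np.sqrt(beds) + rng.normal(0, 0.1, n)

X = np.column_stack([beds, occupancy])
cv = KFold(n_splits=5, shuffle=True, random_state=0)
estimators = [
    ("MLR", LinearRegression()),
    ("ANN", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,),
                                       max_iter=5000, random_state=0))),
]
for name, est in estimators:
    r2 = cross_val_score(est, X, waste, cv=cv, scoring="r2")
    print(f"{name}: mean 5-fold R^2 = {r2.mean():.2f}")
```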

  4. Landscape structure and management alter the outcome of a pesticide ERA: Evaluating impacts of endocrine disruption using the ALMaSS European Brown Hare model.

    PubMed

    Topping, Chris J; Dalby, Lars; Skov, Flemming

    2016-01-15

    There is a gradual change towards explicitly considering landscapes in regulatory risk assessment. To realise the objective of developing representative scenarios for risk assessment, it is necessary to know how detailed a landscape representation is needed to generate a realistic risk assessment, and indeed how to generate such landscapes. This paper evaluates the contribution of landscape and farming components to a model-based risk assessment of a fictitious endocrine disruptor on hares. In addition, we present methods and code examples for the generation of landscape structures and farming simulation from data collected primarily for EU agricultural subsidy support and GIS map data. Ten different Danish landscapes were generated and the ERA carried out for each landscape using two different assumed toxicities. The results showed negative impacts in all cases, but the extent and form, in terms of impacts on abundance or occupancy, differed greatly between landscapes. A meta-model was created, predicting impact from landscape and farming characteristics. Scenarios based on all combinations of farming and landscape for five landscapes representing extreme and middle impacts were created. The meta-models developed from the 10 real landscapes failed to predict impacts for these 25 scenarios. Landscape, farming, and the emergent density of hares all influenced the results of the risk assessment considerably. The study indicates that prediction of a reasonable worst case is difficult from structural, farming or population metrics; rather, the emergent properties generated from interactions between landscape, management and ecology are needed. Meta-modelling may also fail to predict impacts, even when restricting inputs to combinations of those used to create the model. Future ERA may therefore need to make use of multiple scenarios representing a wide range of conditions to avoid locally unacceptable risks. This approach could now be feasible Europe-wide given the landscape generation methods presented.

  5. International Geomagnetic Reference Field: the 12th generation

    NASA Astrophysics Data System (ADS)

    Thébault, Erwan; Finlay, Christopher C.; Beggan, Ciarán D.; Alken, Patrick; Aubert, Julien; Barrois, Olivier; Bertrand, Francois; Bondar, Tatiana; Boness, Axel; Brocco, Laura; Canet, Elisabeth; Chambodut, Aude; Chulliat, Arnaud; Coïsson, Pierdavide; Civet, François; Du, Aimin; Fournier, Alexandre; Fratter, Isabelle; Gillet, Nicolas; Hamilton, Brian; Hamoudi, Mohamed; Hulot, Gauthier; Jager, Thomas; Korte, Monika; Kuang, Weijia; Lalanne, Xavier; Langlais, Benoit; Léger, Jean-Michel; Lesur, Vincent; Lowes, Frank J.; Macmillan, Susan; Mandea, Mioara; Manoj, Chandrasekharan; Maus, Stefan; Olsen, Nils; Petrov, Valeriy; Ridley, Victoria; Rother, Martin; Sabaka, Terence J.; Saturnino, Diana; Schachtschneider, Reyko; Sirol, Olivier; Tangborn, Andrew; Thomson, Alan; Tøffner-Clausen, Lars; Vigneron, Pierre; Wardinski, Ingo; Zvereva, Tatiana

    2015-05-01

    The 12th generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2014 by the Working Group V-MOD appointed by the International Association of Geomagnetism and Aeronomy (IAGA). It updates the previous IGRF generation with a definitive main field model for epoch 2010.0, a main field model for epoch 2015.0, and a linear annual predictive secular variation model for 2015.0-2020.0. Here, we present the equations defining the IGRF model, provide the spherical harmonic coefficients, and provide maps of the magnetic declination, inclination, and total intensity for epoch 2015.0 and their predicted rates of change for 2015.0-2020.0. We also update the magnetic pole positions and discuss briefly the latest changes and possible future trends of the Earth's magnetic field.
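
    For reference, the IGRF represents the main field as the negative gradient of a scalar potential expanded in Schmidt semi-normalised spherical harmonics, with a = 6371.2 km the geomagnetic reference radius and the Gauss coefficients g_n^m(t), h_n^m(t) supplied by the model tables:

```latex
V(r,\theta,\phi,t) = a \sum_{n=1}^{N} \sum_{m=0}^{n}
\left(\frac{a}{r}\right)^{n+1}
\left[ g_n^m(t)\cos m\phi + h_n^m(t)\sin m\phi \right] P_n^m(\cos\theta),
\qquad
\mathbf{B} = -\nabla V .
```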

  6. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions were performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multi models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
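
    The three combination schemes compared above can be sketched in a few lines of numpy: a simple composite (plain mean), a corrected composite (per-model bias/amplitude correction before averaging), and a superensemble (multiple regression of the truth on all models over the training period). The toy "models" and skill metric below are assumptions for illustration.

```python
# Simple composite vs corrected composite vs superensemble on synthetic data.
import numpy as np

rng = np.random.default_rng(11)
T_train, T_test, M = 100, 40, 5
truth = rng.normal(size=T_train + T_test)
# Each "model" sees the truth with its own amplitude bias, offset and noise
preds = np.stack([1.5 * rng.random() * truth + rng.normal(0.5, 0.1)
                  + rng.normal(0, 0.8, truth.size) for _ in range(M)])
tr, te = slice(0, T_train), slice(T_train, None)

simple = preds[:, te].mean(axis=0)                  # simple composite

corrected = []                                      # corrected composite
for p in preds:                                     # per-model linear correction
    b = np.polyfit(p[tr], truth[tr], 1)
    corrected.append(np.polyval(b, p[te]))
corrected = np.mean(corrected, axis=0)

A = np.vstack([preds[:, tr], np.ones(T_train)]).T   # superensemble regression
w, *_ = np.linalg.lstsq(A, truth[tr], rcond=None)
superens = w[:-1] @ preds[:, te] + w[-1]

for name, f in [("simple", simple), ("corrected", corrected),
                ("superensemble", superens)]:
    print(name, "skill (corr):", np.corrcoef(f, truth[te])[0, 1].round(3))
```

    With short training periods the superensemble's extra regression weights tend to overfit, which is exactly the behaviour the study reports.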

  7. The wandering self: Tracking distracting self-generated thought in a cognitively demanding context.

    PubMed

    Huijser, Stefan; van Vugt, Marieke K; Taatgen, Niels A

    2018-02-01

    We investigated how self-referential processing (SRP) affected self-generated thought in a complex working memory (CWM) task, to test the predictions of a computational cognitive model. This model described self-generated thought as resulting from competition between task and distracting processes, and predicted that self-generated thought interferes with rehearsal, reducing memory performance. SRP was hypothesized to influence this goal competition by encouraging distracting self-generated thinking. We used a spatial CWM task to examine whether SRP instigated such thoughts, and employed eye-tracking to examine rehearsal interference in eye movements and self-generated thinking in pupil size. The results showed that SRP was associated with lower performance and higher rates of self-generated thought. Self-generated thought was associated with less rehearsal, and we observed a smaller pupil size during mind wandering. We conclude that SRP can instigate self-generated thought and that goal competition provides a likely explanation for how self-generated thought arises in a demanding task. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Comparing predictions of extinction risk using models and subjective judgement

    NASA Astrophysics Data System (ADS)

    McCarthy, Michael A.; Keith, David; Tietjen, Justine; Burgman, Mark A.; Maunder, Mark; Master, Larry; Brook, Barry W.; Mace, Georgina; Possingham, Hugh P.; Medellin, Rodrigo; Andelman, Sandy; Regan, Helen; Regan, Tracey; Ruckelshaus, Mary

    2004-10-01

    Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models.

  9. Progress on Implementing Additional Physics Schemes into MPAS-A v5.1 for Next Generation Air Quality Modeling

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...

  10. Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.

    2005-01-01

    The complex interactions between internally generated motor pressure oscillations and the motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have the potential to generate significant dynamic thrust loads in the 5-segment configuration, Engineering Test Motor 3 (ETM-3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, ETM-3, to provide data for finite element model correlation and validation of model-generated design loads. The modal survey preparation included pre-test analyses to determine an efficient analysis set selection using the Effective Independence Method, and test simulations to assure that critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution and a comparison of results to pre-test predictions, are discussed.

  11. Application of Earth Sciences Products for use in Next Generation Numerical Aerosol Prediction Models

    DTIC Science & Technology

    2008-09-30

    Liu, M., et al., Geophysical Research Abstracts, Vol. 10, EGU2008-A-11193, 2008, SRef-ID: 1607-7962/gra/EGU2008-A-11193, EGU General Assembly 2008.

    ... can be generated and predicted. Through this system, we will be able to advance a number of US Navy Applied Science needs in the areas of improved ...

  12. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
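
    The naive combination that PIAF improves upon can be demonstrated in a few lines: a scalar Kalman filter tracks the hidden state using the current forward-model estimate, while recursive least squares (RLS) updates that estimate from consecutive filtered states. This is an illustrative toy setup with assumed parameters, not the PIAF algorithm itself.

```python
# Naive Kalman-filtering-plus-RLS learning of a scalar linear forward model.
import numpy as np

rng = np.random.default_rng(0)
a_true, q, r = 0.9, 0.05, 0.5       # true forward model, process/obs. noise vars

T = 2000                            # simulate the hidden state and observations
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

a_hat, P_rls = 0.0, 100.0           # RLS: model estimate and precision proxy
x_hat, P = 0.0, 1.0                 # Kalman: state estimate and variance
lam = 0.999                         # RLS forgetting factor
for t in range(1, T):
    x_pred = a_hat * x_hat          # Kalman prediction with the *learned* model
    P_pred = a_hat**2 * P + q
    K = P_pred / (P_pred + r)       # Kalman gain and update
    x_new = x_pred + K * (y[t] - x_pred)
    P_new = (1 - K) * P_pred
    phi = x_hat                     # RLS update from consecutive filtered states
    g = P_rls * phi / (lam + phi * P_rls * phi)
    a_hat += g * (x_new - a_hat * phi)
    P_rls = (P_rls - g * phi * P_rls) / lam
    x_hat, P = x_new, P_new

print(f"learned forward model a_hat = {a_hat:.3f} (true {a_true})")
```

    The delusional loop arises because the regressor phi is itself produced by filtering with the possibly wrong model; PIAF avoids this by propagating the model uncertainty in a Bayes-optimal way.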

  13. PREDICTIVE MODELING OF ACOUSTIC SIGNALS FROM THERMOACOUSTIC POWER SENSORS (TAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumm, Christopher M.; Vipperman, Jeffrey S.

    2016-06-30

    Thermoacoustic Power Sensor (TAPS) technology offers the potential for self-powered, wireless measurement of nuclear reactor core operating conditions. TAPS are based on thermoacoustic engines, which harness thermal energy from fission reactions to generate acoustic waves by virtue of gas motion through a porous stack of thermally nonconductive material. TAPS can be placed in the core, where they generate acoustic waves whose frequency and amplitude are proportional to the local temperature and radiation flux, respectively. TAPS acoustic signals are not measured directly at the TAPS; rather, they propagate wirelessly from an individual TAPS through the reactor, and ultimately to a low-powermore » receiver network on the vessel’s exterior. In order to rely on TAPS as primary instrumentation, reactor-specific models which account for geometric/acoustic complexities in the signal propagation environment must be used to predict the amplitude and frequency of TAPS signals at receiver locations. The reactor state may then be derived by comparing receiver signals to the reference levels established by predictive modeling. In this paper, we develop and experimentally benchmark a methodology for predictive modeling of the signals generated by a TAPS system, with the intent of subsequently extending these efforts to modeling of TAPS in a liquid sodium environmen« less

  14. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
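
    As a hedged sketch of the unsupervised baseline PTGHRA is contrasted with, one can learn skip-gram embeddings from per-patient sequences of medical codes. The codes below are illustrative ICD-9-like strings, not real patient data, and the hyperparameters are arbitrary.

```python
# Unsupervised medical-code embeddings with word2vec (gensim 4.x API).
from gensim.models import Word2Vec

patient_records = [                         # each "sentence" = one patient's codes
    ["250.00", "401.9", "272.4"],
    ["401.9", "428.0", "272.4"],
    ["250.00", "585.9", "401.9"],
    ["428.0", "585.9"],
]
model = Word2Vec(sentences=patient_records, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=50, seed=0)
vec = model.wv["401.9"]                     # feature vector for one code
print("dim:", vec.shape)
print("nearest codes:", model.wv.most_similar("401.9", topn=2))
```

    In the supervised setting the paper argues for, the corpus construction itself would be guided by the downstream prediction task rather than by raw record order.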

  15. Impact of metal ionic characteristics on adsorption potential of Ficus carica leaves using QSPR modeling.

    PubMed

    Batool, Fozia; Iqbal, Shahid; Akbar, Jamshed

    2018-04-03

    The present study describes Quantitative Structure Property Relationship (QSPR) modeling to relate metal ion characteristics to the adsorption potential of Ficus carica leaves for 13 selected metal ions (Ca2+, Cr3+, Co2+, Cu2+, Cd2+, K+, Mg2+, Mn2+, Na+, Ni2+, Pb2+, Zn2+ and Fe2+). A set of 21 characteristic descriptors was selected, and the relationship of these metal characteristics with the adsorptive behavior of the metal ions was investigated. Stepwise Multiple Linear Regression (SMLR) analysis and an Artificial Neural Network (ANN) were applied for descriptor selection and model generation. Langmuir and Freundlich isotherms were also applied to the adsorption data to generate a proper correlation for the experimental findings. The generated model identified the covalent index as the most significant descriptor, responsible for more than 90% of the predicted adsorption (α = 0.05). Internal validation of the model yielded a value of 0.98. The results indicate that the present model is a useful tool for prediction of the adsorptive behavior of different metal ions based on their ionic characteristics.

  16. Geospatial application of the Water Erosion Prediction Project (WEPP) Model

    Treesearch

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2011-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...

  17. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
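
    The third of the three analyses — Cox proportional hazards regression of survival on adiposity features — can be sketched as below with the lifelines package. The feature names, units and synthetic data are assumptions for illustration.

```python
# Cox PH model relating adiposity features to progression-free survival.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 59                                             # cohort size, as above
df = pd.DataFrame({
    "visceral_fat":     rng.normal(150, 40, n),    # cm^2, illustrative
    "subcutaneous_fat": rng.normal(220, 60, n),    # cm^2, illustrative
    "pfs_months":       rng.exponential(18, n).round(1),
    "progressed":       rng.integers(0, 2, n),     # event indicator (1 = event)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()                                # hazard ratios per feature
```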

  18. NOAA's weather forecasts go hyper-local with next-generation weather model

    Science.gov Websites

    New model will help forecasters predict a storm's path, timing and intensity better than ever. September 30, 2014. This is a comparison of two weather forecast models looking...

  19. Implementing subgrid-scale cloudiness into the Model for Prediction Across Scales-Atmosphere (MPAS-A) for next generation global air quality modeling

    EPA Science Inventory

    A next generation air quality modeling system is being developed at the U.S. EPA to enable seamless modeling of air quality from global to regional to (eventually) local scales. State of the science chemistry and aerosol modules from the Community Multiscale Air Quality (CMAQ) mo...

  20. Phase-field based Multiscale Modeling of Heterogeneous Solid Electrolytes: Applications to Nanoporous Li3PS4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jia-Mian; Wang, Bo; Ji, Yanzhou

    Modeling the effective ion conductivities of heterogeneous solid electrolytes typically involves the use of a computer-generated microstructure consisting of randomly or uniformly oriented fillers in a matrix. However, the structural features of the filler/matrix interface, which critically determine the interface ion conductivity and the microstructure morphology, have not been considered during microstructure generation. Using nanoporous β-Li3PS4 electrolyte as an example, we develop a phase-field model that enables generating nanoporous microstructures of different porosities and connectivity patterns based on the depth and the energy of the surface (pore/electrolyte interface), both of which are predicted through density functional theory (DFT) calculations. Room-temperature effective ion conductivities of the generated microstructures are then calculated numerically, using the DFT-estimated surface Li-ion conductivity (3.14×10⁻³ S/cm) and the experimentally measured bulk Li-ion conductivity (8.93×10⁻⁷ S/cm) of β-Li3PS4 as the inputs. We also use the generated microstructures to inform effective medium theories to rapidly predict the effective ion conductivity via analytical calculations. Furthermore, when the porosity approaches the percolation threshold, both the numerical and analytical methods predict a significantly enhanced Li-ion conductivity (1.74×10⁻⁴ S/cm) that is in good agreement with experimental data (1.64×10⁻⁴ S/cm). The present phase-field based multiscale model is generally applicable to predict both the microstructure patterns and the effective properties of heterogeneous solid electrolytes.

  1. Phase-field based Multiscale Modeling of Heterogeneous Solid Electrolytes: Applications to Nanoporous Li3PS4

    DOE PAGES

    Hu, Jia-Mian; Wang, Bo; Ji, Yanzhou; ...

    2017-09-07

    Modeling the effective ion conductivities of heterogeneous solid electrolytes typically involves the use of a computer-generated microstructure consisting of randomly or uniformly oriented fillers in a matrix. However, the structural features of the filler/matrix interface, which critically determine the interface ion conductivity and the microstructure morphology, have not been considered during microstructure generation. Using nanoporous β-Li3PS4 electrolyte as an example, we develop a phase-field model that enables generating nanoporous microstructures of different porosities and connectivity patterns based on the depth and the energy of the surface (pore/electrolyte interface), both of which are predicted through density functional theory (DFT) calculations. Room-temperature effective ion conductivities of the generated microstructures are then calculated numerically, using the DFT-estimated surface Li-ion conductivity (3.14×10⁻³ S/cm) and the experimentally measured bulk Li-ion conductivity (8.93×10⁻⁷ S/cm) of β-Li3PS4 as the inputs. We also use the generated microstructures to inform effective medium theories to rapidly predict the effective ion conductivity via analytical calculations. Furthermore, when the porosity approaches the percolation threshold, both the numerical and analytical methods predict a significantly enhanced Li-ion conductivity (1.74×10⁻⁴ S/cm) that is in good agreement with experimental data (1.64×10⁻⁴ S/cm). The present phase-field based multiscale model is generally applicable to predict both the microstructure patterns and the effective properties of heterogeneous solid electrolytes.
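
    One common analytical estimate of the kind referred to above is the symmetric Bruggeman effective-medium model. The sketch below solves it numerically for the two conductivities quoted in the abstract; treating the surface phase as a simple volume fraction f is an illustrative simplification, not the paper's microstructure-informed calculation.

```python
# Symmetric two-phase Bruggeman effective-medium conductivity:
# solve f*(s1-se)/(s1+2se) + (1-f)*(s2-se)/(s2+2se) = 0 for sigma_eff.
from scipy.optimize import brentq

sigma_surf = 3.14e-3     # S/cm, DFT-estimated surface conductivity
sigma_bulk = 8.93e-7     # S/cm, measured bulk conductivity

def bruggeman(f_surf):
    def residual(se):
        return (f_surf * (sigma_surf - se) / (sigma_surf + 2 * se)
                + (1 - f_surf) * (sigma_bulk - se) / (sigma_bulk + 2 * se))
    # the root is bracketed by the two phase conductivities
    return brentq(residual, sigma_bulk, sigma_surf)

for f in (0.1, 0.33, 0.5):
    print(f"surface fraction {f:.2f}: sigma_eff = {bruggeman(f):.2e} S/cm")
```

    The sharp rise near f = 1/3 is the classical Bruggeman percolation threshold for spherical inclusions, echoing the percolation-driven conductivity enhancement reported above.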

  2. Markovian prediction of future values for food grains in the economic survey

    NASA Astrophysics Data System (ADS)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating current and future values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity, and we predict future values of food grains in the economic survey using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.
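
    The Markov chain step can be illustrated as follows: discretise production into states, estimate the transition matrix by row-normalised counts (the maximum-likelihood estimate under time homogeneity), and predict the most likely next state. The state sequence below is invented for demonstration.

```python
# Estimate a Markov transition matrix from a state sequence and predict.
import numpy as np

# illustrative production states: 0 = low, 1 = medium, 2 = high
states = [0, 1, 1, 2, 1, 2, 2, 1, 0, 1, 2, 2, 1, 2]

k = 3
counts = np.zeros((k, k))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1
P = counts / counts.sum(axis=1, keepdims=True)    # transition probabilities

current = states[-1]
print("transition matrix:\n", P.round(2))
print("most likely next state:", int(P[current].argmax()))
```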

  3. Reverse engineering systems models of regulation: discovery, prediction and mechanisms.

    PubMed

    Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S

    2012-08-01

    Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Bioinactivation: Software for modelling dynamic microbial inactivation.

    PubMed

    Garre, Alberto; Fernández, Pablo S; Lindqvist, Roland; Egea, Jose A

    2017-03-01

    This contribution presents the bioinactivation software, which implements functions for the modelling of isothermal and non-isothermal microbial inactivation. This software offers features such as user-friendliness, modelling of dynamic conditions, possibility to choose the fitting algorithm and generation of prediction intervals. The software is offered in two different formats: Bioinactivation core and Bioinactivation SE. Bioinactivation core is a package for the R programming language, which includes features for the generation of predictions and for the fitting of models to inactivation experiments using non-linear regression or a Markov Chain Monte Carlo algorithm (MCMC). The calculations are based on inactivation models common in academia and industry (Bigelow, Peleg, Mafart and Geeraerd). Bioinactivation SE supplies a user-friendly interface to selected functions of Bioinactivation core, namely the model fitting of non-isothermal experiments and the generation of prediction intervals. The capabilities of bioinactivation are presented in this paper through a case study, modelling the non-isothermal inactivation of Bacillus sporothermodurans. This study has provided a full characterization of the response of the bacteria to dynamic temperature conditions, including confidence intervals for the model parameters and a prediction interval of the survivor curve. We conclude that the MCMC algorithm produces a better characterization of the biological uncertainty and variability than non-linear regression. The bioinactivation software can be relevant to the food and pharmaceutical industry, as well as to regulatory agencies, as part of a (quantitative) microbial risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
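
    The kind of dynamic prediction the package performs can be illustrated with the mathematics underlying one of the supported models, the classical Bigelow (log-linear) primary model with secondary model D(T) = D_ref · 10^((T_ref − T)/z). The sketch below is plain Python rather than the bioinactivation R API, and the parameter values and temperature profile are assumptions, not fitted values for B. sporothermodurans.

```python
# Non-isothermal Bigelow inactivation: d(log10 N)/dt = -1/D(T(t)).
import numpy as np
from scipy.integrate import solve_ivp

D_ref, T_ref, z = 1.5, 120.0, 10.0      # min, degrees C (illustrative)

def temperature(t):                      # linear come-up, then a holding phase
    return np.minimum(100.0 + 2.0 * t, 125.0)

def rhs(t, logN):
    D = D_ref * 10 ** ((T_ref - temperature(t)) / z)
    return [-1.0 / D]

sol = solve_ivp(rhs, (0, 30), [8.0], dense_output=True)  # start at 10^8 CFU/ml
for t in (0, 10, 20, 30):
    print(f"t = {t:2d} min: log10 N = {sol.sol(t)[0]:.2f}")
```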

  5. A new model for force generation by skeletal muscle, incorporating work-dependent deactivation

    PubMed Central

    Williams, Thelma L.

    2010-01-01

    A model is developed to predict the force generated by active skeletal muscle when subjected to imposed patterns of lengthening and shortening, such as those that occur during normal movements. The model is based on data from isolated lamprey muscle and can predict the forces developed during swimming. The model consists of a set of ordinary differential equations, which are solved numerically. The model's first part is a simplified description of the kinetics of Ca2+ release from sarcoplasmic reticulum and binding to muscle protein filaments, in response to neural activation. The second part is based on A. V. Hill's mechanical model of muscle, consisting of elastic and contractile elements in series, the latter obeying known physiological properties. The parameters of the model are determined by fitting the appropriate mathematical solutions to data recorded from isolated lamprey muscle activated under conditions of constant length or rate of change of length. The model is then used to predict the forces developed under conditions of applied sinusoidal length changes, and the results compared with corresponding data. The most significant advance of this model is the incorporation of work-dependent deactivation, whereby a muscle that has been shortening under load generates less force after the shortening ceases than otherwise expected. In addition, the stiffness in this model is not constant but increases with increasing activation. The model yields a closer prediction to data than has been obtained before, and can thus prove an important component of investigations of the neural-mechanical-environmental interactions that occur during natural movements. PMID:20118315

  6. A probabilistic approach to photovoltaic generator performance prediction

    NASA Astrophysics Data System (ADS)

    Khallat, M. A.; Rahman, S.

    1986-09-01

    A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
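
    The probabilistic procedure can be sketched as follows: fit a candidate distribution (here Weibull) to an insolation record, then estimate the capacity factor by integrating a simple panel model over that distribution (Monte Carlo below). The synthetic data, the linear power model capped at rated output, and all parameter values are illustrative assumptions.

```python
# Fit a Weibull distribution to insolation and estimate a PV capacity factor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
insolation = rng.weibull(2.0, 5000) * 600.0        # synthetic W/m^2 record

shape, loc, scale = stats.weibull_min.fit(insolation, floc=0)
fitted = stats.weibull_min(shape, loc=loc, scale=scale)

G_rated = 1000.0                                    # W/m^2 at rated output
samples = fitted.rvs(100_000, random_state=0)
power_frac = np.clip(samples / G_rated, 0, 1)       # linear model, capped
print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} W/m^2")
print(f"estimated capacity factor (daylight hours): {power_frac.mean():.2f}")
```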

  7. Prediction of Mass Wasting, Erosion, and Sediment Transport With the Distributed Hydrology-Soil-Vegetation Model

    NASA Astrophysics Data System (ADS)

    Doten, C. O.; Lanini, J. S.; Bowling, L. C.; Lettenmaier, D. P.

    2004-12-01

    Erosion and sediment transport in a temperate forested watershed are predicted with a new sediment module linked to the Distributed Hydrology-Soil-Vegetation Model (DHSVM). The DHSVM sediment module represents the main sources of sediment generation in forested environments: mass wasting, hillslope erosion and road surface erosion. It produces failures based on a factor-of-safety analysis with the infinite slope model, through use of stochastically generated soil and vegetation parameters. Failed material is routed downslope with a rule-based scheme that determines sediment delivery to streams. Sediment from hillslopes and road surfaces is also transported to the channel network. Basin sediment yield is predicted with a simple channel sediment routing scheme. The model was applied to the Rainy Creek catchment, a tributary of the Wenatchee River, which drains the east slopes of the Cascade Mountains, and to Hard and Ware Creeks on the west slopes of the Cascades. In these initial applications, the model produced plausible sediment yields and ratios of landsliding to surface erosion when compared to published rates for similar catchments in the Pacific Northwest. We have also used the model to examine the implications of fires and logging road removal on sediment generation in the Rainy Creek catchment. Generally, in absolute value, the predicted changes (increased sediment generation) following fires, which are primarily associated with increased slope failures, are much larger than the modest changes (reductions in sediment yield) associated with road obliteration, although the small sensitivity to forest road obliteration may be due in part to the relatively low road density in the Rainy Creek catchment, and to mechanisms, such as culvert failure, that are not represented in the model.
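
    The stochastic factor-of-safety analysis can be sketched with the standard infinite-slope formula, drawing soil cohesion and friction angle from assumed distributions and flagging FS < 1 as failure. All parameter values below are illustrative, not DHSVM defaults.

```python
# Stochastic infinite-slope factor of safety:
# FS = [C + (gamma_s - m*gamma_w) d cos^2(theta) tan(phi)]
#      / [gamma_s d sin(theta) cos(theta)]
import numpy as np

rng = np.random.default_rng(17)
n = 10_000
theta = np.radians(35.0)            # slope angle
d = 1.2                             # soil depth (m)
m = 0.8                             # relative saturated depth d_w/d
gamma_s, gamma_w = 18.0, 9.81       # unit weights (kN/m^3)

C = rng.uniform(2.0, 8.0, n)                    # root + soil cohesion (kPa)
phi = np.radians(rng.uniform(28.0, 38.0, n))    # friction angle

resist = C + (gamma_s - m * gamma_w) * d * np.cos(theta) ** 2 * np.tan(phi)
drive = gamma_s * d * np.sin(theta) * np.cos(theta)
FS = resist / drive

print(f"median FS = {np.median(FS):.2f}, P(failure) = {(FS < 1).mean():.2%}")
```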

  8. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretation Rating Scales). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.

  9. Feature selection for examining behavior by pathology laboratories.

    PubMed

    Hawkins, S; Williams, G; Baxter, R

    2001-08-01

    Australia has a universal health insurance scheme called Medicare, which is managed by Australia's Health Insurance Commission. Medicare payments for pathology services generate voluminous transaction data on patients, doctors and pathology laboratories. The Health Insurance Commission (HIC) currently uses predictive models to monitor compliance with regulatory requirements. The HIC commissioned a project to investigate the generation of new features from the data. Feature generation has not appeared as an important step in the knowledge discovery in databases (KDD) literature. New interesting features for use in predictive modeling are generated. These features were summarized, visualized and used as inputs for clustering and outlier detection methods. Data organization and data transformation methods are described for the efficient access and manipulation of these new features.

  10. Experimental prediction of tube support interaction characteristics in steam generators: Volume 2, Westinghouse Model 51 flow entrance region: Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haslinger, K.H.

    Tube-to-tube support interaction characteristics were determined experimentally on a single-tube, multi-span geometry representative of the Westinghouse Model 51 steam generator economizer design. The results, in part, became input for an autoclave-type wear test program on steam generator tubes performed by Kraftwerk Union (KWU). More importantly, the test data reported here have been used to validate two analytical wear prediction codes: the WECAN code, which was developed by Westinghouse, and the ABAQUS code, which has been enhanced for EPRI by Foster Wheeler to enable simulation of gap conditions (including fluid film effects) for various support geometries.

  11. Transitional flow in thin tubes for space station freedom radiator

    NASA Technical Reports Server (NTRS)

    Loney, Patrick; Ibrahim, Mounir

    1995-01-01

    A two-dimensional finite-volume method is used to predict the film coefficients in the transitional flow region (laminar or turbulent) for the radiator panel tubes. The code used to perform this analysis is CAST (Computer Aided Simulation of Turbulent Flows). The information gathered from this code is then used to augment a Sinda85 model that predicts the overall performance of the radiator. A final comparison is drawn between the results generated with a Sinda85 model using the Sinda85-provided transition-region heat transfer correlations and the Sinda85 model using the CAST-generated data.

  12. Investigation into the propagation of Omega very low frequency signals and techniques for improvement of navigation accuracy including differential and composite omega

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.

  13. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2012-01-01

    With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks, and during the vertical handoff procedure the handoff decision is a crucial issue for efficient mobility. Based on an autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm which aims to improve the performance of vertical handoff and avoid unnecessary handoffs. From the current received signal strength (RSS) and the previous RSS values, the proposed approach adopts an ARMA model to predict the next RSS, and then uses the predicted RSS to determine whether to trigger the link-layer event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in both handoff performance and the number of handoffs.
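
    The decision core can be sketched as below: fit an ARMA model to the recent RSS history, forecast the next RSS, and trigger handoff when the forecast falls below a threshold. The model order, threshold and synthetic RSS trace are illustrative choices, not values from the paper.

```python
# ARMA-based RSS prediction for a vertical handoff decision.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(21)
t = np.arange(120)
rss = -60 - 0.15 * t + rng.normal(0, 1.5, t.size)   # dBm, slowly degrading link

model = ARIMA(rss, order=(2, 0, 1))                 # ARMA(2,1): differencing d = 0
res = model.fit()
next_rss = res.forecast(steps=1)[0]

THRESHOLD = -80.0                                    # dBm
print(f"predicted next RSS: {next_rss:.1f} dBm")
if next_rss < THRESHOLD:
    print("trigger link-layer event -> start vertical handoff")
```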

  14. Method and device for predicting wavelength dependent radiation influences in thermal systems

    DOEpatents

    Kee, Robert J.; Ting, Aili

    1996-01-01

    A method and apparatus for predicting the spectral (wavelength-dependent) radiation transport in thermal systems, including interaction of the radiation with partially transmitting media. The resulting model of the thermal system is used to design and control the thermal system. The predictions are well suited to be implemented in the design and control of rapid thermal processing (RTP) reactors. The method involves generating a spectral thermal radiation transport model of an RTP reactor. The method also involves specifying a desired time-dependent wafer temperature profile. The method further involves calculating an inverse of the generated model, using the desired time-dependent wafer temperature to determine the heating element parameters required to produce the desired profile. The method also involves controlling the heating elements of the RTP reactor in accordance with the heating element parameters to heat the wafer according to the desired profile.

  15. A New Hybrid Spatio-temporal Model for Estimating Daily Multi-year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    NASA Technical Reports Server (NTRS)

    Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2014-01-01

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
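
    A reduced form of the first-stage calibration — PM2.5 regressed on AOD and temperature with day-specific random intercepts and random AOD slopes — can be sketched with statsmodels. The synthetic data generation and dimensions are assumptions for illustration; the full model described above (and in the two records of this paper that follow) adds spatial smoothing and a residual-based local stage.

```python
# Mixed model: PM2.5 ~ AOD + temperature, with day-level random
# intercepts and random AOD slopes (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(23)
days, sites = 60, 25
df = pd.DataFrame({
    "day": np.repeat(np.arange(days), sites),
    "aod": rng.uniform(0.05, 0.8, days * sites),
    "temp": np.repeat(rng.uniform(-5, 30, days), sites),
})
day_int = np.repeat(rng.normal(0, 2, days), sites)     # day random intercepts
day_slope = np.repeat(rng.normal(0, 3, days), sites)   # day random AOD slopes
df["pm25"] = (8 + (15 + day_slope) * df["aod"] + 0.1 * df["temp"]
              + day_int + rng.normal(0, 1.5, len(df)))

fit = smf.mixedlm("pm25 ~ aod + temp", df, groups=df["day"],
                  re_formula="~aod").fit()
print(fit.summary())
```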

  16. A New Hybrid Spatio-Temporal Model For Estimating Daily Multi-Year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data.

    PubMed

    Kloog, Itai; Chudnovsky, Alexandra A; Just, Allan C; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2014-10-01

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.

  17. A New Hybrid Spatio-Temporal Model For Estimating Daily Multi-Year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    PubMed Central

    Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2017-01-01

    Background: The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution and substantial missing data. Methods: We make use of recent advances in MODIS satellite data processing algorithms (the Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus the currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003–2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then used generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against local spatial and temporal variables at each monitoring site. Results: Our model performance was excellent (mean out-of-sample R2 = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R2 = 0.87 and R2 = 0.87, respectively). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Conclusion: Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region. PMID:28966552
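
    As an illustration of the first modeling stage described above, the sketch below fits a mixed model with day-specific random intercepts and random AOD slopes using statsmodels. The data frame and its column names (day, aod, temp, pm25) are synthetic stand-ins for illustration, not the paper's data or code.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_days, n_sites = 50, 20
      df = pd.DataFrame({
          "day": np.repeat(np.arange(n_days), n_sites),
          "aod": rng.gamma(2.0, 0.1, n_days * n_sites),
          "temp": rng.normal(15.0, 8.0, n_days * n_sites),
      })
      # Synthetic "truth": a day-varying intercept and AOD slope plus noise.
      day_int = rng.normal(8.0, 2.0, n_days)[df["day"]]
      day_slope = rng.normal(40.0, 5.0, n_days)[df["day"]]
      df["pm25"] = (day_int + day_slope * df["aod"] + 0.1 * df["temp"]
                    + rng.normal(0.0, 1.0, len(df)))

      # Fixed AOD/temperature effects plus a random intercept and a random
      # AOD slope for each day, mirroring the calibration stage sketched above.
      model = smf.mixedlm("pm25 ~ aod + temp", df, groups=df["day"],
                          re_formula="~aod")
      print(model.fit().summary())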

  18. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of the power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and will lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
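
    The abstract does not give the form of the nonlinear regression, so the sketch below simply fits a hypothetical saturating curve relating a performance index to the size of the load command using scipy; the functional form and all numbers are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import curve_fit

      def index_model(load, a, b, c):
          # Assumed form: the index rises and saturates as the load command grows.
          return a * (1.0 - np.exp(-b * load)) + c

      load = np.linspace(10.0, 300.0, 30)                 # load commands (MW)
      rng = np.random.default_rng(1)
      observed = index_model(load, 0.9, 0.02, 0.05) + rng.normal(0, 0.02, 30)

      params, _ = curve_fit(index_model, load, observed, p0=[1.0, 0.01, 0.0])
      print("fitted (a, b, c):", np.round(params, 4))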

  19. Prediction models for CO2 emission in Malaysia using best subsets regression and multi-linear regression

    NASA Astrophysics Data System (ADS)

    Tan, C. H.; Matjafri, M. Z.; Lim, H. S.

    2015-10-01

    This paper presents prediction models that analyze and compute CO2 emissions in Malaysia. Each prediction model for CO2 emissions was analyzed for three main groups: transportation; electricity and heat production; and residential buildings together with commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best-subsets method was used to remove irrelevant variables, followed by multiple linear regression to produce the prediction models. From the results, a high R-squared (prediction) value was obtained, which implies that the models are reliable for predicting CO2 emissions from the given data. In addition, the CO2 emissions from these three groups were forecasted using trend analysis plots for observation purposes.
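
    A minimal version of the best-subsets-then-regression workflow can be written in a few lines; the sketch below enumerates every predictor subset, scores each linear fit by adjusted R2, and keeps the winner. The predictor names and data are invented stand-ins, not the World Bank series used in the paper.

      import itertools
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      names = ["transport", "electricity_heat", "residential", "services"]
      X = rng.normal(size=(40, len(names)))
      y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 40)  # toy CO2 series

      def adjusted_r2(r2, n, k):
          return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

      best_score, best_subset = -np.inf, None
      for k in range(1, len(names) + 1):
          for subset in itertools.combinations(range(len(names)), k):
              r2 = LinearRegression().fit(X[:, subset], y).score(X[:, subset], y)
              score = adjusted_r2(r2, len(y), k)
              if score > best_score:
                  best_score, best_subset = score, subset
      print("best subset:", [names[i] for i in best_subset],
            "adj. R2 =", round(best_score, 3))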

  20. DemQSAR: predicting human volume of distribution and clearance of drugs

    NASA Astrophysics Data System (ADS)

    Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter

    2011-12-01

    In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationships (QSAR/QSPR) correlate the structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically, a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data that are not characteristic for the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach that combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VDss) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user-defined property as well as a web interface for the prediction of human VDss and CL is available on the webpage of DemPRED: http://agknapp.chemie.fu-berlin.de/dempred/.

  1. DemQSAR: predicting human volume of distribution and clearance of drugs.

    PubMed

    Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter

    2011-12-01

    In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationships (QSAR/QSPR) correlate the structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically, a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data that are not characteristic for the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach that combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VD(ss)) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user-defined property as well as a web interface for the prediction of human VD(ss) and CL is available on the webpage of DemPRED: http://agknapp.chemie.fu-berlin.de/dempred/.
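
    DemQSAR itself is a Java application, but the core recipe, a regularized linear model wrapped in recursive feature elimination, can be sketched with scikit-learn; everything below, including the descriptor counts, is an assumed stand-in rather than DemQSAR's actual algorithm.

      from sklearn.datasets import make_regression
      from sklearn.feature_selection import RFE
      from sklearn.linear_model import Ridge

      # Many descriptors, few compounds: the regime where overtraining looms.
      X, y = make_regression(n_samples=60, n_features=500, n_informative=8,
                             noise=5.0, random_state=3)

      # L2 (quadratic) regularization controls overtraining; RFE recursively
      # drops the weakest descriptors until a small, interpretable set remains.
      selector = RFE(Ridge(alpha=1.0), n_features_to_select=10, step=0.2)
      selector.fit(X, y)
      print("kept descriptor indices:", list(selector.get_support(indices=True)))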

  2. Bubble generation during transformer overload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oommen, T.V.

    1990-03-01

    Bubble generation in transformers has been demonstrated under certain overload conditions. The release of large quantities of bubbles would pose a dielectric breakdown hazard. A bubble prediction model developed under EPRI Project 1289-4 attempts to predict the bubble evolution temperature under different overload conditions. This report details a verification study undertaken to confirm the validity of the above model using coil structures subjected to overload conditions. The test variables included moisture in paper insulation, gas content in oil, and the type of oil preservation system. Two aged coils were also tested. The results indicated that the observed bubble temperatures were close to the predicted temperatures for models with low initial gas content in the oil. The predicted temperatures were significantly lower than the observed temperatures for models with high gas content. Some explanations are provided for the anomalous behavior at high gas levels in oil. It is suggested that the dissolved gas content is not a significant factor in bubble evolution. The dominant factor in bubble evolution appears to be the water vapor pressure, which must reach critical levels before bubbles can be released. Further study is needed to make a meaningful revision of the bubble prediction model. 8 refs., 13 figs., 11 tabs.

  3. Statistical physics of interacting neural networks

    NASA Astrophysics Data System (ADS)

    Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido

    2001-12-01

    Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences, and its overlap with the sequence generator as well as its prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.

  4. Further experimentation on bubble generation during transformer overload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oommen, T.V.

    1992-03-01

    This report covers additional work done during 1990 and 1991 on gas bubble generation under overload conditions. To improve visual bubble detection, a single disc coil was used. To further improve detection, a corona device was also used which signaled the onset of corona activity in the early stages of bubble formation. A total of fourteen model tests were conducted, half of which used the Inertaire system, and the remaining, a conservator (COPS). Moisture content of paper in the coil varied from 1.0% to 8.0%; gas (nitrogen) content varied from 1.0% to 8.8%. The results confirmed earlier observations that the mathematical bubble prediction model was not valid for the high gas content models with relatively low moisture levels in the coil. An empirical relationship was formulated to accurately predict bubble evolution temperatures from known moisture and gas content values. For low moisture content models (below 2%), the simple Piper relationship was sufficient to predict bubble evolution temperatures, regardless of gas content. Moisture in the coil appears to be the key factor in bubble generation. Gas blanketed (Inertaire) systems do not appear to be prone to premature bubble generation from overloads as previously thought. The new bubble prediction model reveals that for a coil with 2% moisture, the bubble evolution temperature would be about 140 °C. Since old transformers in service may have as much as 2% moisture in paper, the 140 °C bubble evolution temperature may be taken as the lower limit of bubble evolution temperature under overload conditions for operating transformers. Drier insulation would raise the bubble evolution temperature.

  5. Energy Economics of Farm Biogas in Cold Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pillay, Pragasen; Grimberg, Stefan; Powers, Susan E

    Anaerobic digestion of farm and dairy waste has been shown to be capital intensive. One way to improve digester economics is to co-digest high-energy substrates together with the dairy manure. Cheese whey, for example, represents a high-energy substrate that is generated during cheese manufacture. There are currently no quantitative tools available that predict the performance of co-digestion farm systems. The goal of this project was to develop a mathematical tool that would (1) predict the impact of co-digestion and (2) determine the best use of the generated biogas for a cheese manufacturing plant. Two models were developed that could be used separately to meet both goals of the project. Given current pricing structures, the most economical use of the generated biogas at the cheese manufacturing plant was as a replacement for fuel oil to generate heat. The developed digester model accurately predicted the performance of 26 farm digesters operating in the northeastern U.S.

  6. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most of the developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables like population number, gross domestic product, electricity demand per capita and employment and unemployment numbers. In addition, variable selection procedures are also developed to select a significant explanatory variable. The model evaluation was performed using coefficient of determination (R(2)) and mean square error (MSE). The optimum model that produced the lowest testing MSE (2.46) and the highest R(2) (0.97) had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
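
    The NARX idea, feeding lagged outputs back in alongside exogenous drivers, can be imitated compactly; the sketch below uses one autoregressive lag of waste plus fabricated GDP, population and employment series and a small MLP (scikit-learn trains it with Adam or L-BFGS rather than the conjugate-gradient scheme used in the paper). All series are invented for illustration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)
      years = 30
      gdp = np.cumsum(rng.normal(2.0, 0.5, years))
      pop = np.cumsum(rng.normal(1.0, 0.2, years))
      emp = np.cumsum(rng.normal(0.5, 0.2, years))
      waste = 0.3 * gdp + 0.5 * pop + 0.1 * emp + rng.normal(0, 0.2, years)

      # One autoregressive lag of waste plus the current exogenous inputs.
      X = np.column_stack([waste[:-1], gdp[1:], pop[1:], emp[1:]])
      y = waste[1:]

      narx = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                          random_state=0).fit(X[:-5], y[:-5])
      test_mse = float(np.mean((narx.predict(X[-5:]) - y[-5:]) ** 2))
      print("test MSE:", round(test_mse, 4))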

  7. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
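
    The core of the BMA computation is an EM loop that alternates between membership probabilities and updates of the weights and per-model variances. The sketch below applies that loop to a fabricated three-member ensemble with Gaussian kernels; it omits the Box-Cox transformation and flow-interval stratification described above.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      T = 200
      truth = 10.0 + 3.0 * np.sin(np.linspace(0, 8, T))
      obs = truth + rng.normal(0, 0.3, T)
      # Three synthetic ensemble members with different biases and noise levels.
      preds = np.stack([truth + rng.normal(b, s, T)
                        for b, s in [(0.2, 0.5), (-0.5, 0.8), (0.0, 1.2)]])

      K = preds.shape[0]
      w, sigma = np.full(K, 1.0 / K), np.ones(K)
      for _ in range(100):                        # EM for BMA weights/variances
          lik = np.stack([w[k] * norm.pdf(obs, preds[k], sigma[k])
                          for k in range(K)])
          z = lik / lik.sum(axis=0)               # E-step: membership probabilities
          w = z.mean(axis=1)                      # M-step: updated weights...
          sigma = np.sqrt((z * (obs - preds) ** 2).sum(axis=1) / z.sum(axis=1))
      bma_mean = (w[:, None] * preds).sum(axis=0) # consensus prediction
      print("BMA weights:", np.round(w, 3))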

  8. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
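
    The trial-by-trial computation that participants are being compared against is ordinary Bayesian fusion of a bimodal prior with a Gaussian likelihood. The grid sketch below makes that explicit; all means and variances are invented for illustration.

      import numpy as np

      x = np.linspace(-10.0, 10.0, 2001)
      dx = x[1] - x[0]

      def gauss(x, mu, sd):
          return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

      # Learned location-contingent prior: two modes with different variances.
      prior = 0.5 * gauss(x, -4.0, 0.8) + 0.5 * gauss(x, 4.0, 2.5)
      # Noisy sensory evidence on a single trial.
      likelihood = gauss(x, 2.0, 2.0)

      posterior = prior * likelihood
      posterior /= posterior.sum() * dx            # normalize on the grid
      estimate = (x * posterior).sum() * dx        # posterior-mean estimate
      print("Bayes-optimal estimate:", round(float(estimate), 2))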

  9. Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. W.; Hood, Raleigh R.; Long, Wen

    The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic-empirical approach, whereby real-time output from the coupled physical-biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic-empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.

  10. Predictive modeling of infrared detectors and material systems

    NASA Astrophysics Data System (ADS)

    Pinkie, Benjamin

    Detectors sensitive to thermal and reflected infrared radiation are widely used for night-vision, communications, thermography, and object tracking among other military, industrial, and commercial applications. System requirements for the next generation of ultra-high-performance infrared detectors call for increased functionality such as large formats (> 4K HD) with wide field-of-view, multispectral sensitivity, and on-chip processing. Due to the low yield of infrared material processing, the development of these next-generation technologies has become prohibitively costly and time consuming. In this work, it will be shown that physics-based numerical models can be applied to predictively simulate infrared detector arrays of current technological interest. The models can be used to a priori estimate detector characteristics, intelligently design detector architectures, and assist in the analysis and interpretation of existing systems. This dissertation develops a multi-scale simulation model which evaluates the physics of infrared systems from the atomic (material properties and electronic structure) to systems level (modulation transfer function, dense array effects). The framework is used to determine the electronic structure of several infrared materials, optimize the design of a two-color back-to-back HgCdTe photodiode, investigate a predicted failure mechanism for next-generation arrays, and predict the systems-level measurables of a number of detector architectures.

  11. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels, they may cause damage both to humans and to structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being implemented at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which would replicate the same prediction methods as the previous codes but eliminate the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  12. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
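
    Two of the listed ingredients, cohort rebalancing and model-free classification under cross-validation, are easy to demonstrate in miniature. The sketch below upsamples the minority class and scores AdaBoost with 5-fold cross-validation on a synthetic imbalanced table; it is a toy stand-in, not the PPMI protocol.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.utils import resample

      # Imbalanced synthetic stand-in for a clinical/imaging feature table.
      X, y = make_classification(n_samples=600, n_features=30, weights=[0.85],
                                 random_state=6)

      # Rebalance by upsampling the minority class to match the majority.
      Xu, yu = resample(X[y == 1], y[y == 1],
                        n_samples=int((y == 0).sum()), random_state=0)
      Xb = np.vstack([X[y == 0], Xu])
      yb = np.concatenate([y[y == 0], yu])

      clf = AdaBoostClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, Xb, yb, cv=5)
      print("cross-validated accuracy per fold:", scores.round(3))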

  13. RECURSIVE PROTEIN MODELING: A DIVIDE AND CONQUER STRATEGY FOR PROTEIN STRUCTURE PREDICTION AND ITS CASE STUDY IN CASP9

    PubMed Central

    CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN

    2013-01-01

    After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques, template-based modeling and template-free modeling, have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein, but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379

  14. Chemically Aware Model Builder (camb): an R package for property and bioactivity modelling of small molecules.

    PubMed

    Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C

    2015-01-01

    In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint-type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole-protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models: a complete model building workflow in one package.

  15. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamant, A; Ybarra, N; Seuntjens, J

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting said patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible to accurately model the prognosis of an individual lung SBRT patient's treatment.

  16. On the virtues of automated quantitative structure-activity relationship: the new kid on the block.

    PubMed

    de Oliveira, Marcelo T; Katekawa, Edson

    2018-02-01

    Quantitative structure-activity relationship (QSAR) has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence in the interest in QSAR. In this context, the rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of improved or similar quality to those generated by practitioners in the field, but in just a fraction of the time. Despite the potential of the concept to benefit the community, the AutoQSAR opportunity has been largely undervalued.

  17. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  18. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  19. Patterns of waste generation: A gradient boosting model for short-term waste prediction in New York City.

    PubMed

    Johnson, Nicholas E; Ianiuk, Olga; Cazap, Daniel; Liu, Linglan; Starobin, Daniel; Dobler, Gregory; Ghandehari, Masoud

    2017-04-01

    Historical municipal solid waste (MSW) collection data supplied by the New York City Department of Sanitation (DSNY) were used in conjunction with other datasets related to New York City to forecast municipal solid waste generation across the city. Spatiotemporal tonnage data from the DSNY were combined with external data sets, including the Longitudinal Employer Household Dynamics data, the American Community Survey, the New York City Department of Finance's Primary Land Use and Tax Lot Output data, and historical weather data to build a gradient boosting regression model. The model was trained on historical data from 2005 to 2011, and validation was performed both temporally and spatially. With this model, we are able to accurately (R2 > 0.88) forecast weekly MSW generation tonnages for each of the 232 geographic sections in NYC across the three waste streams of refuse, paper and metal/glass/plastic. Importantly, the model identifies the regularity of urban waste generation and is also able to capture very short timescale fluctuations associated with holidays, special events, seasonal variations, and weather-related events. This research shows New York City's waste generation trends and the importance of comprehensive data collection (especially of weather patterns) in order to accurately predict waste generation. Copyright © 2017. Published by Elsevier Ltd.
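
    The modeling step reduces to a supervised regression with a temporal hold-out. The sketch below trains scikit-learn's gradient boosting regressor on fabricated weekly features (seasonality terms, a population proxy, temperature) and validates on the final weeks, mimicking the temporal validation described above with invented data.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(7)
      weeks = np.arange(300)
      X = np.column_stack([np.sin(2 * np.pi * weeks / 52),   # yearly seasonality
                           np.cos(2 * np.pi * weeks / 52),
                           rng.normal(50.0, 5.0, 300),       # population proxy
                           rng.normal(12.0, 8.0, 300)])      # weekly temperature
      tons = (100.0 + 15.0 * X[:, 0] + 0.5 * X[:, 2] - 0.3 * X[:, 3]
              + rng.normal(0, 2.0, 300))

      train = weeks < 250                                    # temporal split
      gbr = GradientBoostingRegressor(random_state=0)
      gbr.fit(X[train], tons[train])
      print("held-out R2:", round(gbr.score(X[~train], tons[~train]), 3))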

  20. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  1. Selenide isotope generator for the Galileo mission. SIG/Galileo contract compliance power prediction technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, T.E.; Srinivas, V.

    1978-11-01

    This initial definition of the power degradation prediction technique outlines a model for predicting SIG/Galileo mean EOM power using component test data and data from a module power degradation demonstration test program. (LCL)

  2. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared to the traditional statistical approach. GP, whose primary advantage is that it generates explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models produced by GP agreed with the statistical results, and the most predictive GP models were significantly improved relative to the statistical models, as assessed using ANOVA. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With the capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
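
    For readers who want to try the approach, the third-party gplearn package offers a GP symbolic regressor with a scikit-learn interface; the sketch below (with made-up descriptors and a hidden target equation) evolves an explicit formula much as described, though it is not the implementation used in the paper.

      import numpy as np
      from gplearn.genetic import SymbolicRegressor   # pip install gplearn

      rng = np.random.default_rng(8)
      X = rng.normal(size=(100, 3))                   # toy molecular descriptors
      y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]           # hidden "true" relationship

      gp = SymbolicRegressor(population_size=500, generations=15,
                             function_set=("add", "sub", "mul", "div"),
                             random_state=0)
      gp.fit(X, y)
      print(gp._program)                              # best evolved equation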

  3. Development of numerical model for predicting heat generation and temperatures in MSW landfills.

    PubMed

    Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A

    2013-10-01

    A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated to the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First a step-function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes. Copyright © 2013 Elsevier Ltd. All rights reserved.
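
    The essential numerics, transient one-dimensional conduction with a time-dependent heat generation term and the two boundary conditions named above, fit in a short explicit finite-difference loop. The sketch below uses finite differences rather than finite elements, and every coefficient (diffusivity, heat capacity, generation curve) is an illustrative guess, not a calibrated value from the paper.

      import numpy as np

      L, nx = 10.0, 51                    # waste column depth (m) and nodes
      dx = L / (nx - 1)
      dt, nt = 3600.0, 8760               # one-hour steps for one year
      alpha = 3e-7                        # thermal diffusivity (m2/s), assumed
      rho_c = 2.0e6                       # volumetric heat capacity (J/m3 K)
      YEAR = 3.15e7                       # seconds per year

      def heat_gen(t):
          # Assumed exponential growth-decay heat generation curve (W/m3).
          t_yr = t / YEAR
          return 1.0 * (1.0 - np.exp(-2.0 * t_yr)) * np.exp(-0.2 * t_yr)

      T = np.full(nx, 20.0)               # initial temperature (deg C)
      for n in range(nt):
          t = n * dt
          Tn = T.copy()
          T[1:-1] = (Tn[1:-1]
                     + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
                     + heat_gen(t) * dt / rho_c)
          T[0] = 20.0 + 10.0 * np.sin(2 * np.pi * t / YEAR)   # seasonal surface
          T[-1] = 15.0                                        # far-field boundary
      print("peak waste temperature (deg C):", round(float(T.max()), 1))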

  4. Improved Decadal Climate Prediction in the North Atlantic using EnOI-Assimilated Initial Condition

    NASA Astrophysics Data System (ADS)

    Li, Q.; Xin, X.; Wei, M.; Zhou, W.

    2017-12-01

    The decadal prediction experiments of the Beijing Climate Center climate system model version 1.1 (BCC-CSM1.1) that participated in the Coupled Model Intercomparison Project Phase 5 (CMIP5) had poor skill in the extratropics of the North Atlantic; their initialization was done by relaxing the modeled ocean temperature to the Simple Ocean Data Assimilation (SODA) reanalysis data. This study aims to improve the prediction skill of this model by using an assimilation technique in the initialization. New ocean data are first generated by assimilating the sea surface temperature (SST) of the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) dataset into the ocean model of BCC-CSM1.1 via Ensemble Optimum Interpolation (EnOI). Then a suite of decadal re-forecasts launched annually over the period 1961-2005 is carried out, with the simulated ocean temperature restored to the assimilated ocean data. Comparisons between the re-forecasts and the previous CMIP5 forecasts show that the re-forecasts are more skillful for mid-to-high latitude SST in the North Atlantic. Improved prediction skill is also found for the Atlantic Multidecadal Oscillation (AMO), which is consistent with the better skill of the Atlantic meridional overturning circulation (AMOC) predicted by the re-forecasts. We conclude that the EnOI assimilation generates better ocean data than the SODA reanalysis for initializing decadal climate predictions with the BCC-CSM1.1 model.
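
    EnOI is essentially a Kalman-type analysis step in which a static historical ensemble supplies the background covariance. The toy sketch below performs one such update on fabricated vectors; the dimensions, covariances and observation operator are all assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(9)
      n, m, N = 100, 10, 30                  # state dim, obs dim, ensemble size

      ens = rng.normal(size=(n, N))          # static ensemble (e.g. model SSTs)
      A = ens - ens.mean(axis=1, keepdims=True)
      B = A @ A.T / (N - 1)                  # background covariance estimate

      H = np.zeros((m, n))                   # observe every 10th state element
      H[np.arange(m), np.arange(0, n, n // m)] = 1.0
      R = 0.2 * np.eye(m)                    # observation error covariance

      xb = np.zeros(n)                       # model background state
      y = rng.normal(size=m)                 # observations (e.g. HadISST SST)

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
      xa = xb + K @ (y - H @ xb)             # analysis used for initialization
      print("analysis increment norm:", round(float(np.linalg.norm(xa - xb)), 3))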

  5. A unified internal model theory to resolve the paradox of active versus passive self-motion sensation

    PubMed Central

    Angelaki, Dora E

    2017-01-01

    Brainstem and cerebellar neurons implement an internal model to accurately estimate self-motion during externally generated (‘passive’) movements. However, these neurons show reduced responses during self-generated (‘active’) movements, indicating that predicted sensory consequences of motor commands cancel sensory signals. Remarkably, the computational processes underlying sensory prediction during active motion and their relationship to internal model computations during passive movements remain unknown. We construct a Kalman filter that incorporates motor commands into a previously established model of optimal passive self-motion estimation. The simulated sensory error and feedback signals match experimentally measured neuronal responses during active and passive head and trunk rotations and translations. We conclude that a single sensory internal model can combine motor commands with vestibular and proprioceptive signals optimally. Thus, although neurons carrying sensory prediction error or feedback signals show attenuated modulation, the sensory cues and internal model are both engaged and critically important for accurate self-motion estimation during active head movements. PMID:29043978
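
    The flavor of the proposal can be conveyed with a one-dimensional Kalman filter in which the motor command enters the state prediction as a control input, so that during active movement the innovation (the sensory prediction error) stays small. All dynamics and noise values below are invented for illustration.

      import numpy as np

      a, b = 0.9, 1.0                      # state dynamics and control gain
      q, r = 0.01, 0.05                    # process and measurement noise variances
      x, x_hat, p = 0.0, 0.0, 1.0
      rng = np.random.default_rng(10)

      for t in range(50):
          u = np.sin(t / 5.0)              # efference copy of the motor command
          x = a * x + b * u + rng.normal(0, np.sqrt(q))   # true head velocity
          y = x + rng.normal(0, np.sqrt(r))               # vestibular afference

          x_pred = a * x_hat + b * u       # internal-model prediction
          p_pred = a * p * a + q
          innov = y - x_pred               # sensory prediction error
          k = p_pred / (p_pred + r)        # Kalman gain
          x_hat = x_pred + k * innov
          p = (1.0 - k) * p_pred
      print("final estimate vs truth:", round(x_hat, 3), round(x, 3))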

  6. A mechanistic model for mercury capture with in situ-generated titania particles: role of water vapor.

    PubMed

    Rodríguez, Sylian; Almquist, Catherine; Lee, Tai Gyu; Furuuchi, Masami; Hedrick, Elizabeth; Biswas, Pratim

    2004-02-01

    A mechanistic model to predict the capture of gas-phase mercury (Hg) species using in situ-generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model for photochemical reactions by Almquist and Biswas that accounts for the rates of electron-hole pair generation, the adsorption of the compound to be oxidized, and the adsorption of water vapor. The role of water vapor in the removal efficiency of Hg was investigated to evaluate the rates of Hg oxidation at different water vapor concentrations. As the water vapor concentration is increased, more hydroxy radical species are generated on the surface of the titania particle, increasing the number of active sites for the photooxidation and capture of Hg. At very high water vapor concentrations, competitive adsorption is expected to be important and reduce the number of sites available for photooxidation of Hg. The predictions of the developed phenomenological model agreed well with the measured Hg oxidation rates in this study and with the data on oxidation of organic compounds reported in the literature.

  7. Active inference, communication and hermeneutics☆

    PubMed Central

    Friston, Karl J.; Frith, Christopher D.

    2015-01-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others – during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions – both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then – in principle – they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. PMID:25957007

  8. Active inference, communication and hermeneutics.

    PubMed

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.

  10. Shuttle data book: SRM fragment velocity model. Presented to the SRB Fragment Model Review Panel

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This study was undertaken to determine the velocity of fragments generated by the range safety destruction (RSD) or random failure of a Space Transportation System (STS) Solid Rocket Motor (SRM). The specific requirement was to provide a fragment model for use in those Galileo and Ulysses RTG safety analyses concerned with possible fragment impact on the spacecraft radioisotope thermoelectric generators (RTGs). Good agreement was obtained between predictions and observations for fragment velocity, velocity distributions, azimuths, and rotation rates. Based on this agreement with the entire data base, the model was used to predict the probable fragment environments which would occur in the event of an STS-SRM RSD or random failure at 10, 74, 84 and 110 seconds. The results of these predictions are the basis of the fragment environments presented in the Shuttle Data Book (NSTS-08116). The information presented here is in viewgraph form.

  11. Forward modelling requires intention recognition and non-impoverished predictions.

    PubMed

    de Ruiter, Jan P; Cummins, Chris

    2013-08-01

    We encourage Pickering & Garrod (P&G) to implement this promising theory in a computational model. The proposed theory crucially relies on having an efficient and reliable mechanism for early intention recognition. Furthermore, the generation of impoverished predictions is incompatible with a number of key phenomena that motivated P&G's theory. Explaining these phenomena requires fully specified perceptual predictions in both comprehension and production.

  12. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.

    PubMed

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiling of drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. Therefore, it is of high importance to predict the DTI profiling of drugs reliably and rapidly on a genome-scale level. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models. Ensemble learning from these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server will predict the activity of the user's molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profiling that can be used as a feature vector of chemicals for wide applications. The 623 SAR models related to 623 human proteins were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiling to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, which sufficiently demonstrated the wide application value of the potential DTI profiling. The TargetNet webserver is designed based on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com.
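
    A single target model in this spirit is straightforward to assemble: Bernoulli naive Bayes over binary Morgan fingerprints. The sketch below assumes RDKit is installed and uses placeholder SMILES and activity labels; it is one illustrative model, not TargetNet's 623 validated ones.

      import numpy as np
      from rdkit import Chem
      from rdkit.Chem import AllChem
      from sklearn.naive_bayes import BernoulliNB

      smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)O", "c1ccccc1N", "CCCC"]
      labels = [1, 0, 1, 0, 1, 0]           # active/inactive against one protein

      def fp(smi, n_bits=1024):
          # Morgan (circular) bit-fingerprint of radius 2.
          mol = Chem.MolFromSmiles(smi)
          return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, n_bits))

      X = np.vstack([fp(s) for s in smiles])
      model = BernoulliNB().fit(X, labels)
      query = fp("CCOc1ccccc1")             # new molecule to profile
      print("P(active):", round(float(model.predict_proba([query])[0, 1]), 3))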

  13. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models

    NASA Astrophysics Data System (ADS)

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiles of drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. It is therefore highly important to predict the DTI profiles of drugs reliably and rapidly on a genome-wide scale. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models, and ensemble learning over these fingerprints was provided to improve predictive ability. When the user submits a molecule, the server predicts the activity of the molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profile that can be used as a feature vector of chemicals for wide applications. The 623 SAR models, one per human protein, were strictly evaluated and validated by several model validation strategies, yielding AUC scores of 75-100%. We applied the generated DTI profiles to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, demonstrating the wide applicability of the predicted DTI profiles. The TargetNet webserver is designed based on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com.

  14. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  15. The prediction of human skin responses by using the combined in vitro fluorescein leakage/Alamar Blue (resazurin) assay.

    PubMed

    Clothier, Richard; Starzec, Gemma; Pradel, Lionel; Baxter, Victoria; Jones, Melanie; Cox, Helen; Noble, Linda

    2002-01-01

    A range of cosmetics formulations with human patch-test data were supplied in a coded form, for the examination of the use of a combined in vitro permeability barrier assay and cell viability assay to generate, and then test, a prediction model for assessing potential human skin patch-test results. The target cells employed were of the Madin Darby canine kidney cell line, which establishes tight junctions and adherens junctions able to restrict the permeability of sodium fluorescein across the barrier of the confluent cell layer. The prediction model for interpretation of the in vitro assay results included initial effects and the recovery profile over 72 hours. A set of the hand-wash, surfactant-based formulations was tested to generate the prediction model, and then six others were evaluated. The model system was then also evaluated with powder laundry detergents and hand moisturisers: their effects were predicted by the in vitro test system. The model was under-predictive for two of the ten hand-wash products. It was over-predictive for two of the six moisturisers and for eight of the ten laundry powders. However, the in vivo human patch-test data were variable, and 19 of the 26 predictions were correct or within 0.5 on the 0-4.0 scale used for the in vivo scores, i.e. within the same variable range reported for the repeat-test hand-wash in vivo data.

  16. The natural mathematics of behavior analysis.

    PubMed

    Li, Don; Hautus, Michael J; Elliffe, Douglas

    2018-04-19

    Models that generate event records have very general scope regarding the dimensions of the target behavior that we measure. From a set of predicted event records, we can generate predictions for any dependent variable that we could compute from the event records of our subjects. In this sense, models that generate event records permit us a freely multivariate analysis. To explore this proposition, we conducted a multivariate examination of Catania's Operant Reserve on single VI schedules in transition using a Markov Chain Monte Carlo scheme for Approximate Bayesian Computation. Although we found systematic deviations between our implementation of Catania's Operant Reserve and our observed data (e.g., mismatches in the shape of the interresponse time distributions), the general approach that we have demonstrated represents an avenue for modelling behavior that transcends the typical constraints of algebraic models. © 2018 Society for the Experimental Analysis of Behavior.

  17. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
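
    The dual-input structure described above can be made concrete with a small sketch. The code below implements a generic discrete-time, second-order, dual-input Volterra predictor in which the thresholded membrane potential is fed back as the second input; the kernel values, memory length and gating rule are illustrative assumptions, not the authors' estimated kernels.

```python
# Minimal dual-input Volterra sketch: exogenous input x (injected current)
# and a threshold-gated feedback input z (suprathreshold membrane potential).
import numpy as np

def volterra_dual_predict(x, z, k0, k1x, k1z, k2x):
    """k1x, k1z: first-order kernels (length M); k2x: MxM second-order
    self-kernel for the exogenous input."""
    M = len(k1x)
    y = np.zeros(len(x))
    for n in range(M, len(x)):
        xs = x[n - M:n][::-1]        # most recent samples first
        zs = z[n - M:n][::-1]
        y[n] = (k0
                + k1x @ xs           # linear response to injected current
                + k1z @ zs           # linear response to suprathreshold feedback
                + xs @ k2x @ xs)     # second-order self-interaction
        # a cross-kernel term, e.g. xs @ k2xz @ zs, would capture the
        # exogenous/autoregressive interactions the abstract calls necessary
    return y

def gate(v, theta):
    """Second input: membrane potential passed through only above threshold."""
    return np.where(v > theta, v, 0.0)
```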

  18. Cascade aeroacoustics including steady loading effects

    NASA Astrophysics Data System (ADS)

    Chiang, Hsiao-Wei D.; Fleeter, Sanford

    A mathematical model is developed to analyze the effects of airfoil and cascade geometry, steady aerodynamic loading, and the characteristics of the unsteady flow field on the discrete frequency noise generation of a blade row in an incompressible flow. The unsteady lift which generates the noise is predicted with a complex first-order cascade convected gust analysis. This model was then applied to the Gostelow airfoil cascade and variations, demonstrating that steady loading, cascade solidity, and the gust direction are significant. Also, even at zero incidence, the classical flat plate cascade predictions are unacceptable.

  19. Modeling protein complexes with BiGGER.

    PubMed

    Krippahl, Ludwig; Moura, José J; Palma, P Nuno

    2003-07-01

    This article describes the method and results of our participation in the Critical Assessment of PRediction of Interactions (CAPRI) experiment, using the protein docking program BiGGER (Bimolecular complex Generation with Global Evaluation and Ranking) (Palma et al., Proteins 2000;39:372-384). Of five target complexes (CAPRI targets 2, 4, 5, 6, and 7), only one was successfully predicted (target 6), but BiGGER generated reasonable models for targets 4, 5, and 7, which could have been identified if additional biochemical information had been available. Copyright 2003 Wiley-Liss, Inc.

  20. A study of sound generation in subsonic rotors, volume 1

    NASA Technical Reports Server (NTRS)

    Chalupnik, J. D.; Clark, L. T.

    1975-01-01

    A model for the prediction of wake related sound generation by a single airfoil is presented. It is assumed that the net force fluctuation on an airfoil may be expressed in terms of the net momentum fluctuation in the near wake of the airfoil. The forcing function for sound generation depends on the spectra of the two point velocity correlations in the turbulent region near the airfoil trailing edge. The spectra of the two point velocity correlations were measured for the longitudinal and transverse components of turbulence in the wake of a 91.4 cm chord airfoil. A scaling procedure was developed using the turbulent boundary layer thickness. The model was then used to predict the radiated sound from a 5.1 cm chord airfoil. Agreement between the predicted and measured sound radiation spectra was good. The single airfoil results were extended to a rotor geometry, and various aerodynamic parameters were studied.

  1. An Optimization-Based System Model of Disturbance-Generated Forest Biomass Utilization

    ERIC Educational Resources Information Center

    Curry, Guy L.; Coulson, Robert N.; Gan, Jianbang; Tchakerian, Maria D.; Smith, C. Tattersall

    2008-01-01

    Disturbance-generated biomass results from endogenous and exogenous natural and cultural disturbances that affect the health and productivity of forest ecosystems. These disturbances can create large quantities of plant biomass on predictable cycles. A systems analysis model has been developed to quantify aspects of system capacities (harvest,…

  2. Chronic contamination decreases disease spread: a Daphnia–fungus–copper case study

    PubMed Central

    Civitello, David J.; Forys, Philip; Johnson, Adam P.; Hall, Spencer R.

    2012-01-01

    Chemical contamination and disease outbreaks have increased in many ecosystems. However, connecting pollution to disease spread remains difficult, in part, because contaminants can simultaneously exert direct and multi-generational effects on several host and parasite traits. To address these challenges, we parametrized a model using a zooplankton–fungus–copper system. In individual-level assays, we considered three sublethal contamination scenarios: no contamination, single-generation contamination (hosts and parasites exposed only during the assays) and multi-generational contamination (hosts and parasites exposed for several generations prior to and during the assays). Contamination boosted transmission by increasing contact of hosts with parasites. However, it diminished parasite reproduction by reducing the size and lifespan of infected hosts. Multi-generational contamination further reduced parasite reproduction. The parametrized model predicted that a single generation of contamination would enhance disease spread (via enhanced transmission), whereas multi-generational contamination would inhibit epidemics relative to unpolluted conditions (through greatly depressed parasite reproduction). In a population-level experiment, multi-generational contamination reduced the size of experimental epidemics but did not affect Daphnia populations without disease. This result highlights the importance of multi-generational effects for disease dynamics. Such integration of models with experiments can provide predictive power for disease problems in contaminated environments. PMID:22593104

  3. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

    Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when that model was not used to generate the data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these suggestions should help lower the probability of erroneous learning in mallard AHM and adaptive management in general.
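
    The weight-updating scheme underlying this comparison of predictive abilities can be illustrated compactly. The following is a hedged sketch of Bayesian model weighting as commonly used in AHM: each year, every model's weight is multiplied by the likelihood of the observed population size under that model's prediction and renormalised. The normal predictive densities and all numbers are illustrative, not values from the study.

```python
# Hedged sketch of annual model-weight updating: weights move in
# proportion to each model's predictive likelihood for the observation.
import numpy as np
from scipy.stats import norm

def update_weights(weights, predictions, pred_sds, observation):
    likelihoods = norm.pdf(observation, loc=predictions, scale=pred_sds)
    posterior = weights * likelihoods
    return posterior / posterior.sum()

w = np.full(4, 0.25)                                   # four models, equal priors
w = update_weights(w,
                   predictions=np.array([7.1, 7.9, 8.4, 6.8]),  # predicted index
                   pred_sds=np.array([0.5, 0.5, 0.8, 0.8]),     # predictive SDs
                   observation=7.6)                             # observed index
print(w)   # better-predicting models gain weight
```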

  4. Towards the Next Generation of Space Environment Prediction Capabilities.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M. M.

    2015-12-01

    Since its establishment more than 15 years ago, the Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) has served as an access point to an expanding collection of state-of-the-art space environment models and frameworks, as well as a hub for collaborative development of next-generation space weather forecasting systems. In partnership with model developers and international research and operational communities, the CCMC integrates new data streams and models from diverse sources into end-to-end space weather impact prediction systems, identifies weak links in data-model and model-model coupling, and leads community efforts to fill those gaps. The presentation will highlight the latest developments and progress in CCMC-led community-wide projects on testing, prototyping, and validation of models, forecasting techniques and procedures, and outline ideas on accelerating the implementation of new capabilities in space weather operations.

  5. Urban RoGeR: Merging process-based high-resolution flash flood model for urban areas with long-term water balance predictions

    NASA Astrophysics Data System (ADS)

    Weiler, M.

    2016-12-01

    Flash floods induced by heavy rain remain a serious hazard and cause extensive damage in urban areas. In spatially complex urban areas in particular, the temporal and spatial patterns of runoff generation processes during extreme rainfall events need to be predicted across a wide spatial range, including the specific effects of green infrastructure and urban forests. In addition, the initial conditions (soil moisture pattern, water storage of green infrastructure) and the effect of lateral redistribution of water (run-on effects and re-infiltration) have to be included in order to realistically predict flash flood generation. We further developed the distributed, process-based model RoGeR (Runoff Generation Research) to include the relevant features and processes in urban areas, in order to test the effects of different settings, initial conditions and the lateral redistribution of water on the predicted flood response. The uncalibrated model RoGeR runs at a spatial resolution of 1*1 m² (LiDAR, degree of sealing, land use), with soil properties and geology at 1:50,000. In addition, different green infrastructures are included in the model, as well as the effect of trees on interception and transpiration. A hydraulic model was included in RoGeR to predict surface runoff, water redistribution, and re-infiltration. During rainfall events, RoGeR predicts at 5 min temporal resolution, while evapotranspiration and groundwater recharge during rain-free periods are simulated at a longer time step. The model framework was applied to several case studies in Germany where intense rainfall events produced flash floods causing high damage in urban areas, and to a long-term research catchment in an urban setting (Vauban, Freiburg), where a variety of green infrastructures dominates the hydrology. Urban-RoGeR allowed us to study the effects of different green infrastructures on reducing the flood peak, as well as their effect on the water balance (evapotranspiration and groundwater recharge). We could also show that infiltration of surface runoff from areas with low infiltration capacity (lateral redistribution) reduces the flood peaks by over 90% in certain areas and situations. Finally, we evaluated the model against long-term runoff observations (surface runoff, ET, roof runoff) and against flood marks in the selected case studies.

  6. Predictive Modeling of Rice Yellow Stem Borer Population Dynamics under Climate Change Scenarios in Indramayu

    NASA Astrophysics Data System (ADS)

    Nurhayati, E.; Koesmaryono, Y.; Impron

    2017-03-01

    Rice Yellow Stem Borer (YSB) is one of the major insect pests of rice, with high attack intensity in rice production centres, especially in West Java. This holometabolous insect damages rice in the vegetative phase (deadheart) as well as in the generative phase (whitehead). Climatic factors are among the environmental factors that influence population dynamics. The purpose of this study was to develop a predictive model of YSB population dynamics under climate change scenarios (2016-2035 period) using the Dymex model in the Indramayu area, West Java. The YSB model required two main components, namely climate parameters and the lower developmental temperature threshold (To) of YSB, to describe the YSB life cycle in every phase. Calibration and validation of the model yielded coefficients of determination (R2) between predictions and observations of 0.74 and 0.88, respectively, for the study area; the model was able to describe development, mortality, transfer of individuals from one life stage to the next, as well as fecundity and YSB reproduction. Under baseline climate conditions, population abundance tended to peak (outbreak) when rainfall intensity changed during the transition from the rainy season to the dry season, or under the opposite conditions. Under both climate change scenarios, the model outputs were generated well and were able to predict the pattern of YSB population dynamics, with an increasing trend in population numbers and generations per season, and a shifting pattern of peak population abundance under future climatic conditions. These results can be adopted as a tool to predict outbreaks and to give early warning for more effective YSB control.

  7. Generation, Analysis and Characterization of Anisotropic Engineered Meta Materials

    NASA Astrophysics Data System (ADS)

    Trifale, Ninad T.

    A methodology for the systematic generation of highly anisotropic micro-lattice structures was investigated. Multiple algorithms for generating and validating engineered structures were developed and evaluated. The set of all possible permutations of structures for an 8-node cubic unit cell was considered, and the degree of anisotropy of meta-properties in heat transport and mechanical elasticity was evaluated. Feasibility checks were performed to ensure that the generated unit cell network was repeatable and formed a continuous lattice structure. Four different strategies for generating permutations of the structures are discussed. Analytical models were developed to predict the effective thermal, mechanical and permeability characteristics of these cellular structures. Experimentation and numerical modeling techniques were used to validate the developed models. A self-consistent mechanical elasticity model was developed which connects the meso-scale properties to the stiffness of individual struts. A three-dimensional thermal resistance network analogy was used to evaluate the effective thermal conductivity of the structures: the struts were modeled as a network of one-dimensional thermal resistive elements and the effective conductivity evaluated. The models were validated against numerical simulations and experimental measurements on 3D printed samples. A model was developed to predict the effective permeability of these engineered structures based on Darcy's law; drag coefficients were evaluated for individual connections in the transverse and longitudinal directions, and an interaction term was calibrated from experimental data in the literature in order to predict permeability. A generic optimization framework coupled to a finite element solver was developed for analyzing any application involving the use of porous structures. Objective functions were generated to address the frequently observed trade-offs between stiffness, thermal conductivity, permeability and porosity. Three applications were analyzed for potential use of engineered materials: a heat spreader application involving thermal and mechanical constraints, an artificial bone graft application involving mechanical and permeability constraints, and structural materials applications involving mechanical, thermal and porosity constraints. Recommendations for optimum topologies for specific operating conditions are provided.

  8. Finite difference time domain grid generation from AMC helicopter models

    NASA Technical Reports Server (NTRS)

    Cravey, Robin L.

    1992-01-01

    A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.

  9. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
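
    The direct collocation idea can be shown in miniature. The sketch below transcribes a toy optimal control problem (a 1-degree-of-freedom point mass moving rest-to-rest while minimising control effort) with trapezoidal collocation and solves it with scipy's SLSQP; it is a simplified stand-in for the paper's OpenSim/MATLAB framework, which uses IPOPT and musculoskeletal dynamics.

```python
# Trapezoidal direct collocation for a point mass: the decision vector holds
# position, velocity and control at N grid points; the dynamics become
# algebraic "defect" equality constraints.
import numpy as np
from scipy.optimize import minimize

N, T, m = 21, 1.0, 1.0                     # grid points, horizon (s), mass (kg)
h = T / (N - 1)

def unpack(w):
    return w[:N], w[N:2*N], w[2*N:]        # x, v, u

def objective(w):
    _, _, u = unpack(w)
    return h * np.sum(u**2)                # minimise control effort

def defects(w):                            # trapezoidal dynamics constraints
    x, v, u = unpack(w)
    cx = x[1:] - x[:-1] - 0.5*h*(v[1:] + v[:-1])
    cv = v[1:] - v[:-1] - 0.5*h*(u[1:] + u[:-1]) / m
    return np.concatenate([cx, cv])

def boundary(w):                           # rest-to-rest, 0 -> 1 m
    x, v, _ = unpack(w)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

w0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
sol = minimize(objective, w0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, v_opt, u_opt = unpack(sol.x)
```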

  10. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  11. Impact of relationships between test and training animals and among training animals on reliability of genomic prediction.

    PubMed

    Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G

    2015-10-01

    One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction. © 2015 Blackwell Verlag GmbH.
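
    For readers unfamiliar with GBLUP, the core computation is small. Below is a hedged sketch: a VanRaden-style genomic relationship matrix is built from SNP genotypes, and breeding values are obtained from the equivalent-model BLUP equations. Matrix sizes, genotype coding and the heritability value are illustrative, and the study's Bayesian mixture model is not reproduced here.

```python
# GBLUP sketch: genomic relationship matrix + BLUP of breeding values.
import numpy as np

def grm(M):
    """M: animals x SNPs genotype matrix coded 0/1/2 (VanRaden method 1)."""
    p = M.mean(axis=0) / 2.0                        # allele frequencies
    Z = M - 2.0 * p                                 # centre genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(y, G, h2):
    """Breeding values u = G (G + lambda*I)^-1 (y - ybar),
    with lambda the residual-to-genetic variance ratio."""
    lam = (1.0 - h2) / h2
    return G @ np.linalg.solve(G + lam * np.eye(len(y)), y - y.mean())

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(200, 5000))            # 200 animals, 5000 SNPs
y = rng.normal(size=200)                            # phenotypes (synthetic)
u_hat = gblup(y, grm(M), h2=0.4)
```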

  12. Indexing sensory plasticity: Evidence for distinct Predictive Coding and Hebbian learning mechanisms in the cerebral cortex.

    PubMed

    Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D

    2018-04-30

    The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface, as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to obtain information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation designs, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first case, observed K values were evaluated using the random field generator alone as the forward model. In the second case, measured head values were incorporated into the evaluation in addition to the K values obtained through ERT, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE are used in combination in predicting the spatial distribution of K.
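
    The genetic-algorithm loop itself is simple to sketch. The code below minimises the total squared error between a candidate K field and "measured" values for a small discretised field; it omits the forward flow and transport model (BIO2D-KE) and the FFT random field generator, so it stands in only for the selection, crossover and mutation mechanics.

```python
# Minimal GA sketch for the squared-error objective described above.
import numpy as np

rng = np.random.default_rng(0)
K_obs = rng.lognormal(0.0, 1.0, size=25)             # "measured" K field (synthetic)

def misfit(K):
    return np.sum((K - K_obs)**2)                    # total squared error

pop = rng.lognormal(0.0, 1.0, size=(40, 25))         # initial population
for generation in range(200):
    scores = np.array([misfit(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]           # truncation selection
    kids = parents[rng.integers(0, 20, 20)].copy()
    mask = rng.random(kids.shape) < 0.5              # uniform crossover
    kids[mask] = parents[rng.integers(0, 20, 20)][mask]
    kids *= rng.lognormal(0.0, 0.05, kids.shape)     # multiplicative mutation
    pop = np.vstack([parents, kids])
best = pop[np.argmin([misfit(ind) for ind in pop])]
```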

  14. Further experimentation on bubble generation during transformer overload. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oommen, T.V.

    1992-03-01

    This report covers additional work done during 1990 and 1991 on gas bubble generation under overload conditions. To improve visual bubble detection, a single disc coil was used. To further improve detection, a corona device was also used, which signaled the onset of corona activity in the early stages of bubble formation. A total of fourteen model tests were conducted, half of which used the Inertaire system and the remaining, a conservator (COPS). Moisture content of paper in the coil varied from 1.0% to 8.0%; gas (nitrogen) content varied from 1.0% to 8.8%. The results confirmed earlier observations that the mathematical bubble prediction model was not valid for high-gas-content models with relatively low moisture levels in the coil. An empirical relationship was formulated to accurately predict bubble evolution temperatures from known moisture and gas content values. For low-moisture-content models (below 2%), the simple Piper relationship was sufficient to predict bubble evolution temperatures, regardless of gas content. Moisture in the coil appears to be the key factor in bubble generation. Gas-blanketed (Inertaire) systems do not appear to be prone to premature bubble generation from overloads, as previously thought. The new bubble prediction model reveals that for a coil with 2% moisture, the bubble evolution temperature would be about 140 °C. Since old transformers in service may have as much as 2% moisture in paper, 140 °C may be taken as the lower limit of bubble evolution temperature under overload conditions for operating transformers. Drier insulation would raise the bubble evolution temperature.

  15. Real-time stylistic prediction for whole-body human motions.

    PubMed

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Development of a fourth generation predictive capability maturity model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  17. Protein Structure and Function Prediction Using I-TASSER

    PubMed Central

    Yang, Jianyi; Zhang, Yang

    2016-01-01

    I-TASSER is a hierarchical protocol for automated protein structure prediction and structure-based function annotation. Starting from the amino acid sequence of target proteins, I-TASSER first generates full-length atomic structural models from multiple threading alignments and iterative structural assembly simulations followed by atomic-level structure refinement. The biological functions of the protein, including ligand-binding sites, enzyme commission number, and gene ontology terms, are then inferred from known protein function databases based on sequence and structure profile comparisons. I-TASSER is freely available as both an on-line server and a stand-alone package. This unit describes how to use the I-TASSER protocol to generate structure and function prediction and how to interpret the prediction results, as well as alternative approaches for further improving the I-TASSER modeling quality for distant-homologous and multi-domain protein targets. PMID:26678386

  18. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop™ thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas™ TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 °C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  19. Development and application of Geobacillus stearothermophilus growth model for predicting spoilage of evaporated milk.

    PubMed

    Kakagianni, Myrsini; Gougouli, Maria; Koutsoumanis, Konstantinos P

    2016-08-01

    The presence of Geobacillus stearothermophilus spores in evaporated milk constitutes an important quality problem for the milk industry. This study was undertaken to provide an approach to modelling the effect of temperature on G. stearothermophilus ATCC 7953 growth and to predicting spoilage of evaporated milk. The growth of G. stearothermophilus was monitored in tryptone soy broth at isothermal conditions (35-67 °C). The data derived were used to model the effect of temperature on G. stearothermophilus growth with a cardinal type model. The cardinal values of the model for the maximum specific growth rate were Tmin = 33.76 °C, Tmax = 68.14 °C, Topt = 61.82 °C and μopt = 2.068/h. The growth of G. stearothermophilus was assessed in evaporated milk at Topt in order to adjust the model to milk. The efficiency of the model in predicting G. stearothermophilus growth at non-isothermal conditions was evaluated by comparing predictions with observed growth under dynamic conditions, and the results showed a good performance of the model. The model was further used to predict the time-to-spoilage (tts) of evaporated milk. The spoilage of this product is caused by acid coagulation when the pH approaches a level around 5.2, eight generations after G. stearothermophilus reaches the maximum population density (Nmax). Based on the above, the tts was predicted from the growth model as the sum of the time required for the microorganism to multiply from the initial to the maximum level (t(Nmax)), plus the time required after t(Nmax) to complete eight generations. The observed tts was very close to the predicted one, indicating that the model is able to describe satisfactorily the growth of G. stearothermophilus and to provide realistic predictions of evaporated milk spoilage. Copyright © 2016 Elsevier Ltd. All rights reserved.
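
    The reported cardinal values are sufficient to reproduce the secondary model's shape. Below is a sketch assuming the widely used cardinal temperature model with inflection (Rosso et al.); whether this is the exact formulation fitted in the study is an assumption, but the parameter values are those reported above.

```python
# Cardinal temperature model with inflection, using the reported values
# for G. stearothermophilus ATCC 7953.
def mu_max(T, Tmin=33.76, Topt=61.82, Tmax=68.14, mu_opt=2.068):
    """Maximum specific growth rate (1/h) at temperature T (deg C)."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin)**2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return mu_opt * num / den

print(mu_max(61.82))   # ~2.068/h at the optimum
print(mu_max(45.0))    # much slower growth away from Topt
```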

  20. Molecular Docking for Prediction and Interpretation of Adverse Drug Reactions.

    PubMed

    Luo, Heng; Fokoue-Nkoutche, Achille; Singh, Nalini; Yang, Lun; Hu, Jianying; Zhang, Ping

    2018-05-23

    Adverse drug reactions (ADRs) present a major burden for patients and the healthcare industry. Various computational methods have been developed to predict ADRs for drug molecules. However, many of these methods require experimental or surveillance data and cannot be used when only structural information is available. We collected 1,231 small molecule drugs and 600 human proteins and utilized molecular docking to generate binding features among them. We developed machine learning models that use these docking features to make predictions for 1,533 ADRs. These models obtain an overall area under the receiver operating characteristic curve (AUROC) of 0.843 and an overall area under the precision-recall curve (AUPR) of 0.395, outperforming seven structural fingerprint-based prediction models. Using the method, we predicted skin striae for fluticasone propionate, dermatitis acneiform for mometasone, and decreased libido for irinotecan, as demonstrations. Furthermore, we analyzed the top binding proteins associated with some of the ADRs, which can help to understand and/or generate hypotheses for underlying mechanisms of ADRs. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
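
    The modelling step (docking scores in, per-ADR classifiers out) is easy to sketch. Logistic regression below is a stand-in for whatever learner the authors used, and the feature matrix is synthetic; only the drugs-by-proteins shape follows the abstract.

```python
# Hedged sketch: one binary classifier per ADR over docking-score features,
# scored by AUROC and AUPR as in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1231, 600))                  # docking scores: drugs x proteins
y = (X[:, 0] - X[:, 1] + rng.normal(size=1231) > 0).astype(int)  # one ADR label

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
p = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
print(roc_auc_score(yte, p), average_precision_score(yte, p))
```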

  1. Analysis and modelling of septic shock microarray data using Singular Value Decomposition.

    PubMed

    Allanki, Srinivas; Dixit, Madhulika; Thangaraj, Paul; Sinha, Nandan Kumar

    2017-06-01

    Microarrays being a high-throughput technique, enormous amounts of data have been generated, and there arises a need for more efficient techniques of analysis, in terms of speed and accuracy. Finding the differentially expressed genes based on just fold change and p-value might not extract all the vital biological signals that occur at lower gene expression levels. Besides this, numerous mathematical models have been generated to predict clinical outcomes from microarray data, while very few, if any, aim at predicting the vital genes that are important in disease progression. Such models help a basic researcher narrow down and concentrate on a promising set of genes, which leads to the discovery of gene-based therapies. In this article, as a first objective, we have used the lesser known and used Singular Value Decomposition (SVD) technique to build a microarray data analysis tool that works with gene expression patterns and the intrinsic structure of the data in an unsupervised manner. We have re-analysed microarray data over the clinical course of septic shock from Cazalis et al. (2014) and have shown that our proposed analysis provides additional information compared to the conventional method. As a second objective, we developed a novel mathematical model that predicts a set of vital genes in the disease progression; it works by generating samples in the continuum between health and disease, using a simple normal-distribution-based random number generator. We also verify that most of the predicted genes are indeed related to septic shock. Copyright © 2017 Elsevier Inc. All rights reserved.
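
    The SVD step at the heart of the first objective is compact enough to sketch. The decomposition below of a genes-by-samples matrix yields singular values (mode strengths) and left singular vectors (expression patterns across genes); the data and the gene-ranking rule are illustrative assumptions, not the authors' pipeline.

```python
# Generic SVD of a centred expression matrix: the modes summarise the
# intrinsic structure used in the unsupervised analysis above.
import numpy as np

X = np.random.default_rng(0).normal(size=(500, 24))   # genes x samples (synthetic)
Xc = X - X.mean(axis=1, keepdims=True)                # centre each gene
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

variance_explained = s**2 / np.sum(s**2)              # fraction per mode
top_genes = np.argsort(np.abs(U[:, 0]))[::-1][:20]    # genes loading on mode 1
```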

  2. NREL Projects Awarded More Than $3 Million to Advance Novel Solar

    Science.gov Websites

    NREL projects were awarded more than $3 million by the U.S. Department of Energy Solar Energy Technologies Office to advance predictive modeling of solar power generation as part of its Solar Forecasting 2 funding program, which advances state-of-the-art techniques for predicting solar power generation. Among the funded projects is one on grid operations, evaluating a research solution to better integrate solar power generation in grid operations.

  3. Application of data mining to the analysis of meteorological data for air quality prediction: A case study in Shenyang

    NASA Astrophysics Data System (ADS)

    Zhao, Chang; Song, Guojun

    2017-08-01

    Air pollution is one of the important factors restricting current economic development. PM2.5, a vital indicator in the measurement of air pollution, is defined as suspended particulate matter with an equivalent diameter of less than 2.5 μm, which may enter the alveoli and therefore greatly affect the human body. Meteorological factors are among the main factors affecting PM2.5 production; it is therefore essential to establish models between meteorological factors and PM2.5 for prediction. Data mining is a promising approach to modelling PM2.5 variation. Shenyang, one of the most important industrial cities in Northeast China and one with severe air pollution, is chosen as the case city. Meteorological data (wind direction, wind speed, temperature, humidity, rainfall, etc.) from 2013 to 2015 and PM2.5 concentration data are used for the prediction. In line with the requirements of the World Health Organization (WHO), three data mining models were built, whereby predictions of PM2.5 are generated directly from the meteorological data. After assessment, the random forest model appeared to offer better prediction performance than the other two. Finally, the accuracy of the generated models is analysed.
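
    The winning model is straightforward to reproduce in outline. Below is a hedged sketch of a random forest mapping the listed meteorological variables to PM2.5 concentration; the synthetic data and hyperparameters are placeholders for the Shenyang 2013-2015 observations.

```python
# Random forest regression from meteorological features to PM2.5.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))      # wind dir., wind speed, temp., humidity, rainfall
y = 60 + 12*X[:, 3] - 9*X[:, 1] + rng.normal(0, 8, 1000)  # PM2.5 (ug/m3, synthetic)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(Xtr, ytr)
print(mean_absolute_error(yte, rf.predict(Xte)))
```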

  4. Species-specific predictive models of developmental toxicity using the ToxCast chemical library

    EPA Science Inventory

    EPA's ToxCast™ project is profiling the in vitro bioactivity of chemicals to generate predictive models that correlate with observed in vivo toxicity. In vitro profiling methods are based on ToxCast data, consisting of over 600 high-throughput screening (HTS) and high-content sc...

  5. Interior Noise Predictions in the Preliminary Design of the Large Civil Tiltrotor (LCTR2)

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Cabell, Randolph H.; Boyd, David D.

    2013-01-01

    A prediction scheme was established to compute sound pressure levels in the interior of a simplified cabin model of the second generation Large Civil Tiltrotor (LCTR2) during cruise conditions, while being excited by turbulent boundary layer flow over the fuselage, or by tiltrotor blade loading and thickness noise. Finite element models of the cabin structure, interior acoustic space, and acoustically absorbent (poro-elastic) materials in the fuselage were generated and combined into a coupled structural-acoustic model. Fluctuating power spectral densities were computed according to the Efimtsov turbulent boundary layer excitation model. Noise associated with the tiltrotor blades was predicted in the time domain as fluctuating surface pressures and converted to power spectral densities at the fuselage skin finite element nodes. A hybrid finite element (FE) approach was used to compute the low frequency acoustic cabin response over the frequency range 6-141 Hz with a 1 Hz bandwidth, and the Statistical Energy Analysis (SEA) approach was used to predict the interior noise for the 125-8000 Hz one-third octave bands.

  6. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
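
    The MISMO decomposition itself (PSO aside) can be sketched briefly: the H-step horizon is split into blocks, with one multi-output learner per block. The block size below is fixed rather than PSO-tuned, and the data are synthetic.

```python
# MISMO-style multistep prediction: one multi-output model per horizon block.
import numpy as np
from sklearn.neural_network import MLPRegressor

def embed(series, lags, horizon):
    """Build lagged inputs X and H-step-ahead targets Y from a series."""
    X, Y = [], []
    for i in range(lags, len(series) - horizon + 1):
        X.append(series[i - lags:i])
        Y.append(series[i:i + horizon])
    return np.array(X), np.array(Y)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + rng.normal(0, 0.05, 600)
X, Y = embed(series, lags=12, horizon=6)

s = 2                                       # block size (PSO would tune this)
models = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=0).fit(X, Y[:, k:k + s])
          for k in range(0, 6, s)]
forecast = np.concatenate([m.predict(X[-1:]).ravel() for m in models])
```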

  7. CABS-fold: Server for the de novo and consensus-based prediction of protein structure.

    PubMed

    Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej

    2013-07-01

    The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures, ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. In addition to template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.

  8. CABS-fold: server for the de novo and consensus-based prediction of protein structure

    PubMed Central

    Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej

    2013-01-01

    The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures, ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. In addition to template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold. PMID:23748950

  9. A study on predicting network corrections in PPP-RTK processing

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Khodabandeh, Amir; Teunissen, Peter

    2017-10-01

    In PPP-RTK processing, network corrections, including the satellite clocks, the satellite phase biases and the ionospheric delays, are provided to the users to enable fast single-receiver integer ambiguity resolution. To solve the rank deficiencies in the undifferenced observation equations, estimable parameters are formed to generate a full-rank design matrix. In this contribution, we first discuss the interpretation of the estimable parameters without and with a dynamic satellite clock model incorporated in a Kalman filter during the network processing. The functionality of the dynamic satellite clock model is tested in the PPP-RTK processing. Due to the latency generated by the network processing and data transfer, the network corrections are delayed for real-time user processing. To bridge these latencies, we discuss and compare two prediction approaches making use of the network corrections without and with the dynamic satellite clock model, respectively. The first prediction approach is based on polynomial fitting of the estimated network parameters, while the second approach directly follows the dynamic model in the Kalman filter of the network processing and utilises the satellite clock drifts estimated in the network processing. Using 1 Hz data from two networks in Australia, the influences of the two prediction approaches on the user positioning results are analysed and compared for latencies ranging from 3 to 10 s. The accuracy of the positioning results decreases with increasing latency of the network products. For a latency of 3 s, the RMS values of the horizontal and the vertical coordinates (with respect to the ground truth) do not differ much between the two prediction approaches. For a latency of 10 s, the prediction approach making use of the satellite clock model generates slightly better positioning results, with differences in RMS at the mm level. Further advantages and disadvantages of both prediction approaches are also discussed in this contribution.
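
    The first (polynomial) prediction approach reduces to fitting and extrapolating a low-order polynomial over a short window of filter estimates. In the sketch below, the window length, polynomial order and synthetic clock series are assumptions for illustration.

```python
# Polynomial extrapolation of an estimated satellite clock correction
# across the network latency.
import numpy as np

def predict_correction(t, clk, latency, order=2, window=30):
    """Fit the last `window` epochs and extrapolate `latency` s ahead."""
    coef = np.polyfit(t[-window:], clk[-window:], deg=order)
    return np.polyval(coef, t[-1] + latency)

t = np.arange(0.0, 60.0, 1.0)                    # 1 Hz network solution epochs
clk = 1e-9 * (3.0 + 0.02*t + 1e-4*t**2)          # synthetic clock correction (s)
print(predict_correction(t, clk, latency=10.0))
```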

  10. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
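
    The essence of the technique is to orthogonalise candidate modelling functions built from the independent variables, so least-squares coefficients can be estimated independently and terms retained or dropped one at a time. The sketch below uses QR factorisation for the orthogonalisation and synthetic wind-tunnel-like data; the paper's specific orthogonalisation scheme and prediction error metric are not reproduced.

```python
# Orthogonal-function response surface sketch: candidate terms are
# orthogonalised, then fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-10, 10, 200)                  # angle of attack (deg)
mach = rng.uniform(0.2, 0.8, 200)
CL = 0.1*alpha + 0.02*alpha*mach + rng.normal(0, 0.01, 200)   # synthetic response

# Candidate modelling functions built from the independent variables.
F = np.column_stack([np.ones_like(alpha), alpha, mach, alpha*mach, alpha**2])
Q, R = np.linalg.qr(F)                             # orthonormal basis, same span
coef = Q.T @ CL                                    # independent LS coefficients
fit = Q @ coef
rms_fit_error = np.sqrt(np.mean((CL - fit)**2))    # input to a prediction error metric
```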

  11. Modeling the growth of Listeria monocytogenes in mold-ripened cheeses.

    PubMed

    Lobacz, Adriana; Kowalik, Jaroslaw; Tarczynska, Anna

    2013-06-01

    This study presents possible applications of predictive microbiology to model the safety of mold-ripened cheeses with respect to bacteria of the species Listeria monocytogenes during (1) the ripening of Camembert cheese, (2) cold storage of Camembert cheese at temperatures ranging from 3 to 15°C, and (3) cold storage of blue cheese at temperatures ranging from 3 to 15°C. The primary models used in this study, the Baranyi model and the modified Gompertz function, were fitted to growth curves. The Baranyi model yielded the most accurate goodness of fit, and the growth rates generated by this model were used for secondary modeling (Ratkowsky simple square root and polynomial models). The polynomial model more accurately predicted the influence of temperature on the growth rate, reaching adjusted coefficients of multiple determination of 0.97 and 0.92 for Camembert and blue cheese, respectively. The observed growth rates of L. monocytogenes in mold-ripened cheeses were compared with simulations run with the Pathogen Modeling Program (PMP 7.0, USDA, Wyndmoor, PA) and ComBase Predictor (Institute of Food Research, Norwich, UK); however, the latter predictions proved to be consistently overestimated and contained a significant error level. In addition, a validation process using independent data on dairy products from the ComBase database (www.combase.cc) was performed. In conclusion, it was found that L. monocytogenes grows much faster in Camembert than in blue cheese. Both the Baranyi and Gompertz models described this phenomenon accurately, although the Baranyi model contained a smaller error. Secondary modeling and further validation of the generated models highlighted the issue of usability and applicability of predictive models in the food processing industry by elaborating models targeted at a specific product or a group of similar products. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
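
    The Ratkowsky secondary model named above has a one-line form, sqrt(mu_max) = b (T - Tmin). The coefficients in the sketch below are illustrative placeholders, not the values fitted in the study.

```python
# Ratkowsky simple square-root secondary model for growth rate vs temperature.
import numpy as np

def ratkowsky_mu(T, b=0.028, Tmin=-1.5):
    """Growth rate (1/h) at temperature T (deg C); zero at or below Tmin."""
    T = np.asarray(T, dtype=float)
    return np.where(T > Tmin, (b * (T - Tmin))**2, 0.0)

print(ratkowsky_mu([3, 9, 15]))   # growth accelerates with storage temperature
```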

  12. Improving CSF biomarker accuracy in predicting prevalent and incident Alzheimer disease

    PubMed Central

    Fagan, A.M.; Williams, M.M.; Ghoshal, N.; Aeschleman, M.; Grant, E.A.; Marcus, D.S.; Mintun, M.A.; Holtzman, D.M.; Morris, J.C.

    2011-01-01

    Objective: To investigate factors, including cognitive and brain reserve, that may independently predict prevalent and incident dementia of the Alzheimer type (DAT) and to determine whether inclusion of identified factors increases the predictive accuracy of the CSF biomarkers Aβ42, tau, ptau181, tau/Aβ42, and ptau181/Aβ42. Methods: Logistic regression identified variables that predicted prevalent DAT when considered together with each CSF biomarker in a cross-sectional sample of 201 participants with normal cognition and 46 with DAT. The area under the receiver operating characteristic curve (AUC) from the resulting model was compared with the AUC generated using the biomarker alone. In a second sample with normal cognition at baseline and longitudinal data available (n = 213), Cox proportional hazards models identified variables that predicted incident DAT together with each biomarker, and the models' concordance probability estimate (CPE) was compared to the CPE generated using the biomarker alone. Results: APOE genotype including an ε4 allele, male gender, and smaller normalized whole brain volumes (nWBV) were cross-sectionally associated with DAT when considered together with every biomarker. In the longitudinal sample (mean follow-up = 3.2 years), 14 participants (6.6%) developed DAT. Older age predicted a faster time to DAT in every model, and greater education predicted a slower time in 4 of 5 models. Inclusion of ancillary variables resulted in better cross-sectional prediction of DAT for all biomarkers (p < 0.0021), and better longitudinal prediction for 4 of 5 biomarkers (p < 0.0022). Conclusions: The predictive accuracy of CSF biomarkers is improved by including age, education, and nWBV in analyses. PMID:21228296
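
    The core comparison above (biomarker alone versus biomarker plus ancillary variables) can be illustrated with a toy logistic-regression example; the data below are synthetic stand-ins, and the AUCs are computed in-sample for brevity:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 250
    dat = rng.integers(0, 2, n)                      # outcome: DAT vs normal cognition
    biomarker = 1.0 * dat + rng.standard_normal(n)   # CSF biomarker, shifted in cases
    age = 70 + 3 * dat + 5 * rng.standard_normal(n)
    nwbv = 0.75 - 0.02 * dat + 0.03 * rng.standard_normal(n)

    X1 = biomarker.reshape(-1, 1)                    # biomarker alone
    X2 = np.column_stack([biomarker, age, nwbv])     # biomarker plus covariates

    m1 = LogisticRegression(max_iter=1000).fit(X1, dat)
    m2 = LogisticRegression(max_iter=1000).fit(X2, dat)
    auc1 = roc_auc_score(dat, m1.predict_proba(X1)[:, 1])
    auc2 = roc_auc_score(dat, m2.predict_proba(X2)[:, 1])
    print(f"biomarker alone AUC = {auc1:.3f}; with covariates AUC = {auc2:.3f}")
    ```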

  13. Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.

    PubMed

    Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F

    2017-10-01

    Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations, based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre-/post-test design in which students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for use of sketching in supporting STEM learning are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  14. Review and evaluation of recent developments in Melick inlet dynamic flow distortion prediction and computer program documentation and user's manual for estimating maximum instantaneous inlet flow distortion from steady-state total pressure measurements with full, limited, or no dynamic data

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Dennon, S. R.

    1986-01-01

    A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments that lie along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.

  15. Highway traffic noise prediction based on GIS

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghua; Qin, Qiming

    2014-05-01

    Before building a new road, we need to predict the traffic noise that vehicles will generate. Traditional traffic noise prediction methods are tied to specific locations; they are time-consuming and costly, and their results cannot be visualized. A Geographical Information System (GIS) not only removes the need for manual data processing, but also yields noise values at any point. This paper selected a road segment from Wenxi to Heyang. Based on the geographical overview of the study area and a comparison of several models, we combined the JTG B03-2006 model and the HJ2.4-2009 model to predict the traffic noise as circumstances required. Finally, we interpolated the noise values at each prediction point and generated noise contours. By overlaying the village data on the noise contour layer, we obtained thematic maps. The use of GIS for road traffic noise prediction greatly assists decision-makers because of its spatial analysis functions and visualization capabilities. Districts where noise is excessive can be identified clearly, making it convenient to optimize the road alignment and take noise reduction measures such as installing sound barriers and relocating villages.

  16. Counteracting Obstacles with Optimistic Predictions

    ERIC Educational Resources Information Center

    Zhang, Ying; Fishbach, Ayelet

    2010-01-01

    This research tested for counteractive optimism: a self-control strategy of generating optimistic predictions of future goal attainment in order to overcome anticipated obstacles in goal pursuit. In support of the counteractive optimism model, participants in 5 studies predicted better performance, more time invested in goal activities, and lower…

  17. Georgia Tech Vertical Lift Research Center of Excellence

    DTIC Science & Technology

    2017-12-14

    Technical project summaries include Task 1.1 (GT-1): Next Generation VABS for More Realistic Modeling of Composite Blades and a Methodology for the Prediction of Rotor Blade Ice Formation and Shedding, along with software disclosures and technology transfer efforts.

  18. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
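
    A hedged sketch of the model-free protocol described above; synthetic features stand in for the PPMI imaging, genetics and clinical data, and class weighting stands in for the authors' cohort-rebalancing step:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n, p = 600, 20
    X = rng.standard_normal((n, p))                  # stand-in multi-source features
    # Imbalanced binary outcome (roughly 15% positive), mimicking unequal cohorts.
    y = (X[:, 0] + 0.8 * X[:, 1] + 0.5 * rng.standard_normal(n) > 1.5).astype(int)

    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                 random_state=0)
    acc = cross_val_score(clf, X, y, cv=5)                     # n-fold cross-validation
    sens = cross_val_score(clf, X, y, cv=5, scoring="recall")  # sensitivity
    print(f"accuracy = {acc.mean():.3f}, sensitivity = {sens.mean():.3f}")
    ```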

  19. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    PubMed

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
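
    The contrast between the two approaches can be sketched on synthetic data (not MIMIC-II): a k-nearest-neighbour classifier plays the role of the patient-similarity approach, and logistic regression the role of the feature-based predictive model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(4)
    X = rng.standard_normal((2000, 15))               # stand-in EMR-derived features
    y = (X[:, :3].sum(axis=1) + rng.standard_normal(2000) > 1.5).astype(int)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=25).fit(Xtr, ytr)   # patient similarity
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)       # predictive model
    for name, m in [("similarity (kNN)", knn), ("model (logistic)", lr)]:
        auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
    ```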

  20. Utilizing high throughput screening data for predictive toxicology models: protocols and application to MLSCN assays

    NASA Astrophysics Data System (ADS)

    Guha, Rajarshi; Schürer, Stephan C.

    2008-06-01

    Computational toxicology is emerging as an encouraging alternative to experimental testing. The Molecular Libraries Screening Center Network (MLSCN) as part of the NIH Molecular Libraries Roadmap has recently started generating large and diverse screening datasets, which are publicly available in PubChem. In this report, we investigate various aspects of developing computational models to predict cell toxicity based on cell proliferation screening data generated in the MLSCN. By capturing feature-based information in those datasets, such predictive models would be useful in evaluating cell-based screening results in general (for example from reporter assays) and could be used as an aid to identify and eliminate potentially undesired compounds. Specifically we present the results of random forest ensemble models developed using different cell proliferation datasets and highlight protocols to take into account their extremely imbalanced nature. Depending on the nature of the datasets and the descriptors employed we were able to achieve percentage correct classification rates between 70% and 85% on the prediction set, though the accuracy rate dropped significantly when the models were applied to in vivo data. In this context we also compare the MLSCN cell proliferation results with animal acute toxicity data to investigate to what extent animal toxicity can be correlated and potentially predicted by proliferation results. Finally, we present a visualization technique that allows one to compare a new dataset to the training set of the models to decide whether the new dataset may be reliably predicted.
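
    The closing idea, comparing a new dataset to the models' training set, can be sketched in several ways; one simple stand-in (an assumption, not necessarily the authors' visualization technique) is to project both datasets into the training set's principal-component space and flag new points that fall outside the bulk of the training distribution:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    train = rng.standard_normal((500, 40))         # training-set descriptors
    new = rng.standard_normal((100, 40)) + 1.5     # new dataset, deliberately shifted

    pca = PCA(n_components=2).fit(train)
    train_2d, new_2d = pca.transform(train), pca.transform(new)

    # Flag new compounds beyond the 95th-percentile radius of the training cloud.
    centre = train_2d.mean(axis=0)
    radius = np.percentile(np.linalg.norm(train_2d - centre, axis=1), 95)
    outside = np.linalg.norm(new_2d - centre, axis=1) > radius
    print(f"{outside.mean():.0%} of new compounds fall outside the training domain")
    ```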

  1. Mining data from CFD simulation for aneurysm and carotid bifurcation models.

    PubMed

    Miloš, Radović; Dejan, Petrović; Nenad, Filipović

    2011-01-01

    Arterial geometry variability is present both within and across individuals. To analyze the influence of geometric parameters, blood density, dynamic viscosity and blood velocity on the wall shear stress (WSS) distribution in the human carotid artery bifurcation and aneurysm, computer simulations were run to generate data pertaining to this phenomenon. In our work we evaluate two prediction models for capturing these relationships: a neural network model and a k-nearest neighbor model. The results revealed that both models have high predictive ability for this task. The achieved results represent progress towards real-time assessment of stroke risk from given patient data.

  2. The Role of Multimodel Combination in Improving Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Li, W.

    2008-12-01

    Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions have improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated using two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with that of the individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions yield improvement, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with homoscedastic or heteroscedastic errors. Results from the study show that, under almost no model error, streamflow simulated from the individual models performed better than the multimodel combination. Under increased model error, the multimodel consistently performed better than the single-model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach predicts the observed streamflow better than the single-model predictions.
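
    The weighting idea above can be sketched with a least-squares combination of two synthetic model simulations against observed flow (all series below are made up; the actual algorithm conditions the weights on the predictor state, which this sketch omits):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    obs = 50 + 10 * np.sin(np.linspace(0, 6, 120)) + rng.standard_normal(120)
    model_a = obs + 3.0 * rng.standard_normal(120)        # stand-in 'abcd' simulation
    model_b = obs + 2.0 + 2.0 * rng.standard_normal(120)  # stand-in VIC simulation

    M = np.column_stack([model_a, model_b])
    w, *_ = np.linalg.lstsq(M, obs, rcond=None)           # least-squares weights
    combo = M @ w

    rmse = lambda x: np.sqrt(np.mean((x - obs) ** 2))
    print(f"weights = {np.round(w, 3)}; RMSE a/b/combined = "
          f"{rmse(model_a):.2f} / {rmse(model_b):.2f} / {rmse(combo):.2f}")
    ```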

  3. Investigation of Next-Generation Earth Radiation Budget Radiometry

    NASA Technical Reports Server (NTRS)

    Coffey, Katherine L.; Mahan, J. R.

    1999-01-01

    The current effort addresses two issues important to the research conducted by the Thermal Radiation Group at Virginia Tech. The first research topic involves the development of a method which can properly model the diffraction of radiation as it enters an instrument aperture. The second topic involves the study of a potential next-generation space-borne radiometric instrument concept. Presented are multiple modeling efforts to describe the diffraction of monochromatic radiant energy passing through an aperture for use in the Monte-Carlo ray-trace environment. Described in detail is a deterministic model based upon Heisenberg's uncertainty principle and the particle theory of light. This method is applicable to either Fraunhofer or Fresnel diffraction situations, but is incapable of predicting the secondary fringes in a diffraction pattern. Also presented is a second diffraction model, based on the Huygens-Fresnel principle with a correcting obliquity factor. This model is useful for predicting Fraunhofer diffraction, and can predict the secondary fringes because it keeps track of phase. NASA is planning for the next-generation of instruments to follow CERES (Clouds and the Earth's Radiant Energy System), an instrument which measures components of the Earth's radiant energy budget in three spectral bands. A potential next-generation concept involves modification of the current CERES instrument to measure in a larger number of wavelength bands. This increased spectral partitioning would be achieved by the addition of filters and detectors to the current CERES geometry. The capacity of the CERES telescope to serve for this purpose is addressed in this thesis.

  4. Predicting synchrony in heterogeneous pulse coupled oscillators

    NASA Astrophysics Data System (ADS)

    Talathi, Sachin S.; Hwang, Dong-Uk; Miliotis, Abraham; Carney, Paul R.; Ditto, William L.

    2009-08-01

    Pulse coupled oscillators (PCOs) represent a ubiquitous model for a number of physical and biological systems. Phase response curves (PRCs) provide a general mathematical framework to analyze patterns of synchrony generated within these models. A general theoretical approach to account for the nonlinear contributions from higher-order PRCs in the generation of synchronous patterns by the PCOs is still lacking. Here, by considering a prototypical example of a PCO network, i.e., two synaptically coupled neurons, we present a general theory that extends beyond the weak-coupling approximation, to account for higher-order PRC corrections in the derivation of an approximate discrete map, the stable fixed point of which can predict the domain of 1:1 phase-locked synchronous states generated by the PCO network.

  5. Generation Mechanism and Prediction Model for Low Frequency Noise Induced by Energy Dissipating Submerged Jets during Flood Discharge from a High Dam

    PubMed Central

    Lian, Jijian; Zhang, Wenjiao; Guo, Qizhong; Liu, Fang

    2016-01-01

    As flood water is discharged from a high dam, low frequency (i.e., lower than 10 Hz) noise (LFN) associated with air pulsation is generated and propagated in the surrounding areas, causing environmental problems such as vibrations of windows and doors and discomfort of residents and construction workers. To study the generation mechanisms and key influencing factors of LFN induced by energy dissipation through submerged jets at a high dam, detailed prototype observations and analyses of LFN are conducted. The discharge flow field is simulated using a gas-liquid turbulent flow model, and the vorticity fluctuation characteristics are then analyzed. The mathematical model for the LFN intensity is developed based on vortex sound theory and a turbulent flow model, verified by prototype observations. The model results reveal that the vorticity fluctuation in strong shear layers around the high-velocity submerged jets is highly correlated with the on-site LFN, and the strong shear layers are the main regions of acoustic source for the LFN. In addition, the predicted and observed magnitudes of LFN intensity agree quite well. This is the first time that the LFN intensity has been shown to be able to be predicted quantitatively. PMID:27314374

  6. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, to be used together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validations. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of the chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to the data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
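
    The validation protocol described above (repeated cross-validation plus permutation testing for chance correlation) can be sketched with scikit-learn; a random forest stands in for Decision Forest, and the data are synthetic:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import permutation_test_score

    # Synthetic stand-in for the EADB compound descriptors and binding labels.
    X, y = make_classification(n_samples=300, n_features=30, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)

    # Cross-validated accuracy, plus accuracy on label-permuted data; a small
    # p-value indicates the model is unlikely to have arisen by chance.
    score, perm_scores, pvalue = permutation_test_score(
        clf, X, y, cv=5, n_permutations=100, random_state=0)
    print(f"CV accuracy = {score:.3f}, permutation p-value = {pvalue:.3f}")
    ```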

  7. Quantitative prediction of ionization effect on human skin permeability.

    PubMed

    Baba, Hiromi; Ueno, Yusuke; Hashida, Mitsuru; Yamashita, Fumiyoshi

    2017-04-30

    Although skin permeability of an active ingredient can be severely affected by its ionization in a dose solution, most of the existing prediction models cannot predict such impacts. To provide reliable predictors, we curated a novel large dataset of in vitro human skin permeability coefficients for 322 entries comprising chemically diverse permeants whose ionization fractions can be calculated. Subsequently, we generated thousands of computational descriptors, including LogD (octanol-water distribution coefficient at a specific pH), and analyzed the dataset using nonlinear support vector regression (SVR) and Gaussian process regression (GPR) combined with greedy descriptor selection. The SVR model was slightly superior to the GPR model, with externally validated squared correlation coefficient, root mean square error, and mean absolute error values of 0.94, 0.29, and 0.21, respectively. These models indicate that LogD is effective for a comprehensive prediction of ionization effects on skin permeability. In addition, the proposed models satisfied the statistical criteria endorsed in recent model validation studies. These models can evaluate virtually generated compounds at any pH; therefore, they can be used for high-throughput evaluations of numerous active ingredients and optimization of their skin permeability with respect to permeant ionization. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A seismoacoustic study of the 2011 January 3 Circleville earthquake

    NASA Astrophysics Data System (ADS)

    Arrowsmith, Stephen J.; Burlacu, Relu; Pankow, Kristine; Stump, Brian; Stead, Richard; Whitaker, Rod; Hayward, Chris

    2012-05-01

    We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated to the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3, Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly not predicted at the remaining three arrays. Durations are typically predicted to within a factor of 2, and in some cases much better. These results suggest that measured infrasound from the Circleville earthquake is consistent with the generation of infrasound from body waves in the epicentral region.

  9. Integrating Unified Gravity Wave Physics into the NOAA Next Generation Global Prediction System

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Yudin, V.; Fuller-Rowell, T. J.; Akmaev, R. A.

    2017-12-01

    The Unified Gravity Wave Physics (UGWP) project for the Next Generation Global Prediction System (NGGPS) is a NOAA collaborative effort between the National Centers for Environmental Prediction (NCEP), Environmental Modeling Center (EMC) and the University of Colorado, Cooperative Institute for Research in Environmental Sciences (CU-CIRES) to support upgrades and improvements of GW dynamics (resolved scales) and physics (sub-grid scales) in the NOAA Environmental Modeling System (NEMS)†. As envisioned, the global climate, weather and space weather models of NEMS will substantially improve their predictions and forecasts with the resolution-sensitive (scale-aware) formulations planned under the UGWP framework for both orographic and non-stationary waves. In particular, the planned improvements for the Global Forecast System (GFS) model of NEMS are: calibration of model physics for higher vertical and horizontal resolution and an extended vertical range of simulations, upgrades to GW schemes, including the turbulent heating and eddy mixing due to wave dissipation and breaking, and representation of the internally-generated QBO. The main priority of the UGWP project is unified parameterization of orographic and non-orographic GW effects including momentum deposition in the middle atmosphere and turbulent heating and eddies due to wave dissipation and breaking. The latter effects are not currently represented in NOAA atmosphere models. The team has tested and evaluated four candidate GW solvers, integrating the selected GW schemes into the NGGPS model. Our current and planned work is to implement the UGWP schemes in the first available GFS/FV3 (open FV3) configuration, including the adapted GFDL modification for sub-grid orography in GFS. Initial global model results will be shown for the operational and research GFS configuration for spectral and FV3 dynamical cores. †http://www.emc.ncep.noaa.gov/index.php?branch=NEMS

  10. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems.

    PubMed

    Joshi, Neelendra K; Rajotte, Edwin G; Naithani, Kusum J; Krawczyk, Greg; Hull, Larry A

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight, and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-days accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure-baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests including CM across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different than the flight patterns predicted by the currently used CM model (i.e., PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest potential impact of orchard environment and crop management practices on CM biology.

  11. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems

    PubMed Central

    Joshi, Neelendra K.; Rajotte, Edwin G.; Naithani, Kusum J.; Krawczyk, Greg; Hull, Larry A.

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight, and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-days accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure-baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests including CM across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different than the flight patterns predicted by the currently used CM model (i.e., PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest potential impact of orchard environment and crop management practices on CM biology. PMID:27713702
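
    A minimal sketch of the degree-day logistic flight-phenology model used in both records above, fitted to made-up capture data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    dd = np.array([100, 200, 300, 400, 500, 600, 700], float)  # degree-day accumulation
    pct = np.array([2, 8, 25, 55, 80, 93, 98], float)          # cumulative % capture

    def logistic(dd, a, b):
        # a: degree-days at 50% cumulative capture; b: slope parameter.
        return 100.0 / (1.0 + np.exp(-(dd - a) / b))

    (a, b), _ = curve_fit(logistic, dd, pct, p0=(400.0, 80.0))
    print(f"50% of flight at {a:.0f} degree-days (slope parameter b = {b:.0f})")
    ```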

  12. A Study of Water Wave Wakes of Washington State Ferries

    NASA Astrophysics Data System (ADS)

    Perfect, Bradley; Riley, James; Thomson, Jim; Fay, Endicott

    2015-11-01

    Washington State Ferries (WSF) operates a ferry route that travels through a 600m-wide channel called Rich Passage. Concerns of shoreline erosion in Rich Passage have prompted this study of the generation and propagation of surface wave wakes caused by WSF vessels. The problem was addressed in three ways: analytically, using an extension of the Kelvin wake model by Darmon et al. (J. Fluid Mech., 738, 2014); computationally, employing a RANS Navier-Stokes model in the CFD code OpenFOAM which uses the Volume of Fluid method to treat the free surface; and with field data taken in Sept-Nov, 2014, using a suite of surface wave measuring buoys. This study represents one of the first times that model predictions of ferry boat-generated wakes can be tested against measurements in open waters. The results of the models and the field data are evaluated using direct comparison of predicted and measured surface wave height as well as other metrics. Furthermore, the model predictions and field measurements suggest differences in wake amplitudes for different class vessels. Finally, the relative strengths and weaknesses of each prediction method as well as of the field measurements will be discussed. Washington State Department of Transportation.

  13. Realization of BP neural network modeling based on NOX of CFB boiler in DCS

    NASA Astrophysics Data System (ADS)

    Bai, Jianyun; Zhu, Zhujun; Wang, Qi; Ying, Jiang

    2018-02-01

    In CFB boilers equipped with an SNCR denitrification system, the NOX mass concentration is difficult to predict with a conventional mathematical model, and the step-response model obtained from step-disturbance tests of ammonia injection is inaccurate. This paper presents two BP neural network models: one relating the generated NOX mass concentration to the load and the air-to-coal ratio when the SNCR system is not in use, and one relating the measured NOX mass concentration to the load, the air-to-coal ratio and the amount of injected ammonia when the SNCR system is in use. On-line prediction of the NOX mass concentration, and of the NOX mass concentration remaining after the reduction reaction, was then realized in the DCS. Practical results show that the hourly average error between the generated and the predicted NOX mass concentration is within 10 mg/Nm3, and that the hourly average error between the measured and the predicted concentration after the reduction reaction is within 2 mg/Nm3; both are within the acceptable error range, which provides a more accurate model for automatic NOX control of the SNCR system.
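
    A hedged stand-in for the first of the two networks described above (architecture, data and units are assumed for illustration): a small feed-forward network predicting generated NOX mass concentration from load and air-to-coal ratio.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    load = rng.uniform(0.5, 1.0, 400)                 # unit load fraction (assumed)
    air_coal = rng.uniform(6.0, 9.0, 400)             # air-to-coal ratio (assumed)
    nox = 120 + 80 * load + 15 * (air_coal - 7.5) ** 2 + 5 * rng.standard_normal(400)

    X = np.column_stack([load, air_coal])
    net = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    net.fit(X, nox)
    print("predicted NOX at load=0.8, air/coal=7.5:",
          round(float(net.predict([[0.8, 7.5]])[0]), 1), "mg/Nm3")
    ```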

  14. Solar prediction analysis

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B.

    1992-01-01

    Solar activity prediction is essential to the definition of orbital design and operational environments for space flight. This task provides the necessary research to better understand solar predictions being generated by the solar community and to develop improved solar prediction models. The contractor shall provide the necessary manpower and facilities to perform the following tasks: (1) review, evaluate, and assess the time evolution of the solar cycle to provide probable limits of solar cycle behavior near maximum and during the decline of solar cycle 22, as well as the forecasts being provided by the solar community and the techniques being used to generate these forecasts; and (2) develop and refine prediction techniques for short-term solar behavior and flare prediction within solar active regions, with special emphasis on the correlation of magnetic shear with flare occurrence.

  15. Advanced Stirling Radioisotope Generator Thermal Power Model in Thermal Desktop SINDA/FLUINT Analyzer

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Fabanich, William A.; Schmitz, Paul C.

    2012-01-01

    This paper presents a three-dimensional Advanced Stirling Radioisotope Generator (ASRG) thermal power model that was built using the Thermal Desktop SINDA/FLUINT thermal analyzer. The model was correlated with ASRG engineering unit (EU) test data and ASRG flight unit predictions from Lockheed Martin's Ideas TMG thermal model. ASRG flight unit performance was predicted with this model under varying (1) ASC hot-end temperatures, (2) ambient temperatures, and (3) years of mission, to capture fuel decay of the general purpose heat source. The results were compared with those reported by Lockheed Martin and showed good agreement. In addition, the model was used to study the performance of the ASRG flight unit for operations on the ground and on the surface of Titan, and the concept of using gold film to reduce thermal loss through insulation was investigated.

  16. Modeling and predictions of biphasic mechanosensitive cell migration altered by cell-intrinsic properties and matrix confinement.

    PubMed

    Pathak, Amit

    2018-04-12

    Motile cells sense the stiffness of their extracellular matrix (ECM) through adhesions and respond by modulating the generated forces, which in turn lead to varying mechanosensitive migration phenotypes. Through modeling and experiments, cell migration speed is known to vary with matrix stiffness in a biphasic manner, with optimal motility at an intermediate stiffness. Here, we present a two-dimensional cell model defined by nodes and elements, integrated with subcellular modeling components corresponding to mechanotransductive adhesion formation, force generation, protrusions and node displacement. On 2D matrices, our calculations reproduce the classic biphasic dependence of migration speed on matrix stiffness and predict that cell types with higher force-generating ability do not slow down on very stiff matrices, thus disabling the biphasic response. We also predict that cell types defined by lower number of total receptors require stiffer matrices for optimal motility, which also limits the biphasic response. For a cell type with robust biphasic migration on 2D surface, simulations in channel-like confined environments of varying width and height predict faster migration in more confined matrices. Simulations performed in shallower channels predict that the biphasic mechanosensitive cell migration response is more robust on 2D micro-patterns as compared to the channel-like 3D confinement. Thus, variations in the dimensionality of matrix confinement alters the way migratory cells sense and respond to the matrix stiffness. Our calculations reveal new phenotypes of stiffness- and topography-sensitive cell migration that critically depend on both cell-intrinsic and matrix properties. These predictions may inform our understanding of various mechanosensitive modes of cell motility that could enable tumor invasion through topographically heterogeneous microenvironments. © 2018 IOP Publishing Ltd.

  17. Evolving hard problems: Generating human genetics datasets with a complex etiology.

    PubMed

    Himmelstein, Daniel S; Greene, Casey S; Moore, Jason H

    2011-07-07

    A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts, one for each independent run of our algorithm. In each run the predictiveness of single genetic variants and of pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.

  18. A Population Genetics Model of Marker-Assisted Selection

    PubMed Central

    Luo, Z. W.; Thompson, R.; Woolliams, J. A.

    1997-01-01

    A deterministic two-loci model was developed to predict genetic response to marker-assisted selection (MAS) in one generation and in multiple generations. Formulas were derived to relate linkage disequilibrium in a population to the proportion of additive genetic variance used by MAS, and in turn to an extra improvement in genetic response over phenotypic selection. Predictions of the response were compared to those predicted by using an infinite-loci model and the factors affecting efficiency of MAS were examined. Theoretical analyses of the present study revealed the nonlinearity between the selection intensity and genetic response in MAS. In addition to the heritability of the trait and the proportion of the marker-associated genetic variance, the frequencies of the selectively favorable alleles at the two loci, one marker and one quantitative trait locus, were found to play an important role in determining both the short- and long-term efficiencies of MAS. The evolution of linkage disequilibrium and thus the genetic response over several generations were predicted theoretically and examined by simulation. MAS dissipated the disequilibrium more quickly than drift alone. In some cases studied, the rate of dissipation was as large as that to be expected in the circumstance where the true recombination fraction was increased by three times and selection was absent. PMID:9215918

  19. RFI Math Model programs for predicting intermodulation interference

    NASA Technical Reports Server (NTRS)

    Stafford, J. M.

    1974-01-01

    Receivers operating on a space vehicle or an aircraft having many on-board transmitters are subject to intermodulation interference from mixing in the transmitting antenna systems, the external environment, or the receiver front-ends. This paper presents the techniques utilized in the RFI Math Model computer programs that were developed to aid in the prevention of interference by predicting problem areas prior to occurrence. Frequencies and amplitudes of possible intermodulation products generated in the external environment are calculated and compared to receiver sensitivities. Intermodulation products generated in receivers are evaluated to determine the adequacy of preselector rejection.
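
    The screening logic described above can be sketched by enumerating intermodulation-product frequencies |m·f1 ± n·f2| up to a chosen order and flagging any that fall within a receiver passband; the transmitter and receiver frequencies below are hypothetical.

    ```python
    # Enumerate intermodulation products of two transmitters and flag those
    # falling inside a receiver band (frequencies and bandwidth are assumed).
    def intermod_products(f1, f2, max_order=5):
        """Yield (order, frequency) for |m*f1 +/- n*f2| with m + n <= max_order."""
        for m in range(max_order + 1):
            for n in range(max_order + 1 - m):
                if m == 0 and n == 0:
                    continue
                for sign in (+1, -1):
                    yield m + n, abs(m * f1 + sign * n * f2)

    # Hypothetical transmitters at 121.5 and 243.0 MHz; receiver at 364.5 MHz.
    rx, bw = 364.5e6, 0.2e6
    for order, f in sorted(set(intermod_products(121.5e6, 243.0e6))):
        if abs(f - rx) <= bw / 2:
            print(f"order-{order} product at {f/1e6:.1f} MHz falls in the receiver band")
    ```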

  20. Relation between social information processing and intimate partner violence in dating couples.

    PubMed

    Setchell, Sarah; Fritz, Patti Timmons; Glasgow, Jillian

    2017-07-01

    We used couple-level data to predict physical acts of intimate partner violence (IPV) from self-reported negative emotions and social information-processing (SIP) abilities among 100 dating couples (n = 200; mean age = 21.45 years). Participants read a series of hypothetical conflict situation vignettes and responded to questionnaires to assess negative emotions and various facets of SIP including attributions for partner behavior, generation of response alternatives, and response selection. We conducted a series of negative binomial mixed-model regressions based on the actor-partner interdependence model (APIM; Kenny, Kashy, & Cook, 2006, Dyadic data analysis. New York, NY: Guilford Press). There were significant results for the response generation and negative emotion models. Participants who generated fewer coping response alternatives were at greater risk of victimization (actor effect). Women were at greater risk of victimization if they had partners who generated fewer coping response alternatives (sex by partner interaction effect). Generation of less competent coping response alternatives predicted greater risk of perpetration among men, whereas generation of more competent coping response alternatives predicted greater risk of victimization among women (sex by actor interaction effects). Two significant actor by partner interaction effects were found for the negative emotion models. Participants who reported discrepant levels of negative emotions from their partners were at greatest risk of perpetration. Participants who reported high levels of negative emotions were at greatest risk of victimization if they had partners who reported low levels of negative emotions. This research has implications for researchers and clinicians interested in addressing the problem of IPV. Aggr. Behav. 43:329-341, 2017. © 2016 Wiley Periodicals, Inc.

  1. Life Prediction Model for Grid-Connected Li-ion Battery Energy Storage System: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Saxon, Aron R; Keyser, Matthew A

    Lithium-ion (Li-ion) batteries are being deployed on the electrical grid for a variety of purposes, such as to smooth fluctuations in solar renewable power generation. The lifetime of these batteries will vary depending on their thermal environment and how they are charged and discharged. Optimal utilization of a battery over its lifetime requires characterization of its performance degradation under different storage and cycling conditions. Aging tests were conducted on commercial graphite/nickel-manganese-cobalt (NMC) Li-ion cells. A general lifetime prognostic model framework is applied to model changes in capacity and resistance as the battery degrades. Across 9 aging test conditions from 0°C to 55°C, the model predicts capacity fade with 1.4 percent RMS error and resistance growth with 15 percent RMS error. The model, recast in state variable form with 8 states representing separate fade mechanisms, is used to extrapolate lifetime for example applications of the energy storage system integrated with renewable photovoltaic (PV) power generation.
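
    A generic sketch of a lifetime prognostic framework of the kind described above (functional forms and parameter values are assumptions for illustration, not the NREL model): relative capacity modelled as square-root-of-time calendar fade plus throughput-driven cycling fade, scaled by an Arrhenius temperature factor.

    ```python
    import numpy as np

    def capacity(t_days, n_cycles, temp_c, a=2e-3, b=5e-5, Ea=0.5):
        """Relative capacity after t_days of storage and n_cycles of cycling.

        a, b and Ea (activation energy, eV) are illustrative parameters;
        the Arrhenius factor accelerates fade above the 25 C reference.
        """
        k_B = 8.617e-5                                  # Boltzmann constant (eV/K)
        arrh = np.exp(-Ea / k_B * (1.0 / (273.15 + temp_c) - 1.0 / 298.15))
        return 1.0 - a * arrh * np.sqrt(t_days) - b * n_cycles

    # Extrapolate for a PV-smoothing duty cycle: one full cycle per day at 30 C.
    t = np.arange(0, 3650, 365)
    print(np.round(capacity(t, t, 30.0), 3))            # relative capacity by year
    ```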

  2. Lubrication Theory Model to Evaluate Surgical Alterations in Flow Mechanics of the Lower Esophageal Sphincter

    NASA Astrophysics Data System (ADS)

    Ghosh, Sudip K.; Brasseur, James G.; Zaki, Tamer; Kahrilas, Peter J.

    2003-11-01

    Surgery is commonly used to rebuild a weak lower esophageal sphincter (LES) and reduce reflux. Because the driving pressure (DP) is proportional to muscle tension generated in the esophagus, we developed models using lubrication theory to evaluate the consequences of surgery on muscle force required to open the LES and drive the flow. The models relate time changes in DP to lumen geometry and trans-LES flow with a manometric catheter. Inertial effects were included and found negligible. Two models, direct (opening specified) and indirect (opening predicted), were combined with manometric pressure and imaging data from normal and post-surgery LES. A very high sensitivity was predicted between the details of the DP and LES opening. The indirect model accurately captured LES opening and predicted a 3-phase emptying process, with phases I and III requiring rapid generation of muscle tone to open the LES and empty the esophagus. Data showed that phases I and III are adversely altered by surgery causing incomplete emptying. Parametric model studies indicated that changes to the surgical procedure can positively alter LES flow mechanics and improve clinical outcomes.

  3. Changing clothes easily: connexin41.8 regulates skin pattern variation.

    PubMed

    Watanabe, Masakatsu; Kondo, Shigeru

    2012-05-01

    The skin patterns of animals are very important for their survival, yet the mechanisms involved in skin pattern formation remain unresolved. Turing's reaction-diffusion model presents a well-known mathematical explanation of how animal skin patterns are formed, and this model can predict various animal patterns that are observed in nature. In this study, we used transgenic zebrafish to generate various artificial skin patterns including a narrow stripe with a wide interstripe, a narrow stripe with a narrow interstripe, a labyrinth, and a 'leopard' pattern (or donut-like ring pattern). In this process, connexin41.8 (or its mutant form) was ectopically expressed using the mitfa promoter. Specifically, the leopard pattern was generated as predicted by Turing's model. Our results demonstrate that the pigment cells in animal skin have the potential and plasticity to establish various patterns and that the reaction-diffusion principle can predict skin patterns of animals. © 2012 John Wiley & Sons A/S.
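
    The reaction-diffusion mechanism invoked above can be sketched with a minimal one-dimensional Turing-type (Gray-Scott) simulation; the parameters are illustrative and the model is a generic stand-in, not the zebrafish-specific system:

    ```python
    import numpy as np

    # Gray-Scott parameters in a regime known to break a uniform state into
    # periodic spots/stripes (illustrative values; grid spacing dx = 1).
    n, du, dv, f, k, dt = 200, 0.16, 0.08, 0.035, 0.060, 1.0
    u, v = np.ones(n), np.zeros(n)
    u[90:110] = 0.5                                   # local perturbation seeds
    v[90:110] = 0.5                                   # the pattern

    lap = lambda x: np.roll(x, 1) + np.roll(x, -1) - 2 * x  # periodic 1-D Laplacian
    for _ in range(10000):
        uvv = u * v * v
        u += dt * (du * lap(u) - uvv + f * (1 - u))
        v += dt * (dv * lap(v) + uvv - (f + k) * v)

    # A train of local maxima in v indicates Turing-type patterning.
    peaks = (v > np.roll(v, 1)) & (v > np.roll(v, -1)) & (v > 0.1)
    print("pattern peaks:", int(np.sum(peaks)))
    ```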

  4. JIGSAW-GEO (1.0): Locally Orthogonal Staggered Unstructured Grid Generation for General Circulation Modelling on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2017-01-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  5. JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren

    2017-06-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  6. Mapping the global depth to bedrock for land surface modelling

    NASA Astrophysics Data System (ADS)

    Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.

    2017-12-01

    Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and Gradient Boosting Tree algorithms. The final predictions were generated at the spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths greater than 200 cm are predicted as 200 cm). The model for the occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in study areas where more detailed maps of depth to bedrock exist show a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation into data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.

  7. Mapping the global depth to bedrock for land surface modeling

    NASA Astrophysics Data System (ADS)

    Shangguan, Wei; Hengl, Tomislav; Mendes de Jesus, Jorge; Yuan, Hua; Dai, Yongjiu

    2017-03-01

    Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and Gradient Boosting Tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation in absolute DTB and 34% in censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in study areas where more detailed maps of depth to bedrock exist show a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
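
    Records 6 and 7 describe the same two-learner ensemble. As a hedged sketch of that workflow (not the authors' code), the Python below fits random forest and gradient boosting regressors on synthetic data standing in for the 155 covariates, then averages their out-of-sample predictions as the final ensemble.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    # Synthetic stand-in for the 155-covariate training matrix in the abstract.
    X, y = make_regression(n_samples=2000, n_features=155, noise=10.0, random_state=0)

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    gb = GradientBoostingRegressor(n_estimators=200, random_state=0)

    # 10-fold out-of-sample predictions for each learner, then a simple
    # two-member ensemble formed by averaging, as in the paper's final map.
    pred_rf = cross_val_predict(rf, X, y, cv=10)
    pred_gb = cross_val_predict(gb, X, y, cv=10)
    pred_ens = 0.5 * (pred_rf + pred_gb)

    print("RF R^2:       %.2f" % r2_score(y, pred_rf))
    print("GB R^2:       %.2f" % r2_score(y, pred_gb))
    print("Ensemble R^2: %.2f" % r2_score(y, pred_ens))
    ```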

  8. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports.

    PubMed

    Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Cai, T; Ebert, D D; Hwang, I; Li, J; de Jonge, P; Nierenberg, A A; Petukhova, M V; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M

    2016-10-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. Although efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine-learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared with observed scores assessed 10-12 years after baseline. ML model prediction accuracy was also compared with that of conventional logistic regression models. Area under the receiver operating characteristic curve based on ML (0.63 for high chronicity and 0.71-0.76 for the other prospective outcomes) was consistently higher than for the logistic models (0.62-0.70) despite the latter models including more predictors. A total of 34.6-38.1% of respondents with subsequent high persistence-chronicity and 40.8-55.8% with the severity indicators were in the top 20% of the baseline ML-predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML-predicted risk distribution. These results confirm that clinically useful MDD risk-stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models.

  9. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports

    PubMed Central

    Kessler, Ronald C.; van Loo, Hanna M.; Wardenaar, Klaas J.; Bossarte, Robert M.; Brenner, Lisa A.; Cai, Tianxi; Ebert, David Daniel; Hwang, Irving; Li, Junlong; de Jonge, Peter; Nierenberg, Andrew A.; Petukhova, Maria V.; Rosellini, Anthony J.; Sampson, Nancy A.; Schoevers, Robert A.; Wilcox, Marsha A.; Zaslavsky, Alan M.

    2015-01-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. While efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity, and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1,056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared to observed scores assessed 10–12 years after baseline. ML model prediction accuracy was also compared to that of conventional logistic regression models. Area under the receiver operating characteristic curve (AUC) based on ML (.63 for high chronicity and .71–.76 for the other prospective outcomes) was consistently higher than for the logistic models (.62–.70) despite the latter models including more predictors. 34.6–38.1% of respondents with subsequent high persistence-chronicity and 40.8–55.8% with the severity indicators were in the top 20% of the baseline ML predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML predicted risk distribution. These results confirm that clinically useful MDD risk stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models. PMID:26728563
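
    For illustration of the risk-stratification summary used in records 8 and 9 (an AUC plus the concentration of cases in the top 20% of predicted risk), here is a minimal Python sketch on simulated scores; the data are synthetic and the WMH ML models themselves are not reproduced.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: baseline ML risk scores and observed outcomes.
    risk = rng.uniform(size=1056)
    outcome = rng.binomial(1, p=np.clip(risk, 0.05, 0.6))  # outcome loosely tracks risk

    print("AUC:", round(roc_auc_score(outcome, risk), 2))

    # Concentration of cases in the top 20% of the predicted risk distribution,
    # the risk-stratification summary reported in the abstract.
    top20 = risk >= np.quantile(risk, 0.8)
    share = outcome[top20].sum() / outcome.sum()
    print("Share of cases in top risk quintile: %.1f%%" % (100 * share))
    ```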

  10. The Next Generation of Community College Leaders

    ERIC Educational Resources Information Center

    McArdle, Michele K.

    2013-01-01

    Nearly a decade ago, Sullivan presented her interpretation of the four generations of community college leaders by describing them as "the founding fathers, the good managers, the collaborators, and the millennium generation" (Sullivan, 2001, p. 559). She predicted a shift to frames utilizing the Four-Frame Model of Leadership by Bolman…

  11. Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio

    1997-01-01

    In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data was collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.

  12. The panic attack-posttraumatic stress disorder model: applicability to orthostatic panic among Cambodian refugees.

    PubMed

    Hinton, Devon E; Hofmann, Stefan G; Pitman, Roger K; Pollack, Mark H; Barlow, David H

    2008-01-01

    This article examines the ability of the panic attack-posttraumatic stress disorder (PTSD) model to predict how panic attacks are generated and how panic attacks worsen PTSD. The article does so by determining the validity of the panic attack-PTSD model with respect to one type of panic attack among traumatized Cambodian refugees: orthostatic panic (OP) attacks (i.e. panic attacks generated by moving from lying or sitting to standing). Among Cambodian refugees attending a psychiatric clinic, the authors conducted two studies to explore the validity of the panic attack-PTSD model as applied to OP patients (i.e. patients with at least one episode of OP in the previous month). In Study 1, the panic attack-PTSD model accurately indicated how OP is seemingly generated: among OP patients (N = 58), orthostasis-associated flashbacks and catastrophic cognitions predicted OP severity beyond a measure of anxious-depressive distress (Symptom Checklist-90-R subscales), and OP severity significantly mediated the effect of anxious-depressive distress on Clinician-Administered PTSD Scale severity. In Study 2, as predicted by the panic attack-PTSD model, OP had a mediational role with respect to the effect of treatment on PTSD severity: among Cambodian refugees with PTSD and comorbid OP who participated in a cognitive behavioural therapy study (N = 56), improvement in PTSD severity was partially mediated by improvement in OP severity.

  13. Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Mengshoel, Ole

    2008-01-01

    Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty like sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks the occurrence of false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handle this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is used to compute most probable fault candidates.
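
    As a toy illustration of encoding a probabilistic measure of fit rather than a hard consistent/inconsistent threshold, the sketch below computes a posterior over fault candidates by Bayes' rule. The candidates, residuals, and noise level are invented; the paper's automatically generated Bayesian networks are far richer than this single-sensor example.

    ```python
    import numpy as np

    # Hypothetical three-candidate example: prior fault probabilities and the
    # residual (prediction minus observation) each candidate would explain.
    candidates = ["nominal", "valve_stuck", "sensor_bias"]
    prior = np.array([0.90, 0.05, 0.05])
    expected_residual = np.array([0.0, 2.0, -1.5])   # per-candidate predicted residual
    sigma = 0.8                                      # assumed sensor noise level

    observed_residual = 1.7

    # Gaussian "degree of match" between predictions and the observation,
    # replacing a binary consistency decision.
    likelihood = np.exp(-0.5 * ((observed_residual - expected_residual) / sigma) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()

    # Most probable fault candidates, ranked by posterior probability.
    for name, p in sorted(zip(candidates, posterior), key=lambda t: -t[1]):
        print(f"{name}: {p:.3f}")
    ```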

  14. CONFOLD2: improved contact-driven ab initio protein structure modeling.

    PubMed

    Adhikari, Badri; Cheng, Jianlin

    2018-01-25

    Contact-guided protein structure prediction methods are becoming increasingly successful because of the latest advances in residue-residue contact prediction. To support contact-driven structure prediction, effective tools that can quickly build tertiary structural models of good quality from predicted contacts need to be developed. We develop an improved contact-driven protein modelling method, CONFOLD2, and study how it may be effectively used for ab initio protein structure prediction with predicted contacts as input. It builds models using various subsets of input contacts to explore the fold space under the guidance of a soft square energy function, and then clusters the models to obtain the top five models. CONFOLD2 obtains an average reconstruction accuracy of 0.57 TM-score for the 150 proteins in the PSICOV contact prediction dataset. When benchmarked on the CASP11 contacts predicted using CONSIP2 and CASP12 contacts predicted using Raptor-X, CONFOLD2 achieves a mean TM-score of 0.41 on both datasets. CONFOLD2 can quickly generate the top five structural models for a protein sequence when its secondary structure and contact predictions are at hand. The source code of CONFOLD2 is publicly available at https://github.com/multicom-toolbox/CONFOLD2/.

  15. The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales

    NASA Technical Reports Server (NTRS)

    Koster, R. D.

    1999-01-01

    The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.

  16. Daily Streamflow Predictions in an Ungauged Watershed in Northern California Using the Precipitation-Runoff Modeling System (PRMS): Calibration Challenges when nearby Gauged Watersheds are Hydrologically Dissimilar

    NASA Astrophysics Data System (ADS)

    Dhakal, A. S.; Adera, S.

    2017-12-01

    Accurate daily streamflow prediction in ungauged watersheds with sparse information is challenging. The ability of a hydrologic model calibrated using nearby gauged watersheds to predict streamflow accurately depends on hydrologic similarities between the gauged and ungauged watersheds. This study examines daily streamflow predictions using the Precipitation-Runoff Modeling System (PRMS) for the largely ungauged San Antonio Creek watershed, a 96 km² sub-watershed of the Alameda Creek watershed in Northern California. The process-based PRMS model is being used to improve the accuracy of recent San Antonio Creek streamflow predictions generated by two empirical methods. Although San Antonio Creek watershed is largely ungauged, daily streamflow data exists for hydrologic years (HY) 1913-1930. PRMS was calibrated for HY 1913-1930 using streamflow data, modern-day land use and PRISM precipitation distribution, and gauged precipitation and temperature data from a nearby watershed. The PRMS model was then used to generate daily streamflows for HY 1996-2013, during which the watershed was ungauged, and hydrologic responses were compared to two nearby gauged sub-watersheds of Alameda Creek. Finally, the PRMS-predicted daily flows for HY 1996-2013 were compared to the two empirically-predicted streamflow time series: (1) the reservoir mass balance method and (2) correlation of historical streamflows from 80-100 years ago between San Antonio Creek and a nearby sub-watershed located in Alameda Creek. While the mass balance approach using reservoir storage and transfers is helpful for estimating inflows to the reservoir, large discrepancies in daily streamflow estimation can arise. Similarly, correlation-based predicted daily flows, which rely on a relationship from flows collected 80-100 years ago, may not represent current watershed hydrologic conditions. This study aims to develop a method of streamflow prediction in the San Antonio Creek watershed by examining PRMS's model outputs as well as empirically generated flow data for their use in water resources management decisions. PRMS is also being used to better understand the streamflow patterns in the San Antonio Creek watershed for a variety of antecedent soil moisture conditions, as the creek is generally dry between late spring and early fall.

  17. Explaining Entrepreneurial Behavior: Dispositional Personality Traits, Growth of Personal Entrepreneurial Resources, and Business Idea Generation

    ERIC Educational Resources Information Center

    Obschonka, Martin; Silbereisen, Rainer K.; Schmitt-Rodermund, Eva

    2012-01-01

    Applying a life-span approach of human development and using the example of science-based business idea generation, the authors used structural equation modeling to test a mediation model for predicting entrepreneurial behavior in a sample of German scientists (2 measurement occasions; Time 1, N = 488). It was found that recalled early…

  18. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers, or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets, which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  20. An Alternative Procedure for Estimating Unit Learning Curves,

    DTIC Science & Technology

    1985-09-01

    the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for...predicting unit production costs. There are, however, some unique estimation problems inherent in the model. The usual method of generating predicted unit...production costs attempts to extend properties of least squares estimators to non-linear functions of these estimators. The result is biased estimates of

  1. Real time wave forecasting using wind time history and numerical model

    NASA Astrophysics Data System (ADS)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real-time prediction of waves over a typical time duration of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option to do so, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for those stations where the costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes, the wave data were generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and the predictions over shorter lead times were better than the predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
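
    A rough sketch of the time-series setup described above, with an ANN standing in for the ANN/GP/MT trio: lagged wind speed and direction predict wave height a few hours ahead. The synthetic series, lag length, and network size are all assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Hypothetical hourly series: wind speed (m/s), wind direction (deg), and a
    # significant wave height that lags and smooths the wind forcing.
    n = 2000
    speed = 6 + 3 * np.sin(np.arange(n) / 24) + rng.normal(0, 1, n)
    direction = (180 + 60 * np.sin(np.arange(n) / 48)) % 360
    hs = 0.3 + 0.05 * np.convolve(speed, np.ones(6) / 6, mode="same")

    # Features: preceding k hours of wind speed and direction; target: wave
    # height at a 3-hour lead, mirroring the time-series setup in the abstract.
    k, lead = 12, 3
    rows = range(k, n - lead)
    X = np.array([np.r_[speed[i - k:i], direction[i - k:i]] for i in rows])
    y = np.array([hs[i + lead] for i in rows])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("Test R^2:", round(model.score(X_te, y_te), 2))
    ```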

  2. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
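
    Of the three rational division methods named, Kennard-Stone is the most easily sketched. Below is a minimal Python implementation, assuming Euclidean distance in descriptor space; it is an illustrative version of the algorithm, not the study's code.

    ```python
    import numpy as np

    def kennard_stone(X, n_select):
        """Kennard-Stone selection: a minimal sketch of one rational splitting method.

        Starting from the two most distant samples, iteratively add the sample
        farthest (in Euclidean distance) from everything already selected.
        Returns indices of the selected training samples.
        """
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        selected = list(np.unravel_index(np.argmax(dist), dist.shape))
        while len(selected) < n_select:
            min_d = dist[:, selected].min(axis=1)   # distance to nearest selected point
            min_d[selected] = -1                    # never re-pick a selected sample
            selected.append(int(np.argmax(min_d)))
        return np.array(selected)

    # Example: 80/20 training/test split of a random descriptor matrix.
    X = np.random.default_rng(0).normal(size=(100, 5))
    train_idx = kennard_stone(X, 80)
    test_idx = np.setdiff1d(np.arange(100), train_idx)
    print(len(train_idx), len(test_idx))
    ```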

  3. Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V., Dr.; Burley, Casey L.

    2011-01-01

    The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.

  4. Photodynamic therapy: computer modeling of diffusion and reaction phenomena

    NASA Astrophysics Data System (ADS)

    Hampton, James A.; Mahama, Patricia A.; Fournier, Ronald L.; Henning, Jeffery P.

    1996-04-01

    We have developed a transient, one-dimensional mathematical model for the reaction and diffusion phenomena that occur during photodynamic therapy (PDT). This model is referred to as the PDTmodem program. The model is solved by the Crank-Nicolson finite difference technique and can be used to predict the fates of important molecular species within the intercapillary tissue undergoing PDT. The following factors govern molecular oxygen consumption and singlet oxygen generation within a tumor: (1) photosensitizer concentration; (2) fluence rate; and (3) intercapillary spacing. In an effort to maximize direct tumor cell killing, the model allows educated decisions to be made to ensure the uniform generation and exposure of singlet oxygen to tumor cells across the intercapillary space. Based on predictions made by the model, we have determined that the singlet oxygen concentration profile within the intercapillary space is controlled by the product of the drug concentration and light fluence rate. The model predicts that at high levels of this product, within seconds singlet oxygen generation is limited to a small core of cells immediately surrounding the capillary. The remainder of the tumor tissue in the intercapillary space is anoxic and protected from the generation and toxic effects of singlet oxygen. However, at lower values of this product, the PDT-induced anoxic regions are not observed. An important finding is that an optimal value of this product can be defined that maintains the singlet oxygen concentration throughout the intercapillary space at a near constant level. Direct tumor cell killing is therefore postulated to depend on the singlet oxygen exposure, defined as the product of the uniform singlet oxygen concentration and the time of exposure, and not on the total light dose.
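
    To make the numerical approach concrete, here is a hedged one-dimensional Crank-Nicolson sketch for a diffusion-plus-consumption equation, du/dt = D d²u/dx² - k·u. It is a toy stand-in for the paper's PDTmodem model; all parameters are nondimensional and invented.

    ```python
    import numpy as np

    # Grid, time step, and toy transport/consumption parameters (all invented).
    nx, nt = 101, 500
    dx, dt = 1.0 / (nx - 1), 1e-3
    D, k = 1.0, 5.0
    r = D * dt / (2 * dx**2)

    u = np.zeros(nx)
    u[0] = 1.0  # capillary wall held at a fixed oxygen level

    # Build the implicit (A) and explicit (B) Crank-Nicolson operators.
    A = np.diag(np.full(nx, 1 + 2 * r + k * dt / 2)) \
        + np.diag(-r * np.ones(nx - 1), 1) + np.diag(-r * np.ones(nx - 1), -1)
    B = np.diag(np.full(nx, 1 - 2 * r - k * dt / 2)) \
        + np.diag(r * np.ones(nx - 1), 1) + np.diag(r * np.ones(nx - 1), -1)

    # Dirichlet boundary rows: keep the end values fixed each step.
    for M in (A, B):
        M[0, :], M[-1, :] = 0, 0
        M[0, 0] = M[-1, -1] = 1

    for _ in range(nt):
        u = np.linalg.solve(A, B @ u)

    print("oxygen at mid-tissue:", round(u[nx // 2], 4))
    ```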

  5. Generational forecasting in academic medicine: a unique method of planning for success in the next two decades.

    PubMed

    Howell, Lydia Pleotis; Joad, Jesse P; Callahan, Edward; Servis, Gregg; Bonham, Ann C

    2009-08-01

    Multigenerational teams are essential to the missions of academic health centers (AHCs). Generational forecasting using Strauss and Howe's predictive model, "the generational diagonal," can be useful for anticipating and addressing issues so that each generation is effective. Forecasts are based on the observation that cyclical historical events are experienced by all generations, but the response of each generation differs according to its phase of life and previous defining experiences. This article relates Strauss and Howe's generational forecasts to AHCs. Predicted issues such as work-life balance, indebtedness, and succession planning have existed previously, but they now have different causes or consequences because of the unique experiences and life stages of current generations. Efforts to address these issues at the authors' AHC include a work-life balance workgroup, expanded leave, and intramural grants.

  6. Implications of Higgs searches on the four-generation standard model.

    PubMed

    Kuflik, Eric; Nir, Yosef; Volansky, Tomer

    2013-03-01

    Within the four-generation standard model, the Higgs couplings to gluons and to photons deviate significantly from the predictions of the three-generation standard model. As a consequence, large departures in several Higgs production and decay channels are expected. Recent Higgs search results, presented by ATLAS, CMS, and CDF, hint at the existence of a Higgs boson with a mass around 125 GeV. Using these results and assuming such a Higgs boson, we derive exclusion limits on the four-generation standard model. For m(H)=125 GeV, the model is excluded at above the 99.95% confidence level. For 124.5 GeV≤m(H)≤127.5 GeV, an exclusion limit above the 99% confidence level is found.

  7. A spatially distributed model for the dynamic prediction of sediment erosion and transport in mountainous forested watersheds

    NASA Astrophysics Data System (ADS)

    Doten, Colleen O.; Bowling, Laura C.; Lanini, Jordan S.; Maurer, Edwin P.; Lettenmaier, Dennis P.

    2006-04-01

    Erosion and sediment transport in a temperate forested watershed are predicted with a new sediment model that represents the main sources of sediment generation in forested environments (mass wasting, hillslope erosion, and road surface erosion) within the distributed hydrology-soil-vegetation model (DHSVM) environment. The model produces slope failures on the basis of a factor-of-safety analysis with the infinite slope model through use of stochastically generated soil and vegetation parameters. Failed material is routed downslope with a rule-based scheme that determines sediment delivery to streams. Sediment from hillslopes and road surfaces is also transported to the channel network. A simple channel routing scheme is implemented to predict basin sediment yield. We demonstrate through an initial application of this model to the Rainy Creek catchment, a tributary of the Wenatchee River, which drains the east slopes of the Cascade Mountains, that the model produces plausible sediment yield and ratios of landsliding and surface erosion when compared to published rates for similar catchments in the Pacific Northwest. A road removal scenario and a basin-wide fire scenario are both evaluated with the model.
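
    The factor-of-safety analysis referenced above typically uses the standard infinite-slope expression; as a reminder, in generic notation that may differ from the paper's (effective cohesion c′, friction angle φ′, soil unit weight γ, water unit weight γ_w, soil depth z, saturated fraction m, slope angle β):

    ```latex
    % Standard infinite-slope factor of safety (generic notation, assumed here):
    F_s \;=\; \frac{c' + \left(\gamma z - m\,\gamma_w z\right)\cos^2\beta\,\tan\phi'}
                   {\gamma z\,\sin\beta\cos\beta}
    ```

    Failure is predicted where F_s falls below one; drawing c′, φ′ and root cohesion stochastically, as the model does, turns this into a probability of failure per pixel.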

  8. A genome-scale metabolic flux model of Escherichia coli K–12 derived from the EcoCyc database

    PubMed Central

    2014-01-01

    Background Constraint-based models of Escherichia coli metabolic flux have played a key role in computational studies of cellular metabolism at the genome scale. We sought to develop a next-generation constraint-based E. coli model that achieved improved phenotypic prediction accuracy while being frequently updated and easy to use. We also sought to compare model predictions with experimental data to highlight open questions in E. coli biology. Results We present EcoCyc–18.0–GEM, a genome-scale model of the E. coli K–12 MG1655 metabolic network. The model is automatically generated from the current state of EcoCyc using the MetaFlux software, enabling the release of multiple model updates per year. EcoCyc–18.0–GEM encompasses 1445 genes, 2286 unique metabolic reactions, and 1453 unique metabolites. We demonstrate a three-part validation of the model that breaks new ground in breadth and accuracy: (i) Comparison of simulated growth in aerobic and anaerobic glucose culture with experimental results from chemostat culture and simulation results from the E. coli modeling literature. (ii) Essentiality prediction for the 1445 genes represented in the model, in which EcoCyc–18.0–GEM achieves an improved accuracy of 95.2% in predicting the growth phenotype of experimental gene knockouts. (iii) Nutrient utilization predictions under 431 different media conditions, for which the model achieves an overall accuracy of 80.7%. The model’s derivation from EcoCyc enables query and visualization via the EcoCyc website, facilitating model reuse and validation by inspection. We present an extensive investigation of disagreements between EcoCyc–18.0–GEM predictions and experimental data to highlight areas of interest to E. coli modelers and experimentalists, including 70 incorrect predictions of gene essentiality on glucose, 80 incorrect predictions of gene essentiality on glycerol, and 83 incorrect predictions of nutrient utilization. Conclusion Significant advantages can be derived from the combination of model organism databases and flux balance modeling represented by MetaFlux. Interpretation of the EcoCyc database as a flux balance model results in a highly accurate metabolic model and provides a rigorous consistency check for information stored in the database. PMID:24974895
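
    A minimal sketch of the kind of growth and gene-essentiality simulations described, using the COBRApy library; the SBML filename is hypothetical, and the exact format of the MetaFlux export may differ from what is assumed here.

    ```python
    # Hedged sketch, assuming an SBML export of the model is available locally.
    from cobra.io import read_sbml_model
    from cobra.flux_analysis import single_gene_deletion

    model = read_sbml_model("EcoCyc-18.0-GEM.xml")  # hypothetical path

    # Simulated aerobic glucose growth: maximize the biomass objective.
    solution = model.optimize()
    print("Predicted growth rate:", solution.objective_value)

    # Gene essentiality screen: knock out each gene and re-optimize, as in
    # the essentiality validation reported in the abstract.
    deletions = single_gene_deletion(model)
    essential = deletions[deletions["growth"].fillna(0.0) < 1e-6]
    print("Predicted essential genes:", len(essential))
    ```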

  9. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  10. Cheminformatics analysis of assertions mined from literature that describe drug-induced liver injury in different species.

    PubMed

    Fourches, Denis; Barnes, Julie C; Day, Nicola C; Bradley, Paul; Reed, Jane Z; Tropsha, Alexander

    2010-01-01

    Drug-induced liver injury is one of the main causes of drug attrition. The ability to predict the liver effects of drug candidates from their chemical structures is critical to help guide experimental drug discovery projects toward safer medicines. In this study, we have compiled a data set of 951 compounds reported to produce a wide range of effects in the liver in different species, comprising humans, rodents, and nonrodents. The liver effects for this data set were obtained as assertional metadata, generated from MEDLINE abstracts using a unique combination of lexical and linguistic methods and ontological rules. We have analyzed this data set using conventional cheminformatics approaches and addressed several questions pertaining to cross-species concordance of liver effects, chemical determinants of liver effects in humans, and the prediction of whether a given compound is likely to cause a liver effect in humans. We found that the concordance of liver effects was relatively low (ca. 39-44%) between different species, raising the possibility that species specificity could depend on specific features of chemical structure. Compounds were clustered by their chemical similarity, and similar compounds were examined for the expected similarity of their species-dependent liver effect profiles. In most cases, similar profiles were observed for members of the same cluster, but some compounds appeared as outliers. The outliers were the subject of focused assertion regeneration from MEDLINE as well as other data sources. In some cases, additional biological assertions were identified, which were in line with expectations based on compounds' chemical similarities. The assertions were further converted to binary annotations of underlying chemicals (i.e., liver effect vs no liver effect), and binary quantitative structure-activity relationship (QSAR) models were generated to predict whether a compound would be expected to produce liver effects in humans. Despite the apparent heterogeneity of data, models have shown good predictive power assessed by external 5-fold cross-validation procedures. The external predictive power of binary QSAR models was further confirmed by their application to compounds that were retrieved or studied after the model was developed. To the best of our knowledge, this is the first study for chemical toxicity prediction that applied QSAR modeling and other cheminformatics techniques to observational data generated by the means of automated text mining with limited manual curation, opening up new opportunities for generating and modeling chemical toxicology data.
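
    As a schematic of the binary QSAR step only (not the authors' descriptors or models), the Python sketch below computes Morgan fingerprints with RDKit and cross-validates a random forest on a toy set of molecules; the SMILES strings and labels are invented placeholders.

    ```python
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Toy stand-in for the curated set: SMILES with binary liver-effect labels.
    # Structures and labels are illustrative only, not from the study.
    smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC", "Clc1ccccc1", "CC(C)O"]
    labels = [0, 1, 1, 0, 1, 0]

    # Morgan (circular) fingerprints as descriptors for a binary QSAR model.
    fps = []
    for smi in smiles:
        fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=1024)
        arr = np.zeros((1024,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        fps.append(arr)
    X, y = np.array(fps), np.array(labels)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # The study used external 5-fold cross-validation; this toy set supports 3 folds.
    print("CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())
    ```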

  11. Comparison between model predictions and observations of ELF radio atmospherics generated by rocket-triggered lightning

    NASA Astrophysics Data System (ADS)

    Dupree, N. A.; Moore, R. C.

    2011-12-01

    Model predictions of the ELF radio atmospheric generated by rocket-triggered lightning are compared with observations performed at Arrival Heights, Antarctica. The ability to infer source characteristics using observations at great distances may prove to greatly enhance the understanding of lightning processes that are associated with the production of transient luminous events (TLEs) as well as other ionospheric effects associated with lightning. The modeling of the sferic waveform is carried out using a modified version of the Long Wavelength Propagation Capability (LWPC) code developed by the Naval Ocean Systems Center over a period of many years. LWPC is an inherently narrowband propagation code that has been modified to predict the broadband response of the Earth-ionosphere waveguide to an impulsive lightning flash while preserving the ability of LWPC to account for an inhomogeneous waveguide. ELF observations performed at Arrival Heights, Antarctica during rocket-triggered lightning experiments at the International Center for Lightning Research and Testing (ICLRT) located at Camp Blanding, Florida are presented. The lightning current waveforms directly measured at the base of the lightning channel (at the ICLRT) are used together with LWPC to predict the sferic waveform observed at Arrival Heights under various ionospheric conditions. This paper critically compares observations with model predictions.

  12. Generating Adaptive Behaviour within a Memory-Prediction Framework

    PubMed Central

    Rawlinson, David; Kowadlo, Gideon

    2012-01-01

    The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play “rocks, paper, scissors” against a predictable opponent. PMID:22272231

  13. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
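
    The paper's algorithm is not reproduced here, but the core notion of Pareto optimality over candidate models is easy to sketch: keep the models that no other model dominates on all scoring criteria. A minimal Python version with invented scores follows.

    ```python
    import numpy as np

    def pareto_front(scores):
        """Return indices of non-dominated points (all objectives maximized).

        Each row holds a candidate model's scores, e.g. (estimated reliability
        for the new compound, historical accuracy on similar compounds).
        """
        n = scores.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            # i is dominated if some j is >= on every objective and > on at least one.
            dominated = np.all(scores >= scores[i], axis=1) & \
                        np.any(scores > scores[i], axis=1)
            if dominated.any():
                keep[i] = False
        return np.flatnonzero(keep)

    # Example: five candidate models scored on two criteria.
    scores = np.array([[0.9, 0.4], [0.7, 0.7], [0.6, 0.9], [0.5, 0.5], [0.8, 0.6]])
    print("Pareto-optimal models:", pareto_front(scores))  # -> [0 1 2 4]
    ```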

  14. Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell

    2014-01-01

    Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.

  15. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

    2018-05-01

    This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied 'as is' to process each image. From the CAD-generated results, four detection features were computed from each image: the total number of (1) initial detection seeds and (2) final detected false-positive regions, and the (3) average and (4) sum of detection scores. Then, by combining the features computed from two bilateral images of left and right breasts from either craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded the maximum prediction accuracy with an area under the ROC curve of AUC  =  0.65  ±  0.017 and the maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p  <  0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.
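
    A hedged sketch of the evaluation design only: leave-one-case-out cross-validated logistic regression scored by AUC. The four features are simulated stand-ins for the CAD-derived quantities, not the study data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the four CAD-derived features per case
    # (seed count, false-positive region count, mean and summed detection score).
    n = 300
    X = rng.normal(size=(n, 4))
    y = (X @ np.array([0.8, 0.6, 0.3, 0.4]) + rng.normal(0, 1.5, n) > 0).astype(int)

    # Leave-one-case-out cross-validated risk scores, as in the study design.
    clf = LogisticRegression()
    scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("AUC:", round(roc_auc_score(y, scores), 3))
    ```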

  16. Thermal modeling of the lithium/polymer battery

    NASA Astrophysics Data System (ADS)

    Pals, C. R.

    1994-10-01

    Research in the area of advanced batteries for electric-vehicle applications has increased steadily since the 1990 zero-emission-vehicle mandate of the California Air Resources Board. Due to their design flexibility and potentially high energy and power densities, lithium/polymer batteries are an emerging technology for electric-vehicle applications. Thermal modeling of lithium/polymer batteries is particularly important because the transport properties of the system depend exponentially on temperature. Two models have been presented for assessment of the thermal behavior of lithium/polymer batteries. The one-cell model predicts the cell potential, the concentration profiles, and the heat-generation rate during discharge. The cell-stack model predicts temperature profiles and heat transfer limitations of the battery. Due to the variation of ionic conductivity and salt diffusion coefficient with temperature, the performance of the lithium/polymer battery is greatly affected by temperature. Because of this variation, it is important to optimize the cell operating temperature and design a thermal management system for the battery. Since the thermal conductivity of the polymer electrolyte is very low, heat is not easily conducted in the direction perpendicular to cell layers. Temperature profiles in the cells are not as significant as expected because heat-generation rates in warmer areas of the cell stack are lower than heat-generation rates in cooler areas of the stack. This nonuniform heat-generation rate flattens the temperature profile. Temperature profiles as calculated by this model are not as steep as those calculated by previous models that assume a uniform heat-generation rate.
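
    The exponential temperature dependence of the transport properties mentioned above is often written in Arrhenius form; as a generic illustration (the symbols are not the paper's notation, and polymer electrolytes are sometimes fitted with related Vogel-Tammann-Fulcher forms instead), the ionic conductivity would be modeled as:

    ```latex
    % Generic Arrhenius-type temperature dependence (illustrative only):
    % \kappa = ionic conductivity, E_a = activation energy, R = gas constant.
    \kappa(T) \;=\; \kappa_0 \exp\!\left(-\frac{E_a}{R\,T}\right)
    ```

    This form makes clear why warmer regions of the stack conduct better and generate less heat, flattening the temperature profile as the abstract describes.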

  17. Periodic acoustic radiation from a low aspect ratio propeller

    NASA Astrophysics Data System (ADS)

    Muench, John David

    An experimental program was conducted with the objective of providing high fidelity measurements of propeller inflow, unsteady blade surface pressures, and discrete acoustic radiation over a wide range of speeds. Anechoic wind tunnel experiments were performed using the SISUP propeller. The upstream stator blades generate large wake deficits that result in periodic unsteady blade forces that acoustically radiate at blade passing frequency and higher harmonics. The experimental portion of this research successfully measured the inflow velocity, blade-span unsteady pressures and directive characteristics of the blade-rate radiated noise associated with this complex propeller geometry while the propeller was operating on design. The spatial harmonic decomposition of the inflow revealed significant coefficients at 8, 16 and 24. The magnitude of the unsteady blade forces scales as U⁴ and linearly shifts in frequency with speed. The magnitude of the discrete frequency acoustic levels associated with blade rate scales as U⁶ and also shifts linearly with speed. At blade rate, the far-field acoustic directivity has a dipole-like directivity oriented perpendicular to the inflow. At the first harmonic of blade rate, the far-field directivity is not as well defined. The experimental inflow and blade surface pressure results were used to generate an acoustic prediction at blade rate based on a blade strip theory model developed by Blake (1986). The predicted acoustic levels were compared to the experimental results. The model adequately predicts the measured sound field at blade rate at 120 ft/s. Radiated noise at blade rate for 120 ft/s can be described by a dipole, whose orientation is perpendicular to the flow and is generated by the interaction of the rotating propeller with the 8th harmonic of the inflow. At blade rate for 60 ft/s, the model underpredicts measured levels. At the first harmonic of blade rate, for 120 ft/s, the sound field is described as a combination of dipole sources, one generated by the 16th harmonic, perpendicular to the inflow, and the other generated by the 12th harmonic of the inflow, parallel to the inflow. At the first harmonic of blade rate for 60 ft/s, the model underpredicts measured levels.

  18. Carbon deposition model for oxygen-hydrocarbon combustion. Task 6: Data analysis and formulation of an empirical model

    NASA Technical Reports Server (NTRS)

    Makel, Darby B.; Rosenberg, Sanders D.

    1990-01-01

    The formation and deposition of carbon (soot) was studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state-of-the-art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.

  19. How health leaders can benefit from predictive analytics.

    PubMed

    Giga, Aliyah

    2017-11-01

    Predictive analytics can support a better integrated health system providing continuous, coordinated, and comprehensive person-centred care to those who could benefit most. Beyond dollars saved, a predictive model in healthcare can generate meaningful improvements in efficiency, productivity, and costs, and can improve population health through targeted interventions for patients at risk.

  20. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to get reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than globally based BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
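
    For concreteness, here is a minimal EM sketch for estimating BMA weights from an ensemble, using Gaussian member likelihoods with a fixed, shared spread for brevity (full BMA also updates the variances). The three "LISFLOOD-FP configurations" and data are simulated assumptions, not the study's.

    ```python
    import numpy as np

    def bma_weights(preds, obs, sigma=1.0, n_iter=200):
        """EM sketch for BMA weights. preds: (n_models, n_times); obs: (n_times,)."""
        m, n = preds.shape
        w = np.full(m, 1.0 / m)
        for _ in range(n_iter):
            # E-step: responsibility of each model for each observation.
            lik = np.exp(-0.5 * ((obs - preds) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            z = w[:, None] * lik
            z /= z.sum(axis=0, keepdims=True)
            # M-step: update weights as the mean responsibility.
            w = z.mean(axis=1)
        return w

    # Example: three hypothetical model configurations predicting stage (m).
    rng = np.random.default_rng(0)
    truth = rng.normal(3.0, 0.5, size=200)
    preds = np.stack([truth + rng.normal(b, s, 200)
                      for b, s in [(0.1, 0.2), (-0.4, 0.6), (0.0, 0.9)]])
    w = bma_weights(preds, truth)
    print("BMA weights:", np.round(w, 3))                      # favors the best member
    print("BMA mean prediction (first 3):", np.round((w @ preds)[:3], 2))
    ```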

  1. Multiscale Systems Analysis of Root Growth and Development: Modeling Beyond the Network and Cellular Scales

    PubMed Central

    Band, Leah R.; Fozard, John A.; Godin, Christophe; Jensen, Oliver E.; Pridmore, Tony; Bennett, Malcolm J.; King, John R.

    2012-01-01

    Over recent decades, we have gained detailed knowledge of many processes involved in root growth and development. However, with this knowledge come increasing complexity and an increasing need for mechanistic modeling to understand how those individual processes interact. One major challenge is in relating genotypes to phenotypes, requiring us to move beyond the network and cellular scales, to use multiscale modeling to predict emergent dynamics at the tissue and organ levels. In this review, we highlight recent developments in multiscale modeling, illustrating how these are generating new mechanistic insights into the regulation of root growth and development. We consider how these models are motivating new biological data analysis and explore directions for future research. This modeling progress will be crucial as we move from a qualitative to an increasingly quantitative understanding of root biology, generating predictive tools that accelerate the development of improved crop varieties. PMID:23110897

  2. Three-Dimensional Magnetic Analysis Technique Developed for Evaluating Stirling Convertor Linear Alternators

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    2003-01-01

    The Department of Energy, the Stirling Technology Company (STC), and the NASA Glenn Research Center are developing Stirling convertors for Stirling radioisotope generators to provide electrical power for future NASA deep space missions. STC is developing the 55-We technology demonstration convertor (TDC) under contract to the Department of Energy. The Department of Energy recently named Lockheed Martin as the system integration contractor for the Stirling radioisotope generator development project. Lockheed Martin will develop the Stirling radioisotope generator engineering unit and has contract options to develop the qualification unit and the first flight unit. Glenn's role includes an in-house project to provide convertor, component, and materials testing and evaluation in support of the overall power system development. As a part of this work, Glenn has established an in-house Stirling research laboratory for testing, analyzing, and evaluating Stirling machines. STC has built four 55-We convertors for NASA, and these are being tested at Glenn. A cross-sectional view of the 55-We TDC is shown in the figure. Of critical importance to the successful development of the Stirling convertor for space power applications is the development of a lightweight and highly efficient linear alternator. In support, Glenn has been developing finite element analysis and finite element method tools for performing various linear alternator thermal and electromagnetic analyses and evaluating design configurations. A three-dimensional magnetostatic finite element model of STC's 55-We TDC linear alternator was developed to evaluate the demagnetization fields affecting the alternator magnets. Since the actual linear alternator hardware is symmetric to the quarter section about the axis of motion, only a quarter section of the alternator was modeled. The components modeled included the mover laminations, the neodymium-iron-boron magnets, the stator laminations, and the copper coils. The three-dimensional magnetostatic model was then coupled with a circuit simulator model of the alternator load and convertor controller. The coupled model was then used to generate alternator terminal voltage and current predictions. The predicted voltage and current waveforms agreed well with the experimental data, which tended to validate the accuracy of the coupled model. The model was then used to generate predictions of the demagnetization fields acting on the alternator magnets for the alternator under load. The preliminary model predictions indicate that the highest potential for demagnetization is along the inside surface of the uncovered magnets. The demagnetization field for the uncovered magnets when the mover is positioned at the end of a stroke is higher than it is when the mover is at the position of maximum induced voltage or maximum alternator current. Assuming normal load conditions, the model predicted that the onset of demagnetization is most likely to occur for magnet temperatures above 101 C.

  3. Fuels planning: science synthesis and integration; environmental consequences fact sheet 12: Water Erosion Prediction Project (WEPP) Fuel Management (FuMe) tool

    Treesearch

    William Elliot; David Hall

    2005-01-01

    The Water Erosion Prediction Project (WEPP) Fuel Management (FuMe) tool was developed to estimate sediment generated by fuel management activities. WEPP FuMe estimates sediment generated for 12 fuel-related conditions from a single input. This fact sheet identifies the intended users and uses, required inputs, what the model does, and tells the user how to obtain the...

  4. The 5th Generation model of Particle Physics

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2009-05-01

    The Standard Model of particle physics is able to account for all known HEP phenomena, yet it can neither predict the masses of the quarks or leptons nor explain why they have their respective values. The Checker Board Model (CBM) predicts that there are five generations of quarks and leptons and shows a pattern to those masses, namely that each trio of quarks or leptons (within adjacent generations or within a generation) is related by a geometric mean relationship. If the 2D structure of the nucleus is imagined as a 2D plate spinning on its axis, it would for all practical purposes appear to be a 3D object. The masses of the hypothesized ``up'' and ``dn'' quarks determined by the CBM are 237.31 MeV and 42.392 MeV, respectively. These new quarks, in addition to a lepton of 7.4 MeV, make up one of the missing generations. The details of this new particle physics model can be found at the web site checkerboard.dnsalias.net. The only area where this theory conflicts with existing dogma is the value of the mass of the Top quark. The particle found at Fermilab must be some sort of composite particle containing Top quarks.

  5. Mechanistic, Mathematical Model to Predict the Dynamics of Tissue Genesis in Bone Defects via Mechanical Feedback and Mediation of Biochemical Factors

    PubMed Central

    Moore, Shannon R.; Saidel, Gerald M.; Knothe, Ulf; Knothe Tate, Melissa L.

    2014-01-01

    The link between mechanics and biology in the generation and the adaptation of bone has been well studied in the context of skeletal development and fracture healing. Yet the prediction of tissue genesis within, and the spatiotemporal healing of, postnatal defects necessitates a quantitative evaluation of mechano-biological interactions using experimental and clinical parameters. To address this current gap in knowledge, this study aims to develop a mechanistic mathematical model of tissue genesis using bone morphogenetic protein (BMP) to represent a class of factors that may coordinate bone healing. Specifically, we developed a mechanistic, mathematical model to predict the dynamics of tissue genesis by periosteal progenitor cells within a long bone defect surrounded by periosteum and stabilized via an intramedullary nail. The emergent material properties and mechanical environment associated with nascent tissue genesis influence the strain stimulus sensed by progenitor cells within the periosteum. Using a mechanical finite element model, periosteal surface strains are predicted as a function of emergent, nascent tissue properties. Strains are then input to a mechanistic mathematical model, where mechanical regulation of BMP-2 production mediates rates of cellular proliferation, differentiation and tissue production, to predict healing outcomes. A parametric approach enables the spatial and temporal prediction of endochondral tissue regeneration, assessed as areas of cartilage and mineralized bone, as functions of radial distance from the periosteum and time. Comparing model results to histological outcomes from two previous studies of periosteum-mediated bone regeneration in a common ovine model, it was shown that mechanistic models incorporating mechanical feedback successfully predict patterns (spatial) and trends (temporal) of bone tissue regeneration. The novel model framework presented here integrates a mechanistic feedback system based on the mechanosensitivity of periosteal progenitor cells, which allows for modeling and prediction of tissue regeneration on multiple length and time scales. Through combination of computational, physical and engineering science approaches, the model platform provides a means to test new hypotheses in silico and to elucidate conditions conducive to endogenous tissue genesis. Next generation models will serve to unravel intrinsic differences in bone genesis by endochondral and intramembranous mechanisms. PMID:24967742
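    The feedback loop the abstract describes (tissue stiffens the defect, lowering strain, which modulates BMP-2 and hence cell and tissue dynamics) can be caricatured as a tiny ODE system. This is a hypothetical sketch, not the authors' calibrated model; all rate constants are invented.

```python
# Minimal, hypothetical sketch of strain -> BMP-2 -> tissue-genesis feedback.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k_prolif=0.2, k_tissue=0.05, strain0=0.03):
    cells, tissue = y
    strain = strain0 / (1.0 + tissue)              # stiffer tissue -> lower strain
    bmp = strain / (strain + 0.01)                 # saturating, mechano-regulated BMP-2
    dcells = k_prolif * bmp * cells * (1 - cells)  # logistic proliferation
    dtissue = k_tissue * bmp * cells               # BMP-mediated tissue production
    return [dcells, dtissue]

sol = solve_ivp(rhs, (0, 200), [0.05, 0.0])
print(sol.y[:, -1])  # final normalized cell density and tissue fill
```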

  6. Building and validating a prediction model for paediatric type 1 diabetes risk using next generation targeted sequencing of class II HLA genes.

    PubMed

    Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke

    2017-11-01

    It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we used a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10⁻⁹²), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with an AUC of 0.886. Combining both training and validation data resulted in a predictive model with an AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns; this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
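    The validation metric used throughout the abstract, the AUC of the ROC, can be computed directly from risk scores and case/control labels. The sketch below uses synthetic stand-ins for the HLA-based scores.

```python
# Sketch: AUC of a risk score on held-out cases and controls (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.r_[np.ones(200), np.zeros(800)]                 # 1 = T1D case
scores = np.r_[rng.normal(2.0, 1.0, 200), rng.normal(0.0, 1.0, 800)]
print(f"AUC = {roc_auc_score(labels, scores):.3f}")          # ~0.9 for this separation
```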

  7. Determination of Spatially Resolved Tablet Density and Hardness Using Near-Infrared Chemical Imaging (NIR-CI).

    PubMed

    Talwar, Sameer; Roopwani, Rahul; Anderson, Carl A; Buckner, Ira S; Drennen, James K

    2017-08-01

    Near-infrared chemical imaging (NIR-CI) combines spectroscopy with digital imaging, enabling spatially resolved analysis and characterization of pharmaceutical samples. Hardness and relative density are critical quality attributes (CQA) that affect tablet performance. Intra-sample density or hardness variability can reveal deficiencies in formulation design or the tableting process. This study was designed to develop NIR-CI methods to predict spatially resolved tablet density and hardness. The method was implemented using a two-step procedure. First, NIR-CI was used to develop a relative density/solid fraction (SF) prediction method for pure microcrystalline cellulose (MCC) compacts only. A partial least squares (PLS) model for predicting SF was generated by regressing the spectra of certain representative pixels selected from each image against the compact SF. Pixel selection was accomplished with a threshold based on the Euclidean distance from the median tablet spectrum. Second, micro-indentation was performed on the calibration compacts to obtain hardness values. A univariate model was developed by relating the empirical hardness values to the NIR-CI predicted SF at the micro-indented pixel locations: this model generated spatially resolved hardness predictions for the entire tablet surface.
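    The described two-step flow (pixel selection by Euclidean distance from the median spectrum, then PLS regression of representative spectra against solid fraction) can be sketched as follows. The array shapes, threshold, and data are placeholders, not the study's NIR-CI measurements.

```python
# Illustrative pixel selection + PLS calibration for solid fraction (SF) maps.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
image = rng.normal(size=(50, 50, 100))            # pixels x pixels x wavelengths (fake)
spectra = image.reshape(-1, 100)
median_spec = np.median(spectra, axis=0)
dist = np.linalg.norm(spectra - median_spec, axis=1)
keep = dist < np.percentile(dist, 25)             # keep pixels near the median spectrum

y = rng.uniform(0.7, 0.95, size=keep.sum())       # stand-in compact solid fractions
pls = PLSRegression(n_components=5).fit(spectra[keep], y)
sf_map = pls.predict(spectra).reshape(50, 50)     # spatially resolved SF prediction
```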

  8. Development and Validation of a Computational Model for Predicting the Behavior of Plumes from Large Solid Rocket Motors

    NASA Technical Reports Server (NTRS)

    Wells, Jason E.; Black, David L.; Taylor, Casey L.

    2013-01-01

    Exhaust plumes from large solid rocket motors fired at ATK's Promontory test site carry particulates to high altitudes and typically produce deposits that fall on regions downwind of the test area. As populations and communities near the test facility grow, ATK has become increasingly concerned about the impact of motor testing on those surrounding communities. To assess the potential impact of motor testing on the community and to identify feasible mitigation strategies, it is essential to have a tool capable of predicting plume behavior downrange of the test stand. A software package, called PlumeTracker, has been developed and validated at ATK for this purpose. The code is a point model that offers a time-dependent, physics-based description of plume transport and precipitation. The code can utilize either measured or forecasted weather data to generate plume predictions. Next-Generation Radar (NEXRAD) data and field observations from twenty-three historical motor test fires at Promontory were collected to test the predictive capability of PlumeTracker. Model predictions for plume trajectories and deposition fields were found to correlate well with the collected dataset.

  9. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    EPA Science Inventory

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  10. Evaluation of the Interactionist Model of Socioeconomic Status and Problem Behavior: A Developmental Cascade across Generations

    PubMed Central

    Martin, Monica J.; Conger, Rand D.; Schofield, Thomas J.; Dogan, Shannon J.; Widaman, Keith F.; Donnellan, M. Brent; Neppl, Tricia K.

    2010-01-01

    The current multigenerational study evaluates the utility of the Interactionist Model of Socioeconomic Influence on human development (IMSI) in explaining problem behaviors across generations. The IMSI proposes that the association between socioeconomic status (SES) and human development involves a dynamic interplay that includes both social causation (SES influences human development) and social selection (individual characteristics affect SES). As part of the developmental cascade proposed by the IMSI, the findings from this investigation showed that G1 adolescent problem behavior predicted later G1 SES, family stress, and parental emotional investments, as well as the next generation of children's problem behavior. These results are consistent with a social selection view. Consistent with the social causation perspective, we found a significant relation between G1 SES and family stress, and in turn, family stress predicted G2 problem behavior. Finally, G1 adult SES predicted both material and emotional investments in the G2 child. In turn, emotional investments predicted G2 problem behavior, as did material investments. Some of the predicted pathways varied by G1 parent gender. The results are consistent with the view that processes of both social selection and social causation account for the association between SES and human development. PMID:20576188

  11. An Interoceptive Predictive Coding Model of Conscious Presence

    PubMed Central

    Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.

    2011-01-01

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673

  12. THE PANIC ATTACK–PTSD MODEL: APPLICABILITY TO ORTHOSTATIC PANIC AMONG CAMBODIAN REFUGEES

    PubMed Central

    Hinton, Devon E.; Hofmann, Stefan G.; Pitman, Roger K.; Pollack, Mark H.; Barlow, David H.

    2009-01-01

    This article examines the ability of the “Panic Attack–PTSD Model” to predict how panic attacks are generated and how panic attacks worsen posttraumatic stress disorder (PTSD). The article does so by determining the validity of the Panic Attack–PTSD Model in respect to one type of panic attacks among traumatized Cambodian refugees: orthostatic panic (OP) attacks, that is, panic attacks generated by moving from lying or sitting to standing. Among Cambodian refugees attending a psychiatric clinic, we conducted two studies to explore the validity of the Panic Attack–PTSD Model as applied to OP patients, meaning patients with at least one episode of OP in the previous month. In Study 1, the “Panic Attack–PTSD Model” accurately indicated how OP is seemingly generated: among OP patients (N = 58), orthostasis-associated flashbacks and catastrophic cognitions predicted OP severity beyond a measure of anxious–depressive distress (SCL subscales), and OP severity significantly mediated the effect of anxious–depressive distress on CAPS severity. In Study 2, as predicted by the Panic Attack–PTSD Model, OP had a mediational role in respect to the effect of treatment on PTSD severity: among Cambodian refugees with PTSD and comorbid OP who participated in a CBT study (N = 56), improvement in PTSD severity was partially mediated by improvement in OP severity. PMID:18470741

  13. Toward Big Data Analytics: Review of Predictive Models in Management of Diabetes and Its Complications.

    PubMed

    Cichosz, Simon Lebech; Johansen, Mette Dencker; Hejlesen, Ole

    2015-10-14

    Diabetes is one of the top priorities in medical science and health care management, and an abundance of data and information is available on these patients. Whether data stem from statistical models or complex pattern recognition models, they may be fused into predictive models that combine patient information and prognostic outcome results. Such knowledge could be used in clinical decision support, disease surveillance, and public health management to improve patient care. Our aim was to review the literature and give an introduction to predictive models in screening for and the management of prevalent short- and long-term complications in diabetes. Predictive models have been developed for management of diabetes and its complications, and the number of publications on such models has been growing over the past decade. Multiple logistic regression or a similar linear model is often used for prediction model development, possibly owing to its transparent functionality. Ultimately, for prediction models to prove useful, they must demonstrate impact: their use must generate better patient outcomes. Although extensive effort has been put into building these predictive models, there is a remarkable scarcity of impact studies. © 2015 Diabetes Technology Society.

  14. Developing a Model and Applications for Probabilities of Student Success: A Case Study of Predictive Analytics

    ERIC Educational Resources Information Center

    Calvert, Carol Elaine

    2014-01-01

    This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…

  15. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    ERIC Educational Resources Information Center

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…
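    The residual-gain computation the report describes reduces to a regression of current scores on prior scores followed by within-teacher averaging of residuals. A minimal sketch, with synthetic data and hypothetical groupings:

```python
# Sketch of a value-added estimate from residual gain scores (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
prior = rng.normal(500, 50, 1000)                 # prior-year scores
teacher = rng.integers(0, 20, 1000)               # teacher assignment
current = 0.8 * prior + rng.normal(100, 20, 1000) # current-year scores

slope, intercept = np.polyfit(prior, current, 1)  # predicted score from prior score
residual_gain = current - (slope * prior + intercept)
vam = {t: residual_gain[teacher == t].mean() for t in range(20)}  # teacher effects
```

    Measurement error in `prior` attenuates the regression slope, which is exactly the bias mechanism the report analyzes.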

  16. Atmospheric radiance interpolation for the modeling of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony

    2008-04-01

    The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN™. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs for the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, our method eliminates the problems that usually arise when model conditions are specified by a uniform sampling of environmental parameters. It uses an initial set of radiance vectors concatenated over a set of conditions to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely upon predicted spectra.
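    One plausible way to realize "sampling in the radiance domain" (not necessarily the authors' algorithm) is greedy farthest-point selection of conditions based on the distance between their radiance vectors. In this hypothetical sketch a cheap nonlinear surrogate stands in for an RT code such as MODTRAN:

```python
# Hypothetical sketch: pick conditions whose radiance spectra spread evenly.
import numpy as np

rng = np.random.default_rng(3)
conditions = rng.uniform(0, 1, size=(500, 4))                # e.g. T, H2O, O3, aerosol
radiances = np.tanh(conditions @ rng.normal(size=(4, 64)))   # nonlinear RT surrogate

selected = [0]
for _ in range(19):                                          # choose 20 spanning conditions
    d = np.min(np.linalg.norm(radiances[:, None] - radiances[selected], axis=2), axis=1)
    selected.append(int(np.argmax(d)))                       # farthest point in radiance space
```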

  17. Next-generation prognostic assessment for diffuse large B-cell lymphoma

    PubMed Central

    Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts. PMID:26289217

  19. A model for the generation of two-dimensional surf beat

    USGS Publications Warehouse

    List, Jeffrey H.

    1992-01-01

    A finite difference model predicting group-forced long waves in the nearshore is constructed with two interacting parts: an incident wave model providing time-varying radiation stress gradients across the nearshore, and a long-wave model which solves the equations of motion for the forcing imposed by the incident waves. Both shallow water group-bound long waves and long waves generated by a time-varying breakpoint are simulated. Model-generated time series are used to calculate the cross correlation between wave groups and long waves through the surf zone. The cross-correlation signal first observed by Tucker (1950) is well predicted. For the first time, this signal is decomposed into the contributions from the two mechanisms of leaky mode forcing. Results show that the cross-correlation signal can be explained by bound long waves which are amplified, though strongly modified, through the surf zone before reflection from the shoreline. The breakpoint-forced long waves are added to the bound long waves at a phase of π/2 and are a secondary contribution owing to their relatively small size.
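    The Tucker-type diagnostic referred to above is a lagged cross-correlation between the wave-group envelope and the long-wave signal. A minimal sketch with synthetic, anti-phase signals (the bound-wave idealization), not the paper's model output:

```python
# Sketch: lagged cross-correlation between a group envelope and a long wave.
import numpy as np

t = np.arange(0, 600, 0.5)
envelope = 1 + 0.5 * np.cos(2 * np.pi * t / 60)        # 60 s wave groups (synthetic)
bound_wave = -0.1 * (envelope - envelope.mean())       # idealized anti-phase bound wave

lags = np.arange(-200, 201)
xcorr = [np.corrcoef(envelope[max(0, k):len(t) + min(0, k)],
                     bound_wave[max(0, -k):len(t) - max(0, k)])[0, 1] for k in lags]
print(min(xcorr))  # strong negative trough near zero lag, as in Tucker's signal
```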

  20. The Standard Model Algebra - a summary

    NASA Astrophysics Data System (ADS)

    Stoica, Ovidiu Cristinel

    2017-08-01

    A generation of leptons and quarks and the gauge symmetries of the Standard Model can be obtained from the Clifford algebra ℂℓ₆. An instance of ℂℓ₆ is implicitly generated by the Dirac algebra combined with the electroweak symmetry, while the color symmetry gives another instance of ℂℓ₆ with a Witt decomposition. The minimal mathematical model proposed here results by identifying the two instances of ℂℓ₆. The left ideal decomposition generated by the Witt decomposition represents the leptons and quarks, and their antiparticles. The SU(3)c and U(1)em symmetries of the SM are the symmetries of this ideal decomposition. The patterns of electric charges, colors, chirality, weak isospins, and hypercharges follow from this, without predicting additional particles or forces, or proton decay. The electroweak symmetry is present in its broken form, due to the geometry. The predicted Weinberg angle is given by sin²θW = 0.25. The model shares common features with previously known models, particularly with Chisholm and Farwell, 1996, Trayling and Baylis, 2004, and Furey, 2016.

  1. The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.

    PubMed

    Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina

    2018-05-23

    Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.

  2. Protein and oil composition predictions of single soybeans by transmission Raman spectroscopy.

    PubMed

    Schulmerich, Matthew V; Walsh, Michael J; Gelber, Matthew K; Kong, Rong; Kole, Matthew R; Harrison, Sandra K; McKinney, John; Thompson, Dennis; Kull, Linda S; Bhargava, Rohit

    2012-08-22

    The soybean industry requires rapid, accurate, and precise technologies for the analyses of seed/grain constituents. While the current gold standard for nondestructive quantification of economically and nutritionally important soybean components is near-infrared spectroscopy (NIRS), emerging technology may provide viable alternatives and lead to next generation instrumentation for grain compositional analysis. In principle, Raman spectroscopy provides the necessary chemical information to generate models for predicting the concentration of soybean constituents. In this communication, we explore the use of transmission Raman spectroscopy (TRS) for nondestructive soybean measurements. We show that TRS uses the light scattering properties of soybeans to effectively homogenize the heterogeneous bulk of a soybean for representative sampling. Working with over 1000 individual intact soybean seeds, we developed a simple partial least-squares model for predicting oil and protein content nondestructively. We find TRS to have a root-mean-square error of prediction (RMSEP) of 0.89% for oil measurements and 0.92% for protein measurements. In both calibration and validation sets, the predictive capabilities of the model were similar to the error in the reference methods.

  3. The interpretation of hard X-ray polarization measurements in solar flares

    NASA Technical Reports Server (NTRS)

    Leach, J.; Emslie, A. G.; Petrosian, V.

    1983-01-01

    Observations of polarization of moderately hard X-rays in solar flares are reviewed and compared with the predictions of recent detailed modeling of hard X-ray bremsstrahlung production by non-thermal electrons. The recent advances in the complexity of the modeling lead to substantially lower predicted polarizations than in earlier models and more fully highlight how various parameters play a role in determining the polarization of the radiation field. The new predicted polarizations are comparable to those predicted by thermal modeling of solar flare hard X-ray production, and both are in agreement with the observations. In the light of these results, new polarization observations with current generation instruments are proposed which could be used to discriminate between non-thermal and thermal models of hard X-ray production in solar flares.

  4. An SOA model for toluene oxidation in the presence of inorganic aerosols.

    PubMed

    Cao, Gang; Jang, Myoseon

    2010-01-15

    A predictive model for secondary organic aerosol (SOA) formation including both partitioning and heterogeneous reactions is explored for the SOA produced from the oxidation of toluene in the presence of inorganic seed aerosols. The predictive SOA model comprises the explicit gas-phase chemistry of toluene, gas-particle partitioning, and heterogeneous chemistry. The resulting products from the explicit gas-phase chemistry are lumped into several classes of chemical species based on their vapor pressure and reactivity for heterogeneous reactions. Both the gas-particle partitioning coefficient and the heterogeneous reaction rate constant of each lumped gas-phase product are theoretically determined using group contribution and molecular structure-reactivity. In the SOA model, the predicted SOA mass is decoupled into partitioning (OM(P)) and heterogeneous aerosol production (OM(H)). OM(P) is estimated from the SOA partitioning model developed by Schell et al. (J. Geophys. Res. 2001, 106, 28275-28293) that has been used in a regional air quality model (CMAQ 4.7). OM(H) is predicted from the heterogeneous SOA model developed by Jang et al. (Environ. Sci. Technol. 2006, 40, 3013-3022). The SOA model is evaluated using experimental SOA data generated in a 2 m(3) indoor Teflon film chamber under various experimental conditions (e.g., humidity, inorganic seed compositions, NO(x) concentrations). The SOA model reasonably predicts not only the gas-phase chemistry, such as the ozone formation, the conversion of NO to NO(2), and the toluene decay, but also the SOA production. The model predicted that the OM(H) fraction of the total toluene SOA mass increases as NO(x) concentrations decrease: 0.73-0.83 at low NO(x) levels and 0.17-0.47 at middle and high NO(x) levels for SOA experiments with high initial toluene concentrations. Our study also finds a significant increase in the OM(H) mass fraction in the SOA generated with low initial toluene concentrations, compared to those with high initial toluene concentrations. On average, more than a 1-fold increase in OM(H) fraction is observed when the comparison is made between SOA experiments with 40 ppb toluene and those with 630 ppb toluene. Such an observation implies that heterogeneous reactions of the second-generation products of toluene oxidation can contribute considerably to the total SOA mass under atmospherically relevant conditions.
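    The decoupling OM = OM(P) + OM(H) can be caricatured with a Pankow-type equilibrium partitioning term plus a first-order irreversible heterogeneous uptake. This is a hedged sketch only; the Kp, k_het, and seed values are placeholders, not the paper's lumped-species parameters.

```python
# Hypothetical sketch: SOA mass split into partitioning and heterogeneous parts.
def soa_mass(c_gas, kp, k_het, om_seed, dt, n_steps):
    """Return (OM_P, OM_H) in ug/m3 after n_steps of length dt (hours)."""
    om_p = om_h = 0.0
    for _ in range(n_steps):
        om = om_seed + om_p + om_h
        frac_part = kp * om / (1.0 + kp * om)       # Pankow-type partitioned fraction
        om_p = c_gas * frac_part                    # equilibrium partitioning (OM_P)
        het = c_gas * (1.0 - frac_part) * k_het * dt
        c_gas -= het                                # irreversible heterogeneous uptake
        om_h += het                                 # accumulated OM_H
    return om_p, om_h

print(soa_mass(c_gas=50.0, kp=0.02, k_het=0.05, om_seed=5.0, dt=0.5, n_steps=24))
```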

  5. Image-based modeling and characterization of RF ablation lesions in cardiac arrhythmia therapy

    NASA Astrophysics Data System (ADS)

    Linte, Cristian A.; Camp, Jon J.; Rettmann, Maryam E.; Holmes, David R.; Robb, Richard A.

    2013-03-01

    In spite of significant efforts to enhance guidance for catheter navigation, limited research has been conducted to consider the changes that occur in the tissue during ablation as a means to provide useful feedback on the progression of therapy delivery. We propose a technique to visualize lesion progression and monitor the effects of the RF energy delivery using a surrogate thermal ablation model. The model incorporates both physical and physiological tissue parameters, and uses heat transfer principles to estimate the temperature distribution in the tissue and the geometry of the generated lesion in near real time. The ablation model has been calibrated and evaluated using ex vivo beef muscle tissue in a clinically relevant ablation protocol. To validate the model, the predicted temperature distribution was assessed against that measured directly using fiberoptic temperature probes inserted in the tissue. Moreover, the model-predicted lesions were compared to the lesions observed in the post-ablation digital images. Results showed an agreement within 5°C between the model-predicted and experimentally measured tissue temperatures, as well as comparable predicted and observed lesion characteristics and geometry. These results suggest that the proposed technique is capable of providing reasonably accurate and sufficiently fast representations of the created RF ablation lesions, to generate lesion maps in near real time. These maps can be used to guide the placement of successive lesions to ensure continuous and enduring suppression of the arrhythmic pathway.
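    A one-dimensional surrogate of the heat-transfer idea can be written in a few lines: explicit finite-difference conduction from a heated node, with lesion extent read off as the tissue exceeding an assumed damage threshold (50 °C here). The geometry, source temperature, and threshold are illustrative, not the calibrated model.

```python
# Illustrative 1-D explicit finite-difference conduction with a lesion threshold.
import numpy as np

alpha, dx, dt = 1.4e-7, 1e-3, 0.1       # thermal diffusivity (m2/s), 1 mm grid, 0.1 s step
assert alpha * dt / dx**2 <= 0.5        # explicit-scheme stability criterion
T = np.full(100, 37.0)                  # 10 cm of tissue at body temperature

for _ in range(6000):                   # 10 minutes of ablation
    T[0] = 80.0                         # electrode node held at an assumed source temperature
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

lesion_mm = np.argmin(T >= 50.0)        # first node below the 50 C damage threshold
print(f"predicted lesion depth ~ {lesion_mm} mm")
```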

  6. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables, and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
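    The scoring step in RIVPACS-style reference condition models is an O/E ratio: taxa observed at a test site divided by the number expected, i.e. the summed model probabilities of capture. A minimal sketch with invented taxa and probabilities:

```python
# Sketch of an O/E (observed/expected) score from model capture probabilities.
expected_probs = {"Baetidae": 0.9, "Heptageniidae": 0.7,
                  "Chironomidae": 0.95, "Perlidae": 0.4}   # invented values
observed = {"Baetidae", "Chironomidae"}                    # taxa found at the test site

E = sum(expected_probs.values())
O = sum(1 for taxon in expected_probs if taxon in observed)
print(f"O/E = {O / E:.2f}")   # values well below 1 suggest impairment vs. reference
```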

  7. Next-Generation Machine Learning for Biological Networks.

    PubMed

    Camacho, Diogo M; Collins, Katherine M; Powers, Rani K; Costello, James C; Collins, James J

    2018-06-14

    Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
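    Two of the nonparametric learners named above can be compared on a synthetic market-impact-style regression in a few lines; the features, target function, and hyperparameters below are invented, not the paper's Bloomberg-derived variables.

```python
# Sketch: SVR vs. Gaussian process on a synthetic impact-cost regression.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(400, 3))          # e.g. order size, volatility, spread
y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=400)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for model in (SVR(C=10.0), GaussianProcessRegressor(alpha=1e-2)):
    model.fit(Xtr, ytr)
    print(type(model).__name__, mean_absolute_error(yte, model.predict(Xte)))
```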

  10. NEXT GENERATION MULTIMEDIA/MULTIPATHWAY EXPOSURE MODELING

    EPA Science Inventory

    The Stochastic Human Exposure and Dose Simulation model for pesticides (SHEDS-Pesticides) supports the efforts of EPA to better understand human exposures and doses to multimedia, multipathway pollutants. It is a physically-based, probabilistic computer model that predicts, for u...

  11. Rapid recipe formulation for plasma etching of new materials

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.

    2016-03-01

    A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared to, and found superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
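    A single-component Metropolis-Hastings sampler of the kind named above fits in a short sketch: one kinetic parameter of a Langmuir-type rate law is inferred from a few noisy measurements. The model form, data, and proposal width are hypothetical.

```python
# Minimal single-component Metropolis-Hastings sketch for one kinetic parameter.
import numpy as np

rng = np.random.default_rng(5)
flux = np.array([1.0, 2.0, 4.0, 8.0])                        # ion flux (arb. units)
rate_obs = 3.0 * flux / (1.0 + 0.5 * flux) + rng.normal(0, 0.05, 4)  # synthetic data

def log_post(k, sigma=0.05):
    if k <= 0:
        return -np.inf                                       # flat positive prior
    pred = 3.0 * flux / (1.0 + k * flux)                     # Langmuir-type saturation
    return -0.5 * np.sum((rate_obs - pred) ** 2) / sigma**2  # Gaussian likelihood

k, chain = 1.0, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)                           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop                                             # accept
    chain.append(k)
print("posterior mean k =", np.mean(chain[1000:]))           # burn-in discarded
```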

  12. HPHT reservoir evolution: a case study from Jade and Judy fields, Central Graben, UK North Sea

    NASA Astrophysics Data System (ADS)

    di Primio, Rolando; Neumann, Volkmar

    2008-09-01

    3D basin modelling of a study area in Quadrant 30, UK North Sea was performed in order to elucidate the burial, thermal, pressure and hydrocarbon generation, migration and accumulation history in the Jurassic and Triassic high-pressure high-temperature sequences. Calibration data, including reservoir temperatures, pressures, petroleum compositional data, vitrinite reflectance profiles and published fluid inclusion data, were used to constrain model predictions. The comparison of different pressure-generating processes indicated that only when gas generation is taken into account as a pressure-generating mechanism do both the predicted present-day and palaeo-pressure evolution match the available calibration data. Compositional modelling of hydrocarbon generation, migration and accumulation also reproduced present and palaeo bulk fluid properties such as the reservoir fluid gas-to-oil ratios. The reconstruction of the filling histories of both reservoirs indicates that both were first charged around 100 Ma ago and initially contained a two-phase system in which gas dominated volumetrically. Upon burial, the reservoir fluid composition evolved to higher GORs and became undersaturated as a function of increasing pore pressure up to the present-day situation. Our results indicate that gas compositions must be taken into account when calculating the volumetric effect of gas generation on overpressure.

  13. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.
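    The nestedness check at the heart of the comparison can be sketched as the fraction of the realized prediction that falls inside the fundamental prediction on a shared suitability grid. The grids below are random placeholders, not the turtle models' output.

```python
# Sketch: how nested is a realized-distribution map within a fundamental one?
import numpy as np

rng = np.random.default_rng(7)
fundamental = rng.uniform(size=(100, 100)) < 0.5     # thermal-niche suitability (fake)
realized = rng.uniform(size=(100, 100)) < 0.3        # occurrence-based suitability (fake)

nested = (realized & fundamental).sum() / realized.sum()
print(f"fraction of realized range inside fundamental range: {nested:.2f}")
```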

  14. Neonatal Candidiasis: Epidemiology, Risk Factors, and Clinical Judgment

    PubMed Central

    Benjamin, Daniel K.; Stoll, Barbara J.; Gantz, Marie G.; Walsh, Michele C.; Sanchez, Pablo J.; Das, Abhik; Shankaran, Seetha; Higgins, Rosemary D.; Auten, Kathy J.; Miller, Nancy A.; Walsh, Thomas J.; Laptook, Abbot R.; Carlo, Waldemar A.; Kennedy, Kathleen A.; Finer, Neil N.; Duara, Shahnaz; Schibler, Kurt; Chapman, Rachel L.; Van Meurs, Krisa P.; Frantz, Ivan D.; Phelps, Dale L.; Poindexter, Brenda B.; Bell, Edward F.; O’Shea, T. Michael; Watterberg, Kristi L.; Goldberg, Ronald N.

    2011-01-01

    OBJECTIVE Invasive candidiasis is a leading cause of infection-related morbidity and mortality in extremely low-birth-weight (<1000 g) infants. We quantify risk factors predicting infection in high-risk premature infants and compare clinical judgment with a prediction model of invasive candidiasis. METHODS The study involved a prospective observational cohort of infants <1000 g birth weight at 19 centers of the NICHD Neonatal Research Network. At each sepsis evaluation, clinical information was recorded, cultures obtained, and clinicians prospectively recorded their estimate of the probability of invasive candidiasis. Two models were generated with invasive candidiasis as their outcome: 1) a model of potentially modifiable risk factors and 2) a clinical model at the time of blood culture to predict candidiasis. RESULTS Invasive candidiasis occurred in 137/1515 (9.0%) infants and was documented by positive culture from ≥1 of these sources: blood (n=96), cerebrospinal fluid (n=9), urine obtained by catheterization (n=52), or other sterile body fluid (n=10). Mortality did not differ between infants with a positive blood culture and those with an isolated positive urine culture. Incidence varied from 2-28% at the 13 centers enrolling ≥50 infants. Potentially modifiable risk factors (model 1) included central catheter, broad-spectrum antibiotics (e.g., third-generation cephalosporins), intravenous lipid emulsion, endotracheal tube, and antenatal antibiotics. The clinical prediction model (model 2) had an area under the receiver operating characteristic curve of 0.79, and was superior to clinician judgment (0.70) in predicting subsequent invasive candidiasis. Performance of clinical judgment did not vary significantly with level of training. CONCLUSION Prior antibiotics, presence of a central catheter, endotracheal tube, and center were strongly associated with invasive candidiasis. Modeling was more accurate in predicting invasive candidiasis than clinical judgment. PMID:20876174

  15. Generation, estimation, utilization, availability and compatibility aspects of geodetic and meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luetzow, H.B.v.

    1983-08-01

    Following an introduction, the paper discusses in section 2 the collection or generation of final geodetic data from conventional surveys, satellite observations, satellite altimetry, the Global Positioning System, and moving base gravity gradiometers. Section 3 covers data utilization and accuracy aspects including gravity programmed inertial positioning and subterraneous mass detection. Section 4 addresses the usefulness and limitation of the collocation method of physical geodesy. Section 5 is concerned with the computation of classical climatological data. In section 6, meteorological data assimilation is considered. Section 7 deals with correlated aspects of initial data generation with emphasis on initial wind field determination, parameterized and classical hydrostatic prediction models, non-hydrostatic prediction, computational networks, and computer capacity. The paper concludes that geodetic and meteorological data are expected to become increasingly more diversified and voluminous both regionally and globally, that their general availability will be more or less restricted for some time to come, that their quality and quantity are subject to change, and that meteorological data generation, accuracy and density have to be considered in conjunction with advanced as well as cost-effective numerical weather prediction models and associated computational efforts.

  16. An implementation of an aeroacoustic prediction model for broadband noise from a vertical axis wind turbine using a CFD informed methodology

    NASA Astrophysics Data System (ADS)

    Botha, J. D. M.; Shahroki, A.; Rice, H.

    2017-12-01

    This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.

  17. Next Generation Community Based Unified Global Modeling System Development and Operational Implementation Strategies at NCEP

    NASA Astrophysics Data System (ADS)

    Tallapragada, V.

    2017-12-01

    NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed Sphere (FV3) Dynamic Core at National Centers for Environmental Prediction (NCEP), making a leap-step advancement in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on Earth System Modeling Framework (ESMF). A more sophisticated coupling among various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual goal of unifying global and regional models will enable operational global models operating at convective resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on creation of a Common Community Physics Package (CCPP) and Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established with specific roles/responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resources allocations and prioritization. This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications with special emphasis on implementation of NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.

  18. Accounting for spatial variation of trabecular anisotropy with subject-specific finite element modeling moderately improves predictions of local subchondral bone stiffness at the proximal tibia.

    PubMed

    Nazemi, S Majid; Kalajahi, S Mehrdad Hosseini; Cooper, David M L; Kontulainen, Saija A; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-07-05

    Previously, a finite element (FE) model of the proximal tibia was developed and validated against experimentally measured local subchondral stiffness. This model offered modest predictions of stiffness (R² = 0.77, normalized root mean squared error (RMSE%) = 16.6%). Trabecular bone, though, was modeled with isotropic material properties despite its orthotropic anisotropy. The objective of this study was to identify the anisotropic FE modeling approach which best predicted (with the largest explained variance and least error) local subchondral bone stiffness at the proximal tibia. Local stiffness was measured at the subchondral surface of 13 medial/lateral tibial compartments using in situ macro indentation testing. An FE model of each specimen was generated assuming uniform anisotropy with 14 different combinations of cortical- and tibial-specific density-modulus relationships taken from the literature. Two FE models of each specimen were also generated which accounted for the spatial variation of trabecular bone anisotropy directly from clinical CT images using the grey-level structure tensor and Cowin's fabric-elasticity equations. Stiffness was calculated using FE and compared to measured stiffness in terms of R² and RMSE%. The uniform anisotropic FE model explained 53-74% of the measured stiffness variance, with RMSE% ranging from 12.4 to 245.3%. The models which accounted for spatial variation of trabecular bone anisotropy predicted 76-79% of the variance in stiffness with RMSE% of 11.2-11.5%. Of the 16 finite element models evaluated in this study, the combination of Snyder and Schneider (for cortical bone) and Cowin's fabric-elasticity equations (for trabecular bone) best predicted local subchondral bone stiffness. Copyright © 2017 Elsevier Ltd. All rights reserved.
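    The density-modulus mapping step common to all 16 compared models takes the generic power-law form E = a·ρ^b applied element by element. A minimal sketch; the coefficients below are placeholders, not the cited cortical- or tibial-specific relationships.

```python
# Sketch: element-wise power-law density-modulus mapping for an FE model.
import numpy as np

def modulus_MPa(rho_app, a=6850.0, b=1.49):
    """E = a * rho^b, the generic form of the compared relationships (placeholder a, b)."""
    return a * rho_app**b

rho = np.array([0.2, 0.5, 1.1, 1.8])   # CT-derived apparent density (g/cm3) per element
print(modulus_MPa(rho))                # moduli assigned to the corresponding elements
```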

  19. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Luis; Marchante, Ruth; Cony, Marco

    2010-10-15

    Due to the strong increase of solar power generation, predictions of incoming solar energy are acquiring more importance. Photovoltaic and solar thermal are the main sources of electricity generation from solar energy. In the case of solar thermal energy plants with an energy storage system, their management and operation need reliable predictions of solar irradiance with the same temporal resolution as the temporal capacity of the back-up system. These plants can then work like a conventional power plant and compete in the energy stock market, avoiding intermittence in electricity production. This work presents a comparison of statistical models based on time series applied to predict half-daily values of global solar irradiance with a temporal horizon of 3 days. Half-daily values consist of accumulated hourly global solar irradiance from sunrise to solar noon and from solar noon until dusk for each day. The ground solar radiation dataset used belongs to stations of the Spanish National Weather Service (AEMet). The models tested are autoregressive, neural network and fuzzy logic models. Because the half-daily solar irradiance time series is non-stationary, it has been necessary to transform it into two new stationary variables (clearness index and lost component) which are used as input to the predictive models. Improvement in terms of RMSD of the models tested is compared against a model based on persistence. The validation process shows that all models tested improve on persistence. The best approach to forecasting half-daily values of solar irradiance is neural network models with the lost component as input, except at the Lerida station, where models based on the clearness index have less uncertainty because this magnitude has a linear behaviour and is easier for models to simulate. (author)
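    The benchmarking step, comparing any forecast against persistence by RMSD on a stationarized series, can be sketched with a simple AR(1) model standing in for the statistical models tested. The series below is synthetic, a stand-in for the clearness index.

```python
# Sketch: persistence vs. AR(1) forecast, compared by RMSD (synthetic series).
import numpy as np

rng = np.random.default_rng(6)
x = np.empty(500)
x[0] = 0.0
for i in range(1, 500):
    x[i] = 0.8 * x[i - 1] + rng.normal(0, 0.1)        # AR(1) ground truth

phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)    # least-squares AR(1) coefficient
rmsd = lambda pred, obs: np.sqrt(np.mean((pred - obs) ** 2))
print("persistence:", rmsd(x[:-1], x[1:]))            # forecast = last observed value
print("AR(1):      ", rmsd(phi * x[:-1], x[1:]))      # beats persistence here
```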

  20. Covariant spectator theory of np scattering: Deuteron quadrupole moment

    DOE PAGES

    Gross, Franz

    2015-01-26

    The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high-precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N³LO.

  1. Factors Associated with Postoperative Diabetes Insipidus after Pituitary Surgery.

    PubMed

    Faltado, Antonio L; Macalalad-Josue, Anna Angelica; Li, Ralph Jason S; Quisumbing, John Paul M; Yu, Marc Gregory Y; Jimeno, Cecilia A

    2017-12-01

    Determining risk factors for diabetes insipidus (DI) after pituitary surgery is important in improving patient care. Our objective was to determine the factors associated with DI after pituitary surgery. We reviewed records of patients who underwent pituitary surgery from 2011 to 2015 at Philippine General Hospital. Patients with preoperative DI were excluded. Multiple logistic regression analysis was performed and a predictive model was generated. The discrimination abilities of the predictive model and individual variables were assessed using the receiver operating characteristic curve. A total of 230 patients were included. The rate of postoperative DI was 27.8%. Percent change in serum Na (odds ratio [OR], 1.39; 95% confidence interval [CI], 1.15 to 1.69); preoperative serum Na (OR, 1.19; 95% CI, 1.02 to 1.40); and performance of craniotomy (OR, 5.48; 95% CI, 1.60 to 18.80) remained significantly associated with an increased incidence of postoperative DI, while percent change in urine specific gravity (USG) (OR, 0.53; 95% CI, 0.33 to 0.87) and meningioma on histopathology (OR, 0.05; 95% CI, 0.04 to 0.70) were significantly associated with a decreased incidence. The predictive model generated has good diagnostic accuracy in predicting postoperative DI, with an area under the curve of 0.83. Greater percent change in serum Na, higher preoperative serum Na, and performance of craniotomy significantly increased the likelihood of postoperative DI, while percent change in USG and meningioma on histopathology were significantly associated with a decreased incidence. The predictive model can be used to generate a scoring system for estimating the risk of postoperative DI. Copyright © 2017 Korean Endocrine Society
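    Turning a fitted logistic model into a bedside scoring system, as the authors propose, is commonly done by scaling the coefficients and rounding to integer points. The coefficients below are invented placeholders, not the paper's fitted values.

```python
# Sketch: integer risk points from (hypothetical) logistic regression coefficients.
coefs = {"pct_change_Na": 0.33, "preop_Na": 0.17, "craniotomy": 1.70,
         "pct_change_USG": -0.63, "meningioma": -3.00}   # invented log-odds per unit

ref = min(abs(v) for v in coefs.values())                # smallest effect = 1 point
points = {k: int(round(v / ref)) for k, v in coefs.items()}
print(points)   # a patient's score is the sum of points for the factors present
```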

  2. A Deep Learning Approach to LIBS Spectroscopy for Planetary Applications

    NASA Astrophysics Data System (ADS)

    Mullen, T. H.; Parente, M.; Gemp, I.; Dyar, M. D.

    2017-12-01

    The ChemCam instrument on the Curiosity rover has collected >440,000 laser-induced breakdown spectra (LIBS) from 1500 different geological targets since 2012. The team uses a pipeline of preprocessing and partial least squares (PLS) techniques to predict compositions of surface materials [1]. Unfortunately, such multivariate techniques are plagued by hard-to-meet assumptions involving constant hyperparameter tuning to specific elements and the amount of training data available; if the whole distribution of data is not seen, the method will overfit to the training data and generalizability will suffer. The rover has only 10 calibration targets on board, which represent a small subset of the geochemical samples the rover is expected to investigate. Deep neural networks have been used to bypass these issues in other fields. Semi-supervised techniques allow researchers to utilize small labeled datasets together with vast amounts of unlabeled data. One example is the variational autoencoder, a semi-supervised generative model in the form of a deep neural network. The autoencoder assumes that LIBS spectra are generated from a distribution conditioned on the elemental compositions in the sample and some nuisance variables. The system is broken into two models: one that predicts elemental composition from the spectra and one that generates spectra from compositions that may or may not be seen in the training set. The synthesized spectra show strong agreement with geochemical conventions for expressing specific compositions. The composition predictions show improved generalizability compared with PLS. Deep neural networks have also been used to transfer knowledge from one dataset to another to solve unlabeled-data problems. Given that vast amounts of laboratory LIBS spectra have been obtained in the past few years, it is now feasible to train a deep net to predict elemental composition from lab spectra. Transfer learning (manifold alignment or calibration transfer) [2] is then used to fine-tune the model from terrestrial lab data to Martian field data. Neural networks and generative models provide the flexibility needed for elemental composition prediction and unseen-spectra synthesis. [1] Clegg S. et al. (2016) Spectrochim. Acta B, 129, 64-85. [2] Boucher T. et al. (2017) J. Chemom., 31, e2877.
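
    A compact sketch of the two-part architecture described above, in PyTorch: an encoder that maps spectra to compositions and a decoder that synthesizes spectra from compositions. This is a plain (non-variational) simplification; the channel and element counts, layer sizes, and loss are assumptions, not the authors' model.

    ```python
    import torch
    import torch.nn as nn

    N_CHANNELS, N_ELEMENTS = 6144, 8      # assumed spectrum length / oxide count

    class SpectraAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: spectrum -> composition (sums to 1 via softmax)
            self.encoder = nn.Sequential(
                nn.Linear(N_CHANNELS, 256), nn.ReLU(),
                nn.Linear(256, N_ELEMENTS), nn.Softmax(dim=-1))
            # Decoder: composition -> synthesized spectrum
            self.decoder = nn.Sequential(
                nn.Linear(N_ELEMENTS, 256), nn.ReLU(),
                nn.Linear(256, N_CHANNELS))

        def forward(self, spectrum):
            composition = self.encoder(spectrum)
            return composition, self.decoder(composition)

    model = SpectraAutoencoder()
    spectra = torch.rand(32, N_CHANNELS)             # hypothetical batch
    comp, recon = model(spectra)
    loss = nn.functional.mse_loss(recon, spectra)    # add a supervised term
    loss.backward()                                  # on comp when labels exist
    ```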

  3. Using Apex To Construct CPM-GOMS Models

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2006-01-01

    A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well. While these results are promising, there is a need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
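
    The resource interleaving that Apex automates is, at bottom, a critical-path computation over a DAG of cognitive, perceptual, and motor operators, which is what a PERT chart visualizes. A toy version with hypothetical operators and durations (this is not the Apex implementation):

    ```python
    durations = {                 # hypothetical operator durations, ms
        "perceive-target": 100, "init-move": 50,
        "move-mouse": 300, "verify-pos": 100, "click": 150}
    deps = {                      # operator -> prerequisite operators
        "init-move": ["perceive-target"],
        "move-mouse": ["init-move"],
        "verify-pos": ["perceive-target"],
        "click": ["move-mouse", "verify-pos"]}

    finish = {}
    def earliest_finish(op):
        # Earliest finish = longest prerequisite chain plus own duration;
        # operators on separate resources overlap instead of adding.
        if op not in finish:
            start = max((earliest_finish(d) for d in deps.get(op, [])), default=0)
            finish[op] = start + durations[op]
        return finish[op]

    print("predicted task time:", max(earliest_finish(op) for op in durations), "ms")
    ```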

  4. Prediction of household and commercial BMW generation according to socio-economic and other factors for the Dublin region.

    PubMed

    Purcell, M; Magette, W L

    2009-04-01

    Both planning and design of integrated municipal solid waste management systems require accurate prediction of waste generation. This research predicted the quantity and distribution of biodegradable municipal waste (BMW) generation within a diverse 'landscape' of residential areas, as well as from a variety of commercial establishments (restaurants, hotels, hospitals, etc.) in the Dublin (Ireland) region. Socio-economic variables, housing types, and the sizes and main activities of commercial establishments were hypothesized as the key determinants contributing to the spatial variability of BMW generation. A geographical information system (GIS) 'model' of BMW generation was created using ArcMap, a component of ArcGIS 9. Statistical data including socio-economic status and household size were mapped on an electoral district basis. Historical research and data from scientific literature were used to assign BMW generation rates to residential and commercial establishments. These predictions were combined to give overall BMW estimates for the region, which can aid waste planning and policy decisions. This technique will also aid the design of future waste management strategies, leading to policy and practice alterations as a function of demographic changes and development. The household prediction technique gave a more accurate overall estimate of household waste generation than did the social class technique. Both techniques produced estimates that differed from the reported local authority data; however, given that local authority reported figures for the region are below the national average, with some of the waste generated from apartment complexes being reported as commercial waste, predictions arising from this research are believed to be closer to actual waste generation than a comparison to reported data would suggest. By changing the input data, this estimation tool can be adapted for use in other locations. Although focusing on waste in the Dublin region, this method of waste prediction can have significant potential benefits if a universal method can be found to apply it effectively.
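
    The core aggregation step (unit generation rates applied to mapped counts, then summed per district) is simple to illustrate; the rates and counts below are invented, not the study's figures.

    ```python
    # Assumed BMW generation rates, kg per unit per week (illustrative only)
    rates = {"household": 5.2, "restaurant": 410.0, "hotel": 890.0}

    districts = {                         # hypothetical electoral districts
        "District A": {"household": 4200, "restaurant": 35, "hotel": 6},
        "District B": {"household": 5100, "restaurant": 12, "hotel": 2},
    }

    for name, counts in districts.items():
        total_kg = sum(rates[kind] * n for kind, n in counts.items())
        print(f"{name}: {total_kg / 1000:.1f} tonnes BMW per week")
    ```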

  5. Evaluating approaches to find exon chains based on long reads.

    PubMed

    Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli

    2018-05-01

    Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.
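
    The underlying data structure can be illustrated with a toy splicing graph: exons are nodes, observed junctions are arcs, and a long read's aligned blocks are reduced to an exon chain when block boundaries fall near annotated exon boundaries. Coordinates, tolerance, and junctions here are hypothetical.

    ```python
    exons = {1: (100, 200), 2: (300, 380), 3: (500, 650)}   # id -> (start, end)
    junctions = {1: [2, 3], 2: [3]}                         # splicing-graph arcs

    def exon_chain(aligned_blocks, tol=5):
        # Map each aligned block to the exon whose boundaries it matches
        # within `tol` bases (mis-set boundaries create spurious nodes).
        chain = []
        for bstart, bend in aligned_blocks:
            for eid, (estart, eend) in exons.items():
                if abs(bstart - estart) <= tol and abs(bend - eend) <= tol:
                    chain.append(eid)
        return chain

    def supported(chain):
        # A chain is consistent if every consecutive pair is a known junction.
        return all(b in junctions.get(a, []) for a, b in zip(chain, chain[1:]))

    read_blocks = [(102, 199), (500, 648)]       # read skipping exon 2
    chain = exon_chain(read_blocks)
    print(chain, supported(chain))               # -> [1, 3] True
    ```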

  6. Multi-Temporal Decomposed Wind and Load Power Models for Electric Energy Systems

    NASA Astrophysics Data System (ADS)

    Abdel-Karim, Noha

    This thesis is motivated by the recognition that sources of uncertainty in electric power systems are multifold and may have potentially far-reaching effects. In the past, only the system load forecast was considered to be the main challenge. More recently, however, the uncertain price of electricity and the hard-to-predict power produced by renewable resources, such as wind and solar, are making the operating and planning environment much more challenging. Near-real-time power imbalances are compensated by means of frequency regulation and generally require fast-responding, costly resources. Because of this, more accurate forecasts and look-ahead scheduling would reduce the need for expensive power balancing. Similarly, long-term planning and seasonal maintenance need to take into account the long-term demand forecast as well as how short-term generation scheduling is done. The better the demand forecast, the more efficient planning will be. Moreover, computer algorithms for scheduling and planning are essential in helping system operators decide what to schedule and planners what to build. This is needed given the overall complexity created by differing abilities to adjust the power output of generation technologies, by demand uncertainties, and by network delivery constraints. Given the growing presence of major uncertainties, it is likely that the main control applications will use more probabilistic approaches. Today's predominantly deterministic methods will be replaced by methods which account for key uncertainties as decisions are made. It is well understood that although demand and wind power cannot be predicted with very high accuracy, taking predictions into consideration and scheduling in a look-ahead way over several time horizons generally results in more efficient and reliable utilization than when decisions are made assuming deterministic, often worst-case scenarios. This change in approach will ultimately require new electricity market rules capable of providing the right incentives to manage uncertainties and of differentiating various technologies according to the rate at which they can respond to ever-changing conditions. Given the overall need for modeling uncertainties in electric energy systems, we consider in this thesis the problem of multi-temporal modeling of wind and demand power in particular. Historical data are used to derive prediction models for several future time horizons. The short-term prediction models derived can be used for look-ahead economic dispatch and unit commitment, while the long-term annual predictive models can be used for investment planning. As expected, the accuracy of such predictive models depends on the time horizons over which the predictions are made, as well as on the nature of the uncertain signals. It is shown that predictive models obtained using the same general modeling approaches result in different accuracy for wind than for demand power. In what follows, we introduce several models which capture qualitatively different patterns, ranging from hourly to annual. We first transform historical time-stamped data into the Fourier Transform (FT) representation. The frequency-domain representation is used to decompose the wind and load power signals and to derive predictive models relevant for short-term and long-term predictions using spectral extraction techniques. The short-term results are then interpreted as a Linear Predictive Coding (LPC) model, and their accuracy is analyzed.
Next, a new Markov-Based Sensitivity Model (MBSM) for short-term prediction is proposed, and the dispatch costs of uncertainty under different predictive models are developed and compared. Moreover, the Discrete Markov Process (DMP) representation is applied to help assess the probabilities of the most likely short-, medium- and long-term states and the related multi-temporal risks. In addition, this thesis discusses the operational impacts of wind power integration at different scenario levels by performing more than 9,000 AC Optimal Power Flow runs. The effects of both wind and load variations on system constraints and costs are presented. The limitations of DC Optimal Power Flow (DCOPF) vs. ACOPF are emphasized by means of system convergence problems due to the effect of wind power on changing line flows and net power injections. By studying the effect of wind power on line flows, we found that the divergence problem arises in areas with a high share of wind and hydro generation capacity (cheap generation). (Abstract shortened by UMI.)
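
    The spectral step can be sketched with an FFT: keep the dominant Fourier components as the deterministic periodic part and leave the residual for the short-term (LPC/Markov) models. The series below is synthetic, not the thesis data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(24 * 60)                    # 60 days of hourly load samples
    load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

    spectrum = np.fft.rfft(load)
    strongest = np.argsort(np.abs(spectrum))[::-1][:5]   # 5 dominant components
    filtered = np.zeros_like(spectrum)
    filtered[strongest] = spectrum[strongest]
    periodic = np.fft.irfft(filtered, n=load.size)       # deterministic part

    residual = load - periodic                # left for LPC / Markov modeling
    print("variance explained by periodic part:",
          round(1 - residual.var() / load.var(), 3))
    ```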

  7. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    PubMed

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered the best method for evaluating cost-effective preventive noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting the noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, east of Iran. The main acoustic and embroidery-process features that influence the noise were used to develop prediction models in MATLAB. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than 1 dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R² = 0.88) slightly improved the accuracy of noise prediction compared with the basic fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make sound decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  8. Integrated PK-PD and agent-based modeling in oncology.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S

    2015-04-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.
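
    A toy coupling of the two approaches, assuming a one-compartment exponential PK decay driving an Emax-style kill probability inside an agent-based loop; every rate and constant below is invented for illustration.

    ```python
    import random

    random.seed(0)
    dose, half_life = 10.0, 6.0                  # assumed units and hours
    cells = [{"age": 0} for _ in range(1000)]    # tumor cells as agents

    conc = dose
    for hour in range(48):
        conc *= 0.5 ** (1 / half_life)           # PK: exponential decay
        kill_p = conc / (conc + 5.0)             # PD: Emax-style effect
        cells = [c for c in cells if random.random() > kill_p]
        if hour % 12 == 0:                       # ABM: periodic proliferation
            cells += [{"age": 0} for _ in range(len(cells) // 10)]
    print("surviving cells after 48 h:", len(cells))
    ```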

  10. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    PubMed

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capability to directly predict accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method that relieves researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We show that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel can be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.

  11. Airframe Noise Sub-Component Definition and Model

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Sen, Rahul; Hardy, Bruce; Yamamoto, Kingo; Guo, Yue-Ping; Miller, Gregory

    2004-01-01

    Both in-house, and jointly with NASA under the Advanced Subsonic Transport (AST) program, Boeing Commercial Aircraft Company (BCA) had begun work on systematically identifying specific components of noise responsible for total airframe noise generation and applying the knowledge gained towards the creation of a model for airframe noise prediction. This report documents the continued collection of databases from model-scale and full-scale airframe noise measurements to complement the earlier existing databases, the development of the subcomponent models, and the generation of a new empirical prediction code. The airframe subcomponent data include measurements from aircraft ranging in size from a Boeing 737 to aircraft larger than a Boeing 747. These results provide the continuity to evaluate the technology developed under the AST program consistent with the guidelines set forth in NASA CR-198298.

  12. EOID Model Validation and Performance Prediction

    DTIC Science & Technology

    2002-09-30

    Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The two most prominent technologies in this area

  14. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions

    PubMed Central

    Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.

    2012-01-01

    Simple Summary: Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose-driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions. Abstract: Climate change is a driving force for livestock parasite risk. This is especially true for helminths, including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages are chiefly affected by temperature and moisture. The paucity of long-term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long-term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780

  15. Prediction of spectral acceleration response ordinates based on PGA attenuation

    USGS Publications Warehouse

    Graizer, V.; Kalkan, E.

    2009-01-01

    Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of the free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., normalized spectrum) is generated as a continuous function of a few parameters. The proposed model eliminates the classical exhaustive matrix of estimator coefficients and provides significant ease of implementation. It is built on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events, including the 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance) and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to the spectral shape of each record using robust nonlinear optimization. The proposed spectral shape model is independent of the PGA attenuation relation used, allowing various PGA attenuation relations to be employed when estimating the response spectrum of earthquake recordings.

  16. A burnout prediction model based around char morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao Wu; Edward Lester; Michael Cloke

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  17. Parental, peer and school experiences as predictors of alcohol drinking among first and second generation immigrant adolescents in Israel.

    PubMed

    Walsh, Sophie D; Djalovski, Amir; Boniel-Nissim, Meyran; Harel-Fisch, Yossi

    2014-05-01

    Ecological perspectives stress the importance of environmental predictors of adolescent alcohol use, yet little research has examined such predictors among immigrant adolescents. This study examines parental, peer and school predictors of alcohol drinking (casual drinking, binge drinking and drunkenness) among Israeli-born adolescents and first- and second-generation adolescent immigrants from the Former Soviet Union (FSU) and Ethiopia in Israel. The study uses data from the 2010-2011 Israeli Health Behaviors of School-aged Children (HBSC) survey and includes a representative sample of 3059 adolescents, aged 11-17. Differences between the groups in drinking were examined using Pearson's chi-square test. Logistic regression models were used to examine group-specific predictors of drinking. First-generation FSU and both Ethiopian groups reported higher levels of binge drinking and drunkenness than Israeli-born adolescents. All immigrant groups reported lower levels of parental monitoring than native-born adolescents; both first-generation groups reported difficulties talking to parents; and first-generation FSU and second-generation Ethiopian adolescents reported greater time with friends. Group-specific logistic regression models suggest that while parent, peer and school variables all predicted alcohol use among Israeli adolescents, only time spent with peers consistently predicted immigrant alcohol use. Findings highlight the specific vulnerability of first-generation FSU and second-generation Ethiopian adolescents to high levels of drinking, and the salience of time spent with peers in predicting immigrant adolescent drinking patterns. They suggest that drinking patterns must be understood in relation to the country of origin and immigration experience of a particular group. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Influences of load characteristics on impaired control of grip forces in patients with cerebellar damage.

    PubMed

    Brandauer, B; Timmann, D; Häusler, A; Hermsdörfer, J

    2010-02-01

    Various studies have shown a clear impairment in the ability of cerebellar patients to modulate grip force in anticipation of the loads resulting from movements with a grasped object. This failure corroborated the theory of internal feedforward models in the cerebellum. Cerebellar damage also impairs the coordination of multiple-joint movements, and this has been related to deficient prediction and compensation of movement-induced torques. To study the effects of disturbed torque control on feedforward grip-force control, two self-generated load conditions with different demands on torque control (one with movement-induced and the other with isometrically generated load changes) were directly compared in patients with cerebellar degeneration. Furthermore, the cerebellum is thought to be more involved in grip-force adjustment to self-generated loads than to externally generated loads. Consequently, an additional condition with externally generated loads was introduced to further test this hypothesis. Analysis of 23 patients with degenerative cerebellar damage revealed clear impairments in predictive feedforward mechanisms in the control of both self-generated load types. Besides feedforward control, the cerebellar damage also affected more reactive responses when the externally generated load destabilized the grip, although this impairment may vary with the type of load, as suggested by control experiments. The present findings provide further support that the cerebellum plays a major role in predictive control mechanisms. However, this impact of the cerebellum does not strongly depend on the nature of the load or on a specific internal forward model. Contributions to reactive (grip force) control are not negligible, but seem to depend on the physical characteristics of an externally generated load.

  19. Model Predictive Control-based Power take-off Control of an Oscillating Water Column Wave Energy Conversion System

    NASA Astrophysics Data System (ADS)

    Rajapakse, G.; Jayasinghe, S. G.; Fleming, A.; Shahnia, F.

    2017-07-01

    Australia's extended coastline offers an abundance of wave and tidal power. The predictability of these energy sources and their proximity to cities and towns make them desirable. Several tidal current turbine and ocean wave energy conversion projects have already been planned along the coastline of southern Australia. Some of these projects use air-driven turbines to harvest the energy from an oscillating water column. This study focuses on the power take-off control of a single-stage unidirectional oscillating-water-column air turbine generator system, and proposes a model predictive control-based speed controller for the generator-turbine assembly. The proposed method is verified with simulation results that show the efficacy of the controller in extracting power from the turbine while maintaining the speed at the desired level.
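
    A minimal receding-horizon sketch of the idea, not the paper's controller: a first-order turbine-generator speed model with an unconstrained finite-horizon tracking objective solved by least squares. The plant coefficients, horizon, and reference are assumptions.

    ```python
    import numpy as np

    a, b, N, ref = 0.95, 0.4, 10, 1.0     # assumed plant x[k+1]=a*x+b*u, horizon

    def mpc_input(x0):
        # Stacked prediction x = F*x0 + G*u; minimize ||x - ref||^2 over u.
        F = np.array([a ** (i + 1) for i in range(N)])
        G = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1):
                G[i, j] = a ** (i - j) * b
        u = np.linalg.lstsq(G, ref - F * x0, rcond=None)[0]
        return u[0]                        # receding horizon: apply first move

    x = 0.0
    for _ in range(20):
        x = a * x + b * mpc_input(x)
    print("shaft speed after 20 steps:", round(x, 3))
    ```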

  20. SAbPred: a structure-based antibody prediction server

    PubMed Central

    Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.

    2016-01-01

    SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379

  1. MODELING ENVIRONMENTAL EXPOSURES TO PARTICULATE MATTER AND PESTICIDES

    EPA Science Inventory

    This presentation describes initial results from on-going research at EPA on modeling human exposures to particulate matter and residential pesticides. A first-generation probabilistic population exposure model for Particulate Matter (PM), specifically for predicting PM10 and P...

  2. Preface to the special volume on the second Sandia Fracture Challenge

    DOE PAGES

    Kramer, Sharlotte Lorraine Bolyard; Boyce, Brad

    2016-01-01

    Ductile failure of structural metals is a pervasive issue for applications such as automotive manufacturing, transportation infrastructure, munitions and armor, and energy generation. Experimental investigation of all relevant failure scenarios is intractable, requiring reliance on computational models. Our confidence in model predictions rests on unbiased assessments of the entire predictive capability, including the mathematical formulation, numerical implementation, calibration, and execution.

  3. Comparison of two gas chromatograph models and analysis of binary data

    NASA Technical Reports Server (NTRS)

    Keba, P. S.; Woodrow, P. T.

    1972-01-01

    The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary-system data. The predictions of the two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model, exhibit the same weakness: an inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single-component behaviors is a first-order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.

  4. Hemolytic potential of hydrodynamic cavitation.

    PubMed

    Chambers, S D; Bartlett, R H; Ceccio, S L

    2000-08-01

    The purpose of this study was to determine the hemolytic potentials of discrete bubble cavitation and attached cavitation. To generate controlled cavitation events, a venturi-geometry hydrodynamic device, called a Cavitation Susceptibility Meter (CSM), was constructed. The hemolytic potentials of discrete bubble cavitation and attached cavitation were compared using a single-pass flow apparatus and a recirculating flow apparatus, both utilizing the CSM. An analytical model, based on spherical bubble dynamics, was developed for predicting the hemolysis caused by discrete bubble cavitation. Experimentally, discrete bubble cavitation did not correlate with a measurable increase in plasma-free hemoglobin (PFHb), as predicted by the analytical model. However, attached cavitation did result in significant PFHb generation. The rate of PFHb generation scaled inversely with the cavitation number at constant flow rate, suggesting that the size of the attached cavity was the dominant hemolytic factor.

  5. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    NASA Astrophysics Data System (ADS)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
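
    For intuition, a sketch of posterior predictive sampling in the simplest conjugate case (a normal linear model with known variance and a flat prior); this is a deliberate simplification of the multivariate setting analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, p, m, sigma2 = 200, 3, 5, 1.0            # m = number of synthetic copies
    X = rng.normal(size=(n, p))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, np.sqrt(sigma2), n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                # posterior mean under flat prior
    cov = sigma2 * XtX_inv                      # posterior covariance

    synthetic = []
    for _ in range(m):
        beta_draw = rng.multivariate_normal(beta_hat, cov)   # posterior draw
        synthetic.append(X @ beta_draw + rng.normal(0, np.sqrt(sigma2), n))
    ```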

  6. Predator-induced phenotypic plasticity within- and across-generations: a challenge for theory?

    PubMed Central

    Walsh, Matthew R.; Cooley, Frank; Biles, Kelsey; Munch, Stephan B.

    2015-01-01

    Much work has shown that the environment can induce non-genetic changes in phenotype that span multiple generations. Theory predicts that predictable environmental variation selects for both increased within- and across-generation responses. Yet, to the best of our knowledge, there are no empirical tests of this prediction. We explored the relationship between within- versus across-generation plasticity by evaluating the influence of predator cues on the life-history traits of Daphnia ambigua. We measured the duration of predator-induced transgenerational effects, determined when transgenerational responses are induced, and quantified the cues that activate transgenerational plasticity. We show that predator exposure during embryonic development causes earlier maturation and increased reproductive output. Such effects are detectable two generations removed from predator exposure and are similar in magnitude in response to exposure to cues emitted by injured conspecifics. Moreover, all experimental contexts and traits yielded a negative correlation between within- versus across-generation responses. That is, responses to predator cues within- and across-generations were opposite in sign and magnitude. Although many models address transgenerational plasticity, none of them explain this apparent negative relationship between within- and across-generation plasticities. Our results highlight the need to refine the theory of transgenerational plasticity. PMID:25392477

  7. Seqping: gene prediction pipeline for plant genomes using self-training gene models and transcriptomic data.

    PubMed

    Chan, Kuang-Lim; Rosli, Rozana; Tatarinova, Tatiana V; Hogan, Michael; Firdaus-Raih, Mohd; Low, Eng-Ti Leslie

    2017-01-27

    Gene prediction is one of the most important steps in the genome annotation process. A large number of software tools and pipelines developed with various computing techniques are available for gene prediction. However, these systems have yet to accurately predict all or even most of the protein-coding regions. Furthermore, none of the currently available gene-finders has a universal Hidden Markov Model (HMM) that can perform gene prediction for all organisms equally well in an automatic fashion. We present an automated gene prediction pipeline, Seqping, that uses self-training HMM models and transcriptomic data. The pipeline processes the genome and transcriptome sequences of the target species using GlimmerHMM, SNAP, and AUGUSTUS, followed by the MAKER2 program to combine predictions from the three tools in association with the transcriptomic evidence. Seqping generates species-specific HMMs that are able to offer unbiased gene predictions. The pipeline was evaluated using the Oryza sativa and Arabidopsis thaliana genomes. Benchmarking Universal Single-Copy Orthologs (BUSCO) analysis showed that the pipeline was able to identify at least 95% of BUSCO's plantae dataset. Our evaluation shows that Seqping generated better gene predictions than three HMM-based programs (MAKER2, GlimmerHMM and AUGUSTUS) using their respective available HMMs. Seqping had the highest accuracy in rice (0.5648 for CDS, 0.4468 for exon, and 0.6695 for nucleotide structure) and A. thaliana (0.5808 for CDS, 0.5955 for exon, and 0.8839 for nucleotide structure). Seqping provides researchers a seamless pipeline to train species-specific HMMs and predict genes in newly sequenced or less-studied genomes. We conclude that Seqping's predictions are more accurate than gene predictions from the other three approaches with their default or available HMMs.

  8. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    PubMed

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than previously used 'pure' time-series methods. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time-series methods, which helped increase forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested over different prediction horizons.

  10. Protein (multi-)location prediction: utilizing interdependencies via a generative model.

    PubMed

    Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit

    2015-06-15

    Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins; however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it, called MDLoc, that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier, MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as upon results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.

  11. North Indian heavy rainfall event during June 2013: diagnostics and extended range prediction

    NASA Astrophysics Data System (ADS)

    Joseph, Susmitha; Sahai, A. K.; Sharmila, S.; Abhilash, S.; Borah, N.; Chattopadhyay, R.; Pillai, P. A.; Rajeevan, M.; Kumar, Arun

    2015-04-01

    The Indian summer monsoon of 2013 covered the entire country by 16 June, one month earlier than its normal date. Around that period, heavy rainfall was experienced in the north Indian state of Uttarakhand, which is situated on the southern slope of the Himalayan Ranges. The heavy rainfall and associated landslides caused serious damage and claimed many lives. This study investigates the scientific rationale behind the incidence of the extreme rainfall event in the backdrop of the large-scale monsoon environment. It is found that a monsoonal low pressure system, which provided increased low-level convergence and abundant moisture, and a midlatitude westerly trough, which generated strong upper-level divergence, interacted with each other, helping the monsoon cover the entire country and facilitating the occurrence of the heavy rainfall event in the orographic region. The study also examines the skill of an ensemble prediction system (EPS) in predicting the Uttarakhand event on the extended-range time scale. The EPS is implemented on both high- (T382) and low- (T126) resolution versions of the coupled general circulation model CFSv2. Although the models predicted the event 10-12 days in advance, they failed to predict the midlatitude influence on the event. Possible reasons are also discussed. In both resolutions of the model, the event was triggered by the generation and northwestward movement of a low pressure system developed over the Bay of Bengal. The study advocates the usefulness of high-resolution models in predicting extreme events.

  12. In vitro differential diagnosis of clavus and verruca by a predictive model generated from electrical impedance.

    PubMed

    Hung, Chien-Ya; Sun, Pei-Lun; Chiang, Shu-Jen; Jaw, Fu-Shan

    2014-01-01

    Similar clinical appearances prevent accurate diagnosis of two common skin diseases, clavus and verruca. In this study, electrical impedance is employed as a novel tool to generate a predictive model for differentiating these two diseases. We used 29 clavus and 28 verruca lesions. To obtain impedance parameters, an LCR-meter system was applied to measure capacitance (C), resistance (Re), impedance magnitude (Z), and phase angle (θ). These values were combined with lesion thickness (d) to characterize the tissue specimens. The results from clavus and verruca were then fitted to a univariate logistic regression model with the generalized estimating equations (GEE) method. In model generation, log Z(SD) and θ(SD) were formulated as predictors by fitting a multiple logistic regression model with the same GEE method. Potential nonlinear effects of covariates were detected by fitting generalized additive models (GAM). Moreover, the model was validated by goodness-of-fit (GOF) assessments. Significant mean differences in the indices d, Re, Z, and θ were found between clavus and verruca (p < 0.001). A final predictive model was established with the Z and θ indices. The model fits the observed data quite well. In the GOF evaluation, the area under the receiver operating characteristic (ROC) curve is 0.875 (>0.7), the adjusted generalized R² is 0.512 (>0.3), and the p value of the Hosmer-Lemeshow GOF test is 0.350 (>0.05). This technique promises to provide a validated model for the differential diagnosis of clavus and verruca. It could provide a rapid, relatively low-cost, safe and non-invasive screening tool for clinical use.

  13. Predicting the risk of suicide by analyzing the text of clinical notes.

    PubMed

    Poulin, Chris; Shiner, Brian; Thompson, Paul; Vepstas, Linas; Young-Xu, Yinong; Goertzel, Benjamin; Watts, Bradley; Flashman, Laura; McAllister, Thomas

    2014-01-01

    We developed linguistics-driven prediction models to estimate the risk of suicide. These models were generated from unstructured clinical notes taken from a national sample of U.S. Veterans Administration (VA) medical records. We created three matched cohorts: veterans who committed suicide, veterans who used mental health services and did not commit suicide, and veterans who did not use mental health services and did not commit suicide during the observation period (n = 70 in each group). From the clinical notes, we generated datasets of single keywords and multi-word phrases, and constructed prediction models using a machine-learning algorithm based on a genetic programming framework. The resulting inference accuracy was consistently 65% or more. Our data therefore suggests that computerized text analytics can be applied to unstructured medical records to estimate the risk of suicide. The resulting system could allow clinicians to potentially screen seemingly healthy patients at the primary care level, and to continuously evaluate the suicide risk among psychiatric patients.
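
    The keyword/phrase feature step can be sketched with off-the-shelf components; note the study trained a genetic-programming learner, which is swapped here for logistic regression, and the notes and labels below are invented.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    notes = ["patient reports hopelessness and insomnia",       # hypothetical
             "routine follow up, no acute distress",
             "expressed suicidal ideation during visit",
             "medication refill, stable mood"]
    labels = [1, 0, 1, 0]                   # 1 = high-risk cohort (illustrative)

    vec = CountVectorizer(ngram_range=(1, 2))  # single keywords + 2-word phrases
    X = vec.fit_transform(notes)
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(vec.transform(["patient expressed hopelessness"])))
    ```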

  15. A model for predicting Xanthomonas arboricola pv. pruni growth as a function of temperature

    PubMed Central

    Llorente, Isidre; Montesinos, Emilio; Moragrega, Concepció

    2017-01-01

    A two-step modeling approach was used for predicting the effect of temperature on the growth of Xanthomonas arboricola pv. pruni, causal agent of bacterial spot disease of stone fruit. The in vitro growth of seven strains was monitored at temperatures from 5 to 35°C with a Bioscreen C system, and a calibrating equation was generated for converting optical densities to viable counts. In primary modeling, Baranyi, Buchanan, and modified Gompertz equations were fitted to viable count growth curves over the entire temperature range. The modified Gompertz model showed the best fit to the data, and it was selected to estimate the bacterial growth parameters at each temperature. Secondary modeling of maximum specific growth rate as a function of temperature was performed by using the Ratkowsky model and its variations. The modified Ratkowsky model showed the best goodness of fit to maximum specific growth rate estimates, and it was validated successfully for the seven strains at four additional temperatures. The model generated in this work will be used for predicting temperature-based Xanthomonas arboricola pv. pruni growth rate and derived potential daily doublings, and included as the inoculum potential component of a bacterial spot of stone fruit disease forecaster. PMID:28493954
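
    The primary model selected above is the modified Gompertz equation; below is a sketch of fitting it to one temperature's viable-count curve with SciPy, in the Zwietering parameterization (the data points are invented). In the secondary step, the fitted maximum specific growth rates across temperatures would then be regressed with a Ratkowsky-type model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, mu_max, lam):
        # Zwietering modified Gompertz: A = asymptote (log increase),
        # mu_max = maximum specific growth rate, lam = lag time.
        return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1))

    t = np.array([0, 4, 8, 12, 16, 20, 24, 30], float)        # hours
    y = np.array([0.02, 0.1, 0.9, 2.1, 3.0, 3.4, 3.5, 3.55])  # hypothetical log(N/N0)

    (A, mu_max, lam), _ = curve_fit(gompertz, t, y, p0=[3.5, 0.4, 4.0])
    print(f"mu_max = {mu_max:.3f} per hour, lag = {lam:.1f} h")
    ```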

  16. Building a three-dimensional model of CYP2C9 inhibition using the Autocorrelator: an autonomous model generator.

    PubMed

    Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto

    2012-05-25

    In modern-day drug discovery campaigns, computational chemists have to be concerned not only with improving the potency of molecules but also with reducing any off-target ADMET activity. There is a plethora of antitargets that computational chemists may have to consider. Fortunately, many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high-quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted binding orientation for our collection of molecules.

  17. A Template-Based Protein Structure Reconstruction Method Using Deep Autoencoder Learning.

    PubMed

    Li, Haiou; Lyu, Qiang; Cheng, Jianlin

    2016-12-01

    Protein structure prediction is an important problem in computational biology, and is widely applied to various biomedical problems such as protein function study, protein design, and drug design. In this work, we developed a novel deep learning approach based on a deeply stacked denoising autoencoder for protein structure reconstruction. We applied our approach to a template-based protein structure prediction using only the 3D structural coordinates of homologous template proteins as input. The templates were identified for a target protein by a PSI-BLAST search. 3DRobot (a program that automatically generates diverse and well-packed protein structure decoys) was used to generate initial decoy models for the target from the templates. A stacked denoising autoencoder was trained on the decoys to obtain a deep learning model for the target protein. The trained deep model was then used to reconstruct the final structural model for the target sequence. With target proteins that have highly similar template proteins as benchmarks, the GDT-TS score of the predicted structures is greater than 0.7, suggesting that the deep autoencoder is a promising method for protein structure reconstruction.

  18. Stochastic Modeling of Airlines' Scheduled Services Revenue

    NASA Technical Reports Server (NTRS)

    Hamed, M. M.

    1999-01-01

    Airlines' revenue generated from scheduled services accounts for the major share of total revenue. As such, predicting airlines' total scheduled services revenue is of great importance both to governments (in the case of national airlines) and to private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of the airlines' total scheduled services revenue is dealt with in this paper. Four key components of an airline's scheduled services are considered: revenues generated from passengers, cargo, mail, and excess baggage. By addressing the revenue generated from each scheduled service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.
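
    A Box-Jenkins ARIMA workflow of the kind described can be sketched with statsmodels; the monthly revenue series and the (1, 1, 1) order below are placeholders, and in practice each revenue component would be identified and fitted separately:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# hypothetical monthly passenger-revenue series; each component
# (passenger, cargo, mail, excess baggage) would get its own model
rng = np.random.default_rng(0)
revenue = pd.Series(
    100 + np.cumsum(rng.normal(1.0, 5.0, 120)),
    index=pd.date_range("1989-01", periods=120, freq="MS"))

fit = ARIMA(revenue, order=(1, 1, 1)).fit()   # one candidate Box-Jenkins spec
print(fit.summary())
print(fit.forecast(steps=12))                 # 12-month-ahead prediction
```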

  20. Racial and Cultural Factors Affecting the Mental Health of Asian Americans

    PubMed Central

    Miller, Matthew J.; Yang, Minji; Farrell, Jerome A.; Lin, Li-Ling

    2011-01-01

    In this study, we employed structural equation modeling to test the degree to which racism-related stress, acculturative stress, and bicultural self-efficacy were predictive of mental health in a predominantly community-based sample of 367 Asian American adults. We also tested whether bicultural self-efficacy moderated the relationship between acculturative stress and mental health. Finally, we examined whether generational status moderated the impact of racial and cultural predictors of mental health by testing our model across immigrant and U.S.-born samples. Results indicated that our hypothesized structural model represented a good fit to the total sample data. While racism-related stress, acculturative stress, and bicultural self-efficacy were significant predictors of mental health in the total sample analyses, our generational analyses revealed a differential predictive pattern across generational status. Finally, we found that the buffering effect of bicultural self-efficacy on the relationship between acculturative stress and mental health was significant for U.S.-born individuals only. Implications for research and service delivery are explored. PMID:21977934

  1. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air, using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet while wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated from the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its parameters, including vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models also predict different mid-downstroke aerodynamic force values compared with independent direct measurements made with an aerodynamic force platform for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models. This would also enable much-needed meta-studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
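
    To make the sensitivity concrete, the following sketch applies the Kutta-Joukowski relation with placeholder values; the lift estimate scales linearly with the assumed circulation and vortex span, which is exactly why the calculation is sensitive to those parameters (all numbers are hypothetical, not the study's measurements):

```python
import numpy as np

# Quasi-steady mid-downstroke lift from a measured velocity field,
# via the Kutta-Joukowski theorem L' = rho * U * Gamma (per unit span).
rho = 1.2        # air density, kg/m^3
U = 2.5          # flight speed, m/s
gamma = 0.05     # circulation of the measured vortex, m^2/s
b_eff = 0.20     # assumed effective (vortex) span, m

lift = rho * U * gamma * b_eff                  # N
weight_support = lift / (0.03 * 9.81)           # fraction of a 30 g bird's weight
print(f"lift = {lift*1000:.1f} mN, weight support = {weight_support:.2f}")
```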

  2. Squared exponential covariance function for prediction of hydrocarbon in seabed logging application

    NASA Astrophysics Data System (ADS)

    Mukhtar, Siti Mariam; Daud, Hanita; Dass, Sarat Chandra

    2016-11-01

    Seabed Logging (SBL) technology has progressively emerged as one of the key technologies in the Exploration and Production (E&P) industry. Hydrocarbon prediction in deep water areas is a crucial task for drillers in any oil and gas company, as drilling is very expensive. Simulation data generated by Computer Software Technology (CST) are used to predict the presence of hydrocarbon, where the models replicate the real SBL environment. These models indicate that hydrocarbon-filled reservoirs are more resistive than the surrounding water-filled sediments. As hydrocarbon depth increases, however, it becomes more challenging to differentiate data with and without hydrocarbon. MATLAB is used for data extraction and for the curve-fitting process using Gaussian processes (GP). GP methods can be divided into regression and classification problems; this work focuses only on Gaussian process regression (GPR). The most popular covariance function choice for GPR is the squared exponential (SE), as it provides stability and probabilistic prediction on large amounts of data. Hence, the SE covariance function is used to predict the presence or absence of hydrocarbon in the reservoir from the generated data.
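
    The SE covariance and the standard GPR predictive equations can be written in a few lines; the sketch below uses synthetic one-dimensional data as a stand-in for the extracted SBL curves and assumes a fixed noise level and length-scale rather than optimized hyperparameters:

```python
import numpy as np

def se_kernel(x1, x2, sigma_f=1.0, length=1.0):
    """Squared exponential covariance k(x, x') = sigma_f^2 exp(-d^2 / 2l^2)."""
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length)**2)

# synthetic stand-in for a response curve extracted from SBL simulations
x_train = np.linspace(0, 10, 20)
y_train = np.sin(x_train) + 0.1 * np.random.default_rng(1).normal(size=20)
x_test = np.linspace(0, 10, 100)

noise = 0.1
K = se_kernel(x_train, x_train) + noise**2 * np.eye(20)
K_s = se_kernel(x_train, x_test)
alpha = np.linalg.solve(K, y_train)
mean = K_s.T @ alpha                                        # predictive mean
cov = se_kernel(x_test, x_test) - K_s.T @ np.linalg.solve(K, K_s)
std = np.sqrt(np.diag(cov))                                 # predictive uncertainty
```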

  3. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for Grade ⩾2 acute esophageal toxicity (AET) after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information on the esophageal wall dose distribution may be important in predicting AET. We investigated whether incorporating esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. Dose parameters were derived from these histograms, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters from the univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value, and it is sufficient to consider only MED as a predictive dosimetric parameter.
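
    The modelling pattern described, multivariable logistic regression on dose and clinical parameters scored by AUC, can be sketched as follows; the simulated cohort and coefficients are purely illustrative, and a real analysis would use cross-validation rather than the apparent AUC:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# hypothetical patient table: mean esophageal dose (Gy) plus one
# clinical covariate; aet is the Grade >= 2 toxicity label
rng = np.random.default_rng(0)
med = rng.uniform(5, 35, 149)
concurrent_chemo = rng.integers(0, 2, 149)
logit = 0.15 * med + 1.0 * concurrent_chemo - 4.0
aet = rng.random(149) < 1 / (1 + np.exp(-logit))

X = np.column_stack([med, concurrent_chemo])
model = LogisticRegression().fit(X, aet)
auc = roc_auc_score(aet, model.predict_proba(X)[:, 1])
print(f"apparent AUC = {auc:.2f}")
```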

  4. The Role of Teamwork in the Analysis of Big Data: A Study of Visual Analytics and Box Office Prediction.

    PubMed

    Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy

    2017-03-01

    Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.

  5. Object-oriented regression for building predictive models with high dimensional omics data from translational studies.

    PubMed

    Zhao, Lue Ping; Bolouri, Hamid

    2016-04-01

    Maturing omics technologies enable researchers to generate high dimension omics data (HDOD) routinely in translational clinical studies. In the field of oncology, The Cancer Genome Atlas (TCGA) provided funding support to researchers to generate different types of omics data on a common set of biospecimens with accompanying clinical data and has made the data available for the research community to mine. One important application, and the focus of this manuscript, is to build predictive models for prognostic outcomes based on HDOD. To complement prevailing regression-based approaches, we propose to use an object-oriented regression (OOR) methodology to identify exemplars specified by HDOD patterns and to assess their associations with prognostic outcome. Through computing patient's similarities to these exemplars, the OOR-based predictive model produces a risk estimate using a patient's HDOD. The primary advantages of OOR are twofold: reducing the penalty of high dimensionality and retaining the interpretability to clinical practitioners. To illustrate its utility, we apply OOR to gene expression data from non-small cell lung cancer patients in TCGA and build a predictive model for prognostic survivorship among stage I patients, i.e., we stratify these patients by their prognostic survival risks beyond histological classifications. Identification of these high-risk patients helps oncologists to develop effective treatment protocols and post-treatment disease management plans. Using the TCGA data, the total sample is divided into training and validation data sets. After building up a predictive model in the training set, we compute risk scores from the predictive model, and validate associations of risk scores with prognostic outcome in the validation data (P-value=0.015). Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Object-Oriented Regression for Building Predictive Models with High Dimensional Omics Data from Translational Studies

    PubMed Central

    Zhao, Lue Ping; Bolouri, Hamid

    2016-01-01

    Maturing omics technologies enable researchers to generate high dimension omics data (HDOD) routinely in translational clinical studies. In the field of oncology, The Cancer Genome Atlas (TCGA) provided funding support to researchers to generate different types of omics data on a common set of biospecimens with accompanying clinical data and to make the data available for the research community to mine. One important application, and the focus of this manuscript, is to build predictive models for prognostic outcomes based on HDOD. To complement prevailing regression-based approaches, we propose to use an object-oriented regression (OOR) methodology to identify exemplars specified by HDOD patterns and to assess their associations with prognostic outcome. Through computing patient’s similarities to these exemplars, the OOR-based predictive model produces a risk estimate using a patient’s HDOD. The primary advantages of OOR are twofold: reducing the penalty of high dimensionality and retaining the interpretability to clinical practitioners. To illustrate its utility, we apply OOR to gene expression data from non-small cell lung cancer patients in TCGA and build a predictive model for prognostic survivorship among stage I patients, i.e., we stratify these patients by their prognostic survival risks beyond histological classifications. Identification of these high-risk patients helps oncologists to develop effective treatment protocols and post-treatment disease management plans. Using the TCGA data, the total sample is divided into training and validation data sets. After building up a predictive model in the training set, we compute risk scores from the predictive model, and validate associations of risk scores with prognostic outcome in the validation data (p=0.015). PMID:26972839

  7. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for Grade ⩾2 acute esophageal toxicity (AET) after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information on the esophageal wall dose distribution may be important in predicting AET. We investigated whether incorporating esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. Dose parameters were derived from these histograms, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters from the univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value, and it is sufficient to consider only MED as a predictive dosimetric parameter.

  8. Breakpoint-forced and bound long waves in the nearshore: A model comparison

    USGS Publications Warehouse

List, Jeffrey H.

    1993-01-01

    A finite-difference model is used to compare long wave amplitudes arising from two group-forced generation mechanisms in the nearshore: long waves generated at a time-varying breakpoint and the shallow-water extension of the bound long wave. Plane beach results demonstrate that the strong frequency selection in the outgoing wave predicted by the breakpoint-forcing mechanism may not be observable in field data, due to this wave's relatively small size and its predicted phase relation with the bound wave. Over a bar/trough nearshore, it is shown that a strong frequency selection in shoreline amplitudes is not a unique result of the time-varying breakpoint model, but a general result of the interaction between topography and any broad-banded forcing of nearshore long waves.

  9. A first approximation kinetic model to predict methane generation from an oil sands tailings settling basin.

    PubMed

    Siddique, Tariq; Gupta, Rajender; Fedorak, Phillip M; MacKinnon, Michael D; Foght, Julia M

    2008-08-01

    A small fraction of the naphtha diluent used for oil sands processing escapes with tailings and supports methane (CH(4)) biogenesis in large anaerobic settling basins such as Mildred Lake Settling Basin (MLSB) in northern Alberta, Canada. Based on the rate of naphtha metabolism in tailings incubated in laboratory microcosms, a kinetic model comprising a lag phase, the rate of hydrocarbon metabolism, and conversion to CH(4) was developed to predict CH(4) biogenesis and flux from MLSB. Zero- and first-order kinetic models predicted generation of 5.4 and 5.1 mmol CH(4), respectively, in naphtha-amended microcosms, compared with 5.3 (+/-0.2) mmol CH(4) measured in the microcosms during 46 weeks of incubation. These kinetic models also predicted well the CH(4) produced by tailings amended with either naphtha-range n-alkanes or BTEX compounds at concentrations similar to those expected in MLSB. Considering 25% of MLSB's 200 million m(3) tailings volume to be methanogenic, the zero- and first-order kinetic models applied over a wide range of naphtha concentrations (0.01-1.0 wt%) predicted production of 8.9-400 million l CH(4) day(-1) from MLSB, which exceeds the estimated production of 3-43 million l CH(4) day(-1). This discrepancy may result from heterogeneity and density of the tailings, the presence of nutrients in the microcosms, and/or overestimation of the readily biodegradable fraction of the naphtha in MLSB tailings.
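
    The zero- and first-order formulations with a lag phase can be sketched as below; the parameter values (lag, rates, naphtha concentration, CH4 yield) are illustrative placeholders, not the fitted values from the microcosm data:

```python
import numpy as np

# Cumulative CH4 from naphtha metabolism after a lag phase, in two
# kinetic forms; all parameter values here are illustrative only.
def ch4_zero_order(t, lag, k0, c0, y):
    """k0: degradation rate (wt% / week); c0: initial naphtha (wt%);
    y: mmol CH4 produced per wt% naphtha degraded."""
    degraded = np.clip(k0 * np.maximum(t - lag, 0.0), 0.0, c0)
    return y * degraded

def ch4_first_order(t, lag, k1, c0, y):
    """k1: first-order rate constant (1/week)."""
    frac = 1.0 - np.exp(-k1 * np.maximum(t - lag, 0.0))
    return y * c0 * frac

t = np.arange(0, 47.0)   # weeks of incubation
print(ch4_zero_order(t, lag=10, k0=0.01, c0=0.25, y=21)[-1])   # mmol at week 46
print(ch4_first_order(t, lag=10, k1=0.05, c0=0.25, y=21)[-1])
```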

  10. Quantum theory of the electronic and optical properties of low-dimensional semiconductor systems

    NASA Astrophysics Data System (ADS)

    Lau, Wayne Heung

    This thesis examines the electronic and optical properties of low-dimensional semiconductor systems. A theory is developed to study the electron-hole generation-recombination process of type-II semimetallic semiconductor heterojunctions based on a 3 x 3 k·p matrix Hamiltonian (three-band model) and an 8 x 8 k·p matrix Hamiltonian (eight-band model). A novel electron-hole generation and recombination process, called the activationless generation-recombination process, is predicted. It is demonstrated that the current through type-II semimetallic semiconductor heterojunctions is governed by the activationless electron-hole generation-recombination process at the heterointerfaces, and that the current-voltage characteristics are essentially linear. A qualitative agreement between theory and experiments is observed. The numerical results of the eight-band model are compared with those of the three-band model. Based on a lattice gas model, a theory is developed to study the influence of a random potential on the ionization equilibrium conditions for bound electron-hole pairs (excitons) in III-V semiconductor heterostructures. It is demonstrated that the ionization equilibrium conditions for bound electron-hole pairs change drastically in the presence of strong disorder. It is predicted that strong disorder promotes dissociation of excitons in III-V semiconductor heterostructures. A theory of polariton (photon dressed by phonon) spontaneous emission in a III-V semiconductor doped with semiconductor quantum dots (QDs) or quantum wells (QWs) is developed. For the first time, superradiant and subradiant polariton spontaneous emission phenomena in a polariton-QD (QW) coupled system are predicted when the resonance energies of the two identical QDs (QWs) lie outside the polaritonic energy gap. It is also predicted that when the resonance energies of the two identical QDs (QWs) lie inside the polaritonic energy gap, spontaneous emission of polaritons in the polariton-QD (QW) coupled system is inhibited and polariton bound states are formed within the polaritonic energy gap. A theory is also developed to study the polariton eigenenergy spectrum, polariton effective mass, and polariton spectral density of N identical semiconductor QDs (QWs) or a superlattice (SL) placed inside a III-V semiconductor. A polariton-impurity band lying within the polaritonic energy gap of the III-V semiconductor is predicted when the resonance energies of the QDs (QWs) lie inside the polaritonic energy gap. A hole-like polariton effective mass for the polariton-impurity band is predicted. It is also predicted that the spectral density of the polariton has a Lorentzian shape if the resonance energies of the QDs (QWs) lie outside the polaritonic gap.

  11. Experimental Investigation and Modeling of Scale Effects in Micro Jet Pumps

    NASA Astrophysics Data System (ADS)

    Gardner, William Geoffrey

    2011-12-01

    Since the mid-1990s there has been an active effort to develop hydrocarbon-fueled power generation and propulsion systems on the scale of centimeters or smaller. This effort led to the creation and expansion of a field of research focused on the design and reduction to practice of Power MEMS (microelectromechanical systems) devices, beginning with microscale jet engines and, a generation later, broadening to MEMS devices that generate power or pump heat. Due to the small device scale and the fabrication techniques involved, design constraints are highly coupled, and conventional solutions for device requirements may not be practicable. This thesis describes the experimental investigation, modeling and potential applications of two classes of microscale jet pumps: jet ejectors and jet injectors. These components pump fluids with no moving parts and can be integrated into Power MEMS devices to satisfy pumping requirements by supplementing or replacing existing solutions. This thesis presents models developed from first principles which predict the losses experienced at small length scales and agree well with experimental results. The models further predict maximum achievable power densities at the onset of detrimental viscous losses.

  12. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.

  13. Combinatorially-generated library of 6-fluoroquinolone analogs as potential novel antitubercular agents: a chemometric and molecular modeling assessment.

    PubMed

    Minovski, Nikola; Perdih, Andrej; Solmajer, Tom

    2012-05-01

    The virtual combinatorial chemistry approach, a methodology for generating chemical libraries of structurally similar analogs in a virtual environment, was employed to build a general mixed virtual combinatorial library of 53,871 6-FQ structural analogs, based on the real synthetic pathways of three well-known 6-FQ inhibitors. The druggability properties of the generated combinatorial 6-FQs were assessed using an in-house developed drug-likeness filter integrating the Lipinski/Veber rule-sets. The compounds recognized as drug-like were used as an external set for prediction of biological activity values using a neural network (NN) model based on an experimentally determined set of active 6-FQs. Furthermore, a subset of compounds with predicted biological activity was extracted from the pool of drug-like 6-FQs and subsequently used in a virtual screening (VS) campaign combining pharmacophore modeling and molecular docking studies. This complex scheme, a powerful combination of chemometric and molecular modeling approaches, provided novel QSAR guidelines that could aid in the further lead development of 6-FQ agents.
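
    A drug-likeness filter combining the Lipinski and Veber rule-sets can be sketched with RDKit as below; the thresholds shown are the textbook ones, and the authors' in-house filter may differ in details:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def drug_like(smiles):
    """Combined Lipinski/Veber rule-set filter (a sketch of the idea,
    not the paper's in-house implementation)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    lipinski = (Descriptors.MolWt(mol) <= 500
                and Descriptors.MolLogP(mol) <= 5
                and Lipinski.NumHDonors(mol) <= 5
                and Lipinski.NumHAcceptors(mol) <= 10)
    veber = (Descriptors.NumRotatableBonds(mol) <= 10
             and Descriptors.TPSA(mol) <= 140)
    return lipinski and veber

# ciprofloxacin, a 6-fluoroquinolone, passes the filter
print(drug_like("C1CC1N2C=C(C(=O)C3=CC(=C(C=C32)N4CCNCC4)F)C(=O)O"))
```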

  14. A real-time prediction model for post-irradiation malignant cervical lymph nodes.

    PubMed

    Lo, W-C; Cheng, P-W; Shueng, P-W; Hsieh, C-H; Chang, Y-L; Liao, L-J

    2018-04-01

    To establish a real-time predictive scoring model based on sonographic characteristics for identifying malignant cervical lymph nodes (LNs) in cancer patients after neck irradiation. One hundred forty-four irradiation-treated patients underwent ultrasonography and ultrasound-guided fine-needle aspirations (USgFNAs), and the resultant data were used to construct a real-time, computerised predictive scoring model. This scoring system was further compared with our previously proposed prediction model. A predictive scoring model, 1.35 × (L axis) + 2.03 × (S axis) + 2.27 × (margin) + 1.48 × (echogenic hilum) + 3.7, was generated by stepwise multivariate logistic regression analysis. Neck LNs were considered to be malignant when the score was ≥ 7, corresponding to a sensitivity of 85.5%, specificity of 79.4%, positive predictive value (PPV) of 82.3%, negative predictive value (NPV) of 83.1%, and overall accuracy of 82.6%. When the new model and the original model were compared, the areas under the receiver operating characteristic curve (c-statistic) were 0.89 and 0.81, respectively (P < .05). A real-time sonographic predictive scoring model was constructed to provide prompt and reliable guidance for USgFNA biopsies in the management of cervical LNs after neck irradiation. © 2017 John Wiley & Sons Ltd.
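
    Because the regression equation and cut-off are given explicitly, the scoring model is straightforward to implement; the 0/1 feature codings assumed below (e.g. 1 for an irregular margin or an absent echogenic hilum) are our assumption, since the abstract does not state them:

```python
def node_score(l_axis, s_axis, margin, echogenic_hilum):
    """Published score; the feature codings are hypothetical here."""
    return (1.35 * l_axis + 2.03 * s_axis + 2.27 * margin
            + 1.48 * echogenic_hilum + 3.7)

score = node_score(l_axis=1, s_axis=0, margin=1, echogenic_hilum=0)
print(score, "-> malignant" if score >= 7 else "-> benign")  # cut-off >= 7
```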

  15. New Methods for Estimating Seasonal Potential Climate Predictability

    NASA Astrophysics Data System (ADS)

    Feng, Xia

    This study develops two new statistical approaches to assess the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of the autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of taking into account the uncertainty of the estimated parameters due to sampling errors in the statistical test, which is often neglected in AR-based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for being identical or not, to assess whether any interannual variability that may exist is potentially predictable. The bootstrap is an attractive alternative method that requires no model hypothesis and is available no matter how mathematically complicated the parameter estimator. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, in which the only variability in seasonal means arises from weather noise. These two methods are applied to temperature and to water cycle components, including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds the unpredictable weather noise, and are compared with previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from ANOCOVA, bootstrap, LSG, and Madden exhibits a pronounced tropical-extratropical contrast, with much larger predictability in the tropics, dominated by El Nino/Southern Oscillation (ENSO), than in higher latitudes, where strong internal variability lowers predictability. The bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability. Seasonal precipitation from ANOCOVA, bootstrap, and Katz, resembling that for temperature, is more predictable over the tropical regions and less predictable in the extratropics. Bootstrap and ANOCOVA are in good agreement with each other, both methods generating larger predictability than Katz. The seasonal predictability of evaporation over land bears considerable similarity to that of temperature using ANOCOVA, bootstrap, LSG, and Madden. The remote SST forcing and soil moisture reveal substantial seasonality in their relations with the potentially predictable seasonal signals. For selected regions, either SST or soil moisture or both show significant relationships with predictable signals, hence providing indirect insight into the slowly varying boundary processes that enable useful seasonal climate prediction. A multivariate analysis of covariance (MANOCOVA) model is established to identify distinctive predictable patterns that are uncorrelated with each other. Generally speaking, the seasonal predictability from the multivariate model is consistent with that from ANOCOVA. Besides unveiling the spatial variability of predictability, the MANOCOVA model also reveals the temporal variability of each predictable pattern, which could be linked to periodic oscillations.
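
    The bootstrap approach can be illustrated in a few lines: resampling daily values with replacement builds a null distribution of interannual variance in which seasonal means vary only through weather noise. The sketch below uses toy data and, for brevity, ignores the daily autocorrelation that the ANOCOVA model is designed to handle:

```python
import numpy as np

rng = np.random.default_rng(42)
daily = rng.normal(size=(30, 90))            # 30 seasons x 90 days (toy data)
obs_var = daily.mean(axis=1).var(ddof=1)     # observed interannual variance

# null distribution: seasonal means built from resampled daily values
pooled = daily.ravel()
null_vars = np.empty(2000)
for i in range(2000):
    resampled = rng.choice(pooled, size=(30, 90), replace=True)
    null_vars[i] = resampled.mean(axis=1).var(ddof=1)

p_value = (null_vars >= obs_var).mean()      # small p => predictable signal
print(f"p = {p_value:.3f}")
```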

  16. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights could be derived for each model so that the developed multimodel predictions result in improved predictions. A new dynamic approach (MM-1) is proposed to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure from the candidate models (i.e., the "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the candidate model that performs best during the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.

  17. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, M.; Bowman, B.; Branson, J.

    The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.

  18. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  19. Development of a coupled hydrological - hydrodynamic model for probabilistic catchment flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Quinn, Niall; Freer, Jim; Coxon, Gemma; Dunne, Toby; Neal, Jeff; Bates, Paul; Sampson, Chris; Smith, Andy; Parkin, Geoff

    2017-04-01

    Computationally efficient flood inundation modelling systems capable of representing important hydrological and hydrodynamic flood-generating processes over relatively large regions are vital for those interested in flood preparation, response, and real-time forecasting. However, such systems are currently not readily available. This is particularly important where flood predictions from intense rainfall are considered, as the processes leading to flooding often involve localised, non-linear, spatially connected hillslope-catchment responses. This research therefore introduces a novel hydrological-hydraulic modelling framework for the provision of probabilistic flood inundation predictions across catchment to regional scales that explicitly accounts for spatial variability in rainfall-runoff and routing processes. Approaches have been developed to automate the provision of required input datasets and to estimate essential catchment characteristics from freely available national datasets. This is an essential component of the framework because, when making predictions over multiple catchments or at relatively large scales where data are often scarce, obtaining local information and manually incorporating it into the model quickly becomes infeasible. An extreme flooding event in the town of Morpeth, NE England, in 2008 was used as a first case-study evaluation of the modelling framework. The results demonstrated a high degree of prediction accuracy when comparing modelled and reconstructed event characteristics, while the efficiency of the modelling approach enabled the generation of relatively large ensembles of realisations from which uncertainty in the prediction may be represented. This research supports previous literature highlighting the importance of probabilistic forecasting, particularly during extreme events, which can often be poorly characterised or even missed by deterministic predictions due to the inherent uncertainty in any model application. Future research will aim to further evaluate the robustness of the approaches introduced by applying the modelling framework to a variety of historical flood events across UK catchments. Furthermore, the flexibility and efficiency of the framework are ideally suited to examining the propagation of errors through the model, which will help build a better understanding of the dominant sources of uncertainty currently affecting flood inundation predictions.

  20. Experimental Definition and Validation of Protein Coding Transcripts in Chlamydomonas reinhardtii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kourosh Salehi-Ashtiani; Jason A. Papin

    Algal fuel sources promise unsurpassed yields in a carbon-neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Using Chlamydomonas reinhardtii as a model, we developed a systems-level methodology bridging metabolic network reconstruction with annotation and experimental verification of enzyme-encoding open reading frames. We reconstructed a genome-scale metabolic network for this alga and devised a novel light-modeling approach that enables quantitative growth prediction for a given light source, resolving wavelength and photon flux. We experimentally verified transcripts accounted for in the network and physiologically validated model function through simulation and generation of new experimental growth data, providing high confidence in network contents and predictive applications. The network offers insight into algal metabolism and potential for genetic engineering and efficient light source design, a pioneering resource for studying light-driven metabolism and quantitative systems biology. Our approach to generate a predictive metabolic model integrated with cloned open reading frames provides a cost-effective platform to generate metabolic engineering resources. While the generated resources are specific to algal systems, the approach we have developed is not specific to algae and can be readily expanded to other microbial systems as well as higher plants and animals.

  1. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS

  2. Validation of Kinetic-Turbulent-Neoclassical Theory for Edge Intrinsic Rotation in DIII-D Plasmas

    NASA Astrophysics Data System (ADS)

    Ashourvan, Arash

    2017-10-01

    Recent experiments on DIII-D with low-torque neutral beam injection (NBI) have provided a validation of a new model of momentum generation in a wide range of conditions spanning L- and H-mode with direct ion and electron heating. A challenge in predicting the bulk rotation profile for ITER has been to capture the physics of momentum transport near the separatrix and the steep gradient region. A recent theory has presented a model for edge momentum transport that predicts the value and direction of the main-ion intrinsic velocity at the pedestal top, generated by passing orbits in the inhomogeneous turbulent field. In this study, this model-predicted velocity is tested on DIII-D for a database of 44 low-torque NBI discharges comprising both L- and H-mode plasmas. For moderate NBI powers (PNBI < 4 MW), the model prediction agrees well with the experiments for both L- and H-mode. At higher NBI power the experimental rotation is observed to saturate and even degrade compared with theory. TRANSP-NUBEAM simulations performed for the database show that for discharges with nominally balanced, but high-powered, NBI, the net injected torque through the edge can exceed 1 N.m in the counter-current direction. The theory model has been extended to compute the rotation degradation from this counter-current NBI torque by solving a reduced momentum evolution equation for the edge, and the revised velocity prediction was found to be in agreement with experiment. Projecting to the ITER baseline scenario, this model predicts a value for the pedestal-top rotation (ρ ≈ 0.9) comparable to 4 kRad/s. Using the theory-modeled, and now tested, velocity to predict the bulk plasma rotation opens up a path to more confidently projecting the confinement and stability in ITER. Supported by the US DOE under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  3. Large Eddy Simulation of Entropy Generation in a Turbulent Mixing Layer

    NASA Astrophysics Data System (ADS)

    Sheikhi, Reza H.; Safari, Mehdi; Hadi, Fatemeh

    2013-11-01

    The entropy transport equation is considered in large eddy simulation (LES) of turbulent flows. The irreversible entropy generation in this equation provides a more general description of subgrid-scale (SGS) dissipation due to heat conduction, mass diffusion, and viscosity effects. A new methodology is developed, termed the entropy filtered density function (En-FDF), to account for all individual entropy generation effects in turbulent flows. The En-FDF represents the joint probability density function of the entropy, frequency, velocity, and scalar fields within the SGS. An exact transport equation is derived for the En-FDF, which is modeled by a system of stochastic differential equations incorporating the second law of thermodynamics. The modeled En-FDF transport equation is solved by a Lagrangian Monte Carlo method. The methodology is employed to simulate a turbulent mixing layer involving transport of passive scalars and entropy. Various modes of entropy generation are obtained from the En-FDF and analyzed. Predictions are assessed against data generated by direct numerical simulation (DNS). The En-FDF predictions are in good agreement with the DNS data.

  4. Generative Recurrent Networks for De Novo Drug Design.

    PubMed

    Gupta, Anvita; Müller, Alex T; Huisman, Berend J H; Fuchs, Jens A; Schneider, Petra; Schneider, Gisbert

    2018-01-01

    Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. We present a method for molecular de novo design that utilizes generative recurrent neural networks (RNNs) containing long short-term memory (LSTM) cells. This computational model captured the syntax of molecular representation in terms of SMILES strings with close to perfect accuracy. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept eliminates the need for virtual compound library enumeration. By employing transfer learning, we fine-tuned the RNN's predictions for specific molecular targets. This approach enables virtual compound design without requiring secondary or external activity prediction, which could introduce error or unwanted bias. The results obtained advocate this generative RNN-LSTM system for high-impact use cases, such as low-data drug discovery, fragment-based molecular design, and hit-to-lead optimization for diverse drug targets. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
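
    A character-level RNN-LSTM SMILES generator of this general kind can be sketched in PyTorch as below; the vocabulary, layer sizes, and sampling loop are illustrative choices rather than the authors' published architecture, and the training loop is omitted:

```python
import torch
import torch.nn as nn

# Character-level SMILES language model: the LSTM predicts the next
# token; sampling from the learned distribution generates new strings.
vocab = sorted(set("CNOF()=#123cno[]@+-HlBr\n"))   # toy character set
stoi = {ch: i for i, ch in enumerate(vocab)}

class SmilesLSTM(nn.Module):
    def __init__(self, n_vocab, emb=64, hidden=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, n_vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = SmilesLSTM(len(vocab))
# training: next-token cross-entropy on a SMILES corpus (omitted);
# transfer learning = continued training on target-specific SMILES

@torch.no_grad()
def sample(model, start="C", max_len=80, temperature=1.0):
    tokens = [stoi[start]]
    state = None
    for _ in range(max_len):
        logits, state = model(torch.tensor([[tokens[-1]]]), state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if vocab[nxt] == "\n":                 # end-of-molecule marker
            break
        tokens.append(nxt)
    return "".join(vocab[t] for t in tokens)

print(sample(model))   # random characters until the model is trained
```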

  5. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of the subsequent landslide susceptibility maps, with particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated from different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful for exposing spatially varying inconsistencies in the modelling results, while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model might be misleading in case a predictor systematically relates to a spatially consistent bias in the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with high plausibility and applicability of the subsequent landslide susceptibility maps. We suggest that greater emphasis be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.

  6. Prediction of new Quarks, Generations and Quark Masses

    NASA Astrophysics Data System (ADS)

    Lach, Thedore

    2002-04-01

    The Standard Model currently suggests no relationship between the quark and lepton masses. The CBM model of the nucleus has resulted in the prediction of two new quarks: an up quark mass of 237.31 MeV/c2 and a dn quark mass of 42.392 MeV/c2. These two new quarks help explain the numerical relationship between all the quark and lepton masses in a single function. The mass of each SNU-P (quark or lepton) is just the geometric mean of two related SNU-Ps, either in the same generation or in the same family. This numerology predicts the following masses for the electron family: 0.511000 (electron), 7.743828 (predicted), 117.3520, 1778.38, 26950.08 MeV. The resulting slope of these masses when plotted on semi-log paper is "e" to 5 significant figures, using the currently accepted mass for the tau. This theory suggests that all the "dn-like" quarks have masses that are 10X multiples of 4.24 MeV (the mass of the "d" quark). The first three "up-like" quark masses are 38, 237 and 1500 MeV. This theory also predicts a new heavy generation with a lepton mass of 27 GeV, a "dn-like" quark of 42.4 GeV, and an "up-like" quark of 65 GeV. Significant evidence already exists for the existence of these quarks and this lepton.
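
    The stated geometric-mean relation is easy to check numerically for the electron-family masses quoted above; the short script below reproduces the predicted 7.743828 MeV value and the near-constant log-spacing of approximately e:

```python
import math

# electron-family masses quoted in the abstract, in MeV
masses = [0.511000, 7.743828, 117.3520, 1778.38, 26950.08]

# each interior mass should be the geometric mean of its neighbors
for lo, mid, hi in zip(masses, masses[1:], masses[2:]):
    print(f"{mid:10.4f} vs sqrt({lo} * {hi}) = {math.sqrt(lo * hi):10.4f}")

# the log of the common ratio of successive masses is close to e
print([f"{math.log(b / a):.4f}" for a, b in zip(masses, masses[1:])])
```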

  7. Coupling continuous damage and debris fragmentation for energy absorption prediction by cfrp structures during crushing

    NASA Astrophysics Data System (ADS)

    Espinosa, Christine; Lachaud, Frédéric; Limido, Jérome; Lacome, Jean-Luc; Bisson, Antoine; Charlotte, Miguel

    2015-05-01

    Energy absorption during crushing is evaluated using a thermodynamics-based continuum damage model inspired by the Matzenmiller-Lubliner-Taylor model. It was found that for crashworthiness applications it is necessary to couple the progressive ruin of the material to a representation of the matter openings and debris generation. The element kill technique (erosion) and/or cohesive elements are efficient but not predictive. A technique switching finite elements into discrete particles at rupture is used to create debris and accumulated matter during the crushing of the structure. Switching criteria are evaluated using the contribution of the different ruin modes to the damage evolution, energy absorption, and reaction force generation.

  8. Modelling Aerodynamically Generated Sound: Recent Advances in Rotor Noise Prediction

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    2000-01-01

    A great deal of progress has been made in the modeling of aerodynamically generated sound for rotors over the past decade. The Ffowcs Williams-Hawkings (FW-H) equation has been the foundation for much of the development. Both subsonic and supersonic quadrupole noise formulations have been developed for the prediction of high-speed impulsive noise. In an effort to eliminate the need to compute the quadrupole contribution, the FW-H equation has also been utilized on permeable surfaces surrounding all physical noise sources. Comparisons of the Kirchhoff formulation for moving surfaces with the FW-H equation have shown that the Kirchhoff formulation can give erroneous results for aeroacoustic problems.

  9. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  10. Critical analysis of 3-D organoid in vitro cell culture models for high-throughput drug candidate toxicity assessments.

    PubMed

    Astashkina, Anna; Grainger, David W

    2014-04-01

    Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of improved next-generation 3-D cell culture models are expected to improve the predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research, significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges and future directions for the field to improve drug assay predictive power and reliability of 3-D models are reviewed. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Mesoscale atmospheric modeling for emergency response

    NASA Astrophysics Data System (ADS)

    Osteen, B. L.; Fast, J. D.

    Atmospheric transport models for emergency response have traditionally utilized meteorological fields interpolated from sparse data to predict contaminant transport. Often these fields are adjusted to satisfy constraints derived from the governing equations of geophysical fluid dynamics, e.g. mass continuity. Gaussian concentration distributions or stochastic models are then used to represent turbulent diffusion of a contaminant in the diagnosed meteorological fields. The popularity of these models derives from their relative simplicity, ability to make reasonable short-term predictions, and, most important, execution speed. The ability to generate a transport prediction for an accidental release from the Savannah River Site in a time frame which will allow protective action to be taken is essential in an emergency response operation.

  12. Waste generated in high-rise buildings construction: a quantification model based on statistical multiple regression.

    PubMed

    Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana

    2015-05-01

    Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data on eighteen residential buildings. The resulting statistical model related the dependent variable (amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data had an adjusted R(2) value of 0.694, meaning that it explains approximately 69% of the variance in waste generation for similar construction projects. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
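    A compact sketch of this kind of multiple-regression quantification, with the adjusted R(2) read directly off the fit; the predictor names below are invented placeholders, not the study's actual design and production-system variables.

    ```python
    # Illustrative OLS fit with adjusted R^2 (statsmodels); data are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 18                                   # eighteen buildings, as in the study
    floor_area = rng.uniform(2e3, 2e4, n)    # hypothetical design variable
    prefab_ratio = rng.uniform(0, 1, n)      # hypothetical production-system variable
    waste = 0.05 * floor_area - 300 * prefab_ratio + rng.normal(0, 50, n)

    X = sm.add_constant(np.column_stack([floor_area, prefab_ratio]))
    fit = sm.OLS(waste, X).fit()
    print(fit.rsquared_adj)   # adjusted R^2: share of variance explained,
                              # penalised for the number of predictors
    ```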

  13. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    PubMed

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of the ECM or the shear stress distribution on cells, but less is known about predicting shear stress on individual fibers or fiber networks, despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. From the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. In terms of permeability, the present computational models matched well with previous models, which justifies the computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost unchanged. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
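    The permeability step described above follows Darcy's law, k = Q·mu·L/(A·dP); a minimal sketch with illustrative numbers (not values from the paper):

    ```python
    # Back-of-envelope Darcy's-law permeability estimate from a simulated
    # pressure drop across a fibre-network domain. SI units throughout.
    def darcy_permeability(flow_rate, viscosity, length, area, delta_p):
        """k = Q * mu * L / (A * dP), returned in m^2."""
        return flow_rate * viscosity * length / (area * delta_p)

    k = darcy_permeability(
        flow_rate=1e-9,   # m^3/s through the model inlet
        viscosity=1e-3,   # Pa.s (water-like interstitial fluid)
        length=100e-6,    # m, domain depth
        area=1e-8,        # m^2, inlet cross-section
        delta_p=10.0,     # Pa, pressure drop from the CFD solution
    )
    print(f"{k:.3e} m^2")
    ```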

  14. Use of the HR index to predict maximal oxygen uptake during different exercise protocols.

    PubMed

    Haller, Jeannie M; Fehling, Patricia C; Barr, David A; Storer, Thomas W; Cooper, Christopher B; Smith, Denise L

    2013-10-01

    This study examined the ability of the HRindex model to accurately predict maximal oxygen uptake (VO2max) across a variety of incremental exercise protocols. Ten men completed five incremental protocols to volitional exhaustion. Protocols included three treadmill (Bruce, UCLA running, Wellness Fitness Initiative [WFI]), one cycle, and one field (shuttle) test. The HRindex prediction equation (METs = 6 × HRindex - 5, where HRindex = HRmax/HRrest) was used to generate estimates of energy expenditure, which were converted to body mass-specific estimates of VO2max. Estimated VO2max was compared with measured VO2max. Across all protocols, the HRindex model significantly underestimated VO2max by 5.1 mL·kg(-1)·min(-1) (95% CI: -7.4, -2.7) and the standard error of the estimate (SEE) was 6.7 mL·kg(-1)·min(-1). Accuracy of the model was protocol-dependent, with VO2max significantly underestimated for the Bruce and WFI protocols but not the UCLA, Cycle, or Shuttle protocols. Although no significant differences in VO2max estimates were identified for these three protocols, predictive accuracy among them was not high, with root mean squared errors and SEEs ranging from 7.6 to 10.3 mL·kg(-1)·min(-1) and from 4.5 to 8.0 mL·kg(-1)·min(-1), respectively. Correlations between measured and predicted VO2max were between 0.27 and 0.53. Individual prediction errors indicated that prediction accuracy varied considerably within protocols and among participants. In conclusion, across various protocols the HRindex model significantly underestimated VO2max in a group of aerobically fit young men. Estimates generated using the model did not differ from measured VO2max for three of the five protocols studied; nevertheless, some individual prediction errors were large. The lack of precision among estimates may limit the utility of the HRindex model; however, further investigation to establish the model's predictive accuracy is warranted.
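    The HRindex equation quoted above converts to a body-mass-specific VO2max estimate in two steps; the sketch below assumes the conventional 3.5 mL·kg(-1)·min(-1) per MET, which the abstract does not state explicitly.

    ```python
    # HRindex prediction equation: METs = 6 * HRindex - 5, HRindex = HRmax/HRrest.
    def vo2max_hrindex(hr_max, hr_rest, ml_per_kg_per_met=3.5):
        hr_index = hr_max / hr_rest
        mets = 6.0 * hr_index - 5.0
        return mets * ml_per_kg_per_met   # mL . kg^-1 . min^-1

    print(vo2max_hrindex(hr_max=190, hr_rest=60))  # 49.0 mL/kg/min for these inputs
    ```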

  15. Using automated texture features to determine the probability for masking of a tumor on mammography, but not ultrasound.

    PubMed

    Häberle, Lothar; Hack, Carolin C; Heusinger, Katharina; Wagner, Florian; Jud, Sebastian M; Uder, Michael; Beckmann, Matthias W; Schulz-Wendtland, Rüdiger; Wittenberg, Thomas; Fasching, Peter A

    2017-08-30

    Tumors in radiologically dense breasts are overlooked on mammograms more often than tumors in low-density breasts. A fast, reproducible and automated method of assessing percentage mammographic density (PMD) would be desirable to support decisions on whether ultrasonography should be provided for women in addition to mammography in diagnostic mammography units. PMD assessment has still not been included in clinical routine work, as there are issues of interobserver variability and the procedure is quite time-consuming. This study investigated whether fully automatically generated texture features of mammograms can replace time-consuming semi-automatic PMD assessment to predict a patient's risk of having an invasive breast tumor that is visible on ultrasound but masked on mammography (mammography failure). This observational study included 1334 women with invasive breast cancer treated at a hospital-based diagnostic mammography unit. Ultrasound was available for the entire cohort as part of routine diagnosis. Computer-based threshold PMD assessments ("observed PMD") were carried out and 363 texture features were obtained from each mammogram. Several variable selection and regression techniques (univariate selection, lasso, boosting, random forest) were applied to predict PMD from the texture features. The predicted PMD values were each used as a new predictor for masking in logistic regression models together with clinical predictors. These four logistic regression models with predicted PMD were compared among themselves and with a logistic regression model with observed PMD. The most accurate masking prediction was determined by cross-validation. About 120 of the 363 texture features were selected for predicting PMD. Density predictions with boosting were the best substitute for observed PMD to predict masking. Overall, the corresponding logistic regression model performed better (cross-validated AUC, 0.747) than one without mammographic density (0.734), but less well than the one with the observed PMD (0.753). However, in patients with an assigned mammography failure risk >10%, covering about half of all masked tumors, the boosting-based model performed at least as accurately as the original PMD model. Automatically generated texture features can replace semi-automatically determined PMD in a prediction model for mammography failure, such that more than 50% of masked tumors could be discovered.

  16. A Statistical Approach to Thermal Management of Data Centers Under Steady State and System Perturbations

    PubMed Central

    Haaland, Ben; Min, Wanli; Qian, Peter Z. G.; Amemiya, Yasuo

    2011-01-01

    Temperature control for a large data center is both important and expensive. On the one hand, many of the components produce a great deal of heat, and on the other hand, many of the components require temperatures below a fairly low threshold for reliable operation. A statistical framework is proposed within which the behavior of a large cooling system can be modeled and forecast under both steady state and perturbations. This framework is based upon an extension of multivariate Gaussian autoregressive hidden Markov models (HMMs). The estimated parameters of the fitted model provide useful summaries of the overall behavior of and relationships within the cooling system. Predictions under system perturbations are useful for assessing potential changes and improvements to be made to the system. Many data centers have far more cooling capacity than necessary under sensible circumstances, thus resulting in energy inefficiencies. Using this model, predictions for system behavior after a particular component of the cooling system is shut down or reduced in cooling power can be generated. Steady-state predictions are also useful for facility monitors. System traces outside control boundaries flag a change in behavior to examine. The proposed model is fit to data from a group of air conditioners within an enterprise data center from the IT industry. The fitted model is examined, and a particular unit is found to be underutilized. Predictions generated for the system under the removal of that unit appear very reasonable. Steady-state system behavior also is predicted well. PMID:22076026
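    A plain Gaussian HMM can serve as a simplified stand-in for the autoregressive extension described above. The sketch below (using the hmmlearn package; the data and two-regime setup are invented for illustration) fits such a model to synthetic multivariate temperature traces and reads off the per-state summaries:

    ```python
    # Simplified Gaussian HMM fit to fake return-air temperatures from 3 units.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(2)
    normal = rng.normal([18, 19, 18], 0.5, size=(300, 3))    # routine regime
    stressed = rng.normal([22, 24, 21], 0.8, size=(100, 3))  # degraded cooling
    X = np.vstack([normal, stressed])

    hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=50)
    hmm.fit(X)
    states = hmm.predict(X)   # hidden regime per time step
    print(hmm.means_)         # per-state mean temperatures: the kind of fitted
                              # summary of system behaviour the paper describes
    ```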

  17. Generation separation in simple structured life cycles: models and 48 years of field data on a tea tortrix moth.

    PubMed

    Yamanaka, Takehiko; Nelson, William A; Uchimura, Koichiro; Bjørnstad, Ottar N

    2012-01-01

    Population cycles have fascinated ecologists since the early nineteenth century, and the dynamics of insect populations have been central to understanding the intrinsic and extrinsic biological processes responsible for these cycles. We analyzed an extraordinary long-term data set (every 5 days for 48 years) of a tea tortrix moth (Adoxophyes honmai) that exhibits two dominant cycles: an annual cycle with a conspicuous pattern of four or five single-generation cycles superimposed on it. General theory offers several candidate mechanisms for generation cycles. To evaluate these, we construct and parameterize a series of temperature-dependent, stage-structured models that include intraspecific competition, parasitism, mate-finding Allee effects, and adult senescence, all in the context of a seasonal environment. By comparing the observed dynamics with predictions from the models, we find that even weak larval competition in the presence of seasonal temperature forcing predicts the two cycles accurately. None of the other mechanisms predicts the dynamics. Detailed dissection of the results shows that a short reproductive life span and differential winter mortality among stages are the additional life-cycle characteristics that permit the sustained cycles. Our general modeling approach is applicable to a wide range of organisms with temperature-dependent life histories and is likely to prove particularly useful in temperate systems where insect pest outbreaks are both density and temperature dependent. © 2011 by The University of Chicago.

  18. Measurement and prediction of broadband noise from large horizontal axis wind turbine generators

    NASA Technical Reports Server (NTRS)

    Grosveld, F. W.; Shepherd, K. P.; Hubbard, H. H.

    1995-01-01

    A method is presented for predicting the broadband noise spectra of large wind turbine generators. It includes contributions from such noise sources as the inflow turbulence to the rotor, the interactions between the turbulent boundary layers on the blade surfaces with their trailing edges and the wake due to a blunt trailing edge. The method is partly empirical and is based on acoustic measurements of large wind turbines and airfoil models. Spectra are predicted for several large machines including the proposed MOD-5B. Measured data are presented for the MOD-2, the WTS-4, the MOD-OA, and the U.S. Windpower Inc. machines. Good agreement is shown between the predicted and measured far field noise spectra.

  19. Steady and Transient Performance Prediction of Gas Turbine Engines Held in Cambridge, Massachusetts on 27-28 May 1992; in Neubiberg, Germany on 9-10 June 1992; and in Chatillon/Bagneux, France on 11-12 June 1992 (Prediction des Performances des Moteurs a Turbine a Gaz en Regimes Etabli et Transitoire)

    DTIC Science & Technology

    1992-05-01

    the basis of gas generator speed implies both reduction in centrifugal stress and turbine inlet temperature. Calculations yield the values of all...and Transient Performance Calculation Method for Prediction, Analysis and Identification by J.-P. Duponchel, J. Loisy and R. Carrillo Component...thrust changes without over-temperature or flame-out. Comprehensive mathematical models of the complete power plant (intake - gas generator - exhaust) plus

  20. The Use of a Block Diagram Simulation Language for Rapid Model Prototyping

    NASA Technical Reports Server (NTRS)

    Whitlow, Johnathan E.; Engrand, Peter

    1996-01-01

    The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.

  1. Thermal barrier coating life prediction model development, phase 2

    NASA Technical Reports Server (NTRS)

    Meier, Susan Manning; Sheffler, Keith D.; Nissley, David M.

    1991-01-01

    The objective of this program was to generate a life prediction model for electron-beam-physical vapor deposited (EB-PVD) zirconia thermal barrier coating (TBC) on gas turbine engine components. Specific activities involved in development of the EB-PVD life prediction model included measurement of EB-PVD ceramic physical and mechanical properties and adherence strength, measurement of the thermally grown oxide (TGO) growth kinetics, generation of quantitative cyclic thermal spallation life data, and development of a spallation life prediction model. Life data useful for model development was obtained by exposing instrumented, EB-PVD ceramic coated cylindrical specimens in a jet fueled burner rig. Monotonic compression and tensile mechanical tests and physical property tests were conducted to obtain the EB-PVD ceramic behavior required for burner rig specimen analysis. As part of that effort, a nonlinear constitutive model was developed for the EB-PVD ceramic. Spallation failure of the EB-PVD TBC system consistently occurred at the TGO-metal interface. Calculated out-of-plane stresses were a small fraction of that required to statically fail the TGO. Thus, EB-PVD spallation was attributed to the interfacial cracking caused by in-plane TGO strains. Since TGO mechanical properties were not measured in this program, calculation of the burner rig specimen TGO in-plane strains was performed by using alumina properties. A life model based on maximum in-plane TGO tensile mechanical strain and TGO thickness correlated the burner rig specimen EB-PVD ceramic spallation lives within a factor of about plus or minus 2X.

  2. Predictive Modeling of Estrogen Receptor Binding Agents Using Advanced Cheminformatics Tools and Massive Public Data.

    PubMed

    Ribay, Kathryn; Kim, Marlene T; Wang, Wenyi; Pinolini, Daniel; Zhu, Hao

    2016-03-01

    Estrogen receptors (ERα) are a critical target for drug design as well as a potential source of toxicity when activated unintentionally. Thus, evaluating potential ERα binding agents is critical in both drug discovery and chemical toxicity areas. Computational tools, e.g., Quantitative Structure-Activity Relationship (QSAR) models, can predict potential ERα binding agents before chemical synthesis. The purpose of this project was to develop enhanced predictive models of ERα binding agents by utilizing advanced cheminformatics tools that can integrate publicly available bioassay data. The initial ERα binding agent data set, consisting of 446 binders and 8307 non-binders, was obtained from the Tox21 Challenge project organized by the NIH Chemical Genomics Center (NCGC). After removing the duplicates and inorganic compounds, this data set was used to create a training set (259 binders and 259 non-binders). This training set was used to develop QSAR models using chemical descriptors. The resulting models were then used to predict the binding activity of 264 external compounds, which became available to us after the models were developed. The cross-validation results for the training set [Correct Classification Rate (CCR) = 0.72] were much higher than the external predictivity for the unknown compounds (CCR = 0.59). To improve the conventional QSAR models, all compounds in the training set were used to search PubChem and generate a profile of their biological responses across thousands of bioassays. The most important bioassays were prioritized to generate a similarity index that was used to calculate the biosimilarity score between each pair of compounds. The nearest neighbors for each compound within the set were then identified and its ERα binding potential was predicted by its nearest neighbors in the training set. The hybrid model performance (CCR = 0.94 for cross validation; CCR = 0.68 for external prediction) showed significant improvement over the original QSAR models, particularly for the activity cliffs that induce prediction errors. The results of this study indicate that the response profile of chemicals from public data provides useful information for modeling and evaluation purposes. Public big data resources should be considered along with chemical structure information when predicting new compounds, such as unknown ERα binding agents.
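    A schematic version of the hybrid step described above: binary bioassay response profiles are compared with a Jaccard-type similarity, and a compound's binding call is taken from its most biosimilar training neighbours. The profiles below are random placeholders for the PubChem-derived data, and the similarity measure is an assumption for illustration.

    ```python
    # Nearest-neighbour prediction from binary bioassay response profiles.
    import numpy as np

    rng = np.random.default_rng(3)
    train_profiles = rng.integers(0, 2, size=(518, 40))  # 259 binders + 259 non-binders
    train_labels = np.array([1] * 259 + [0] * 259)
    query = rng.integers(0, 2, size=40)                  # profile of a new compound

    def jaccard(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    sims = np.array([jaccard(query, p) for p in train_profiles])
    nearest = np.argsort(sims)[-5:]                      # 5 most biosimilar compounds
    call = "binder" if train_labels[nearest].mean() > 0.5 else "non-binder"
    print("predicted", call)
    ```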

  3. A Channel Network Evolution Model with Subsurface Saturation Mechanism and Analysis of the Chaotic Behavior of the Model

    DTIC Science & Technology

    1990-09-01

    between basin shapes and hydrologic responses is fundamental for the purpose of hydrologic predictions, especially in ungaged basins. Another goal is...47] studied this model and showed analytically how very small differences in the c field generated completely different leaf vein network structures... predictability impossible. Complexity is by no means a requirement in order for a system to exhibit SIC. A system as simple as the logistic equation x_{n+1} = a x_n (1 - x_n)
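    The truncated final sentence refers to the canonical logistic map, x_{n+1} = a·x_n(1 - x_n), the standard minimal example of sensitive dependence on initial conditions (SIC); a short demonstration:

    ```python
    # Two logistic-map trajectories started 1e-9 apart diverge to order one
    # within a few dozen iterations at a = 4, illustrating SIC.
    a = 4.0
    x, y = 0.400000000, 0.400000001
    for n in range(60):
        x, y = a * x * (1 - x), a * y * (1 - y)
        if n % 10 == 9:
            print(n + 1, abs(x - y))   # gap grows roughly exponentially
    ```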

  4. Projected Applications of a "Weather in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Molthan, Andrew; Zavodsky, Bradley T.; Case, Jonathan L.; LaFontaine, Frank J.; Srikishen, Jayanthi

    2010-01-01

    The NASA Short-term Prediction Research and Transition Center (SPoRT)'s new "Weather in a Box" resources will provide weather research and forecast modeling capabilities for real-time application. Model output will provide additional forecast guidance and research into the impacts of new NASA satellite data sets and software capabilities. By combining several research tools and satellite products, SPoRT can generate model guidance that is strongly influenced by unique NASA contributions.

  5. Generation of a Combined Dataset of Simulated Radar and Electro-Optical Imagery

    DTIC Science & Technology

    2005-10-05

    bidirectional reflectance distribution function (BRDF) predictions and the geometry of a line scanner. Using programs such as MODTRAN and FASCODE, images can be...DIRSIG tries to accurately model scenes through various approaches that model real-world occurrences. MODTRAN is an atmospheric radiative transfer code...used to predict path transmissions and radiances within the atmosphere (DIRSIG Manual, 2004). FASCODE is similar to MODTRAN; however, it works as a

  6. Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach

    DTIC Science & Technology

    2013-09-30

    statistically extratropical storms and extremes, and link these to LFV modes. Mingfang Ting, Yochanan Kushnir, Andrew W. Robertson, Lei Wang...forecast models, as well as in the understanding they have generated. Adam Sobel, Daehyun Kim and Shuguang Wang. Extratropical variability and...predictability. Determine the extent to which extratropical monthly and seasonal low-frequency variability (LFV, i.e. PNA, NAO, as well as other regional

  7. util_2comp: Planck-based two-component dust model utilities

    NASA Astrophysics Data System (ADS)

    Meisner, Aaron

    2014-11-01

    The util_2comp software utilities generate predictions of far-infrared Galactic dust emission and reddening based on a two-component dust emission model fit to Planck HFI, DIRBE and IRAS data from 100 GHz to 3000 GHz. These predictions and the associated dust temperature map have angular resolution of 6.1 arcminutes and are available over the entire sky. Implementations in IDL and Python are included.

  8. A model for phase noise generation in amplifiers.

    PubMed

    Tomlin, T D; Fynn, K; Cantoni, A

    2001-11-01

    In this paper, a model is presented for predicting the phase modulation (PM) and amplitude modulation (AM) noise in bipolar junction transistor (BJT) amplifiers. The model correctly predicts the dependence of phase noise on the signal frequency (at a particular carrier offset frequency), explains the noise shaping of the phase noise about the signal frequency, and shows the functional dependence on the transistor parameters and the circuit parameters. Experimental studies on common emitter (CE) amplifiers have been used to validate the PM noise model at carrier frequencies between 10 and 100 MHz.

  9. Physics beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2011-04-01

    Recent discoveries of the excited states of the Bs** meson, along with the discovery of the omega-b-minus, have brought into popular acceptance the concept of the orbiting quarks predicted by the Checker Board Model (CBM) 14 years ago. Back then the concept of orbiting quarks was not fashionable. Recent estimates of the velocities of these quarks inside the proton and neutron are in excess of 90% of the speed of light, also in agreement with the CBM. Still, a 2D structure of the nucleus has neither been accepted nor proven wrong. The CBM predicts that the masses of the up and dn quarks are 237.31 MeV and 42.392 MeV, respectively, and suggests that a lighter generation of quarks, u and d, makes up the light mesons. The CBM also predicts that the T' and B' quarks exist and are not as massive as might be expected (this would imply a fifth generation, in conflict with the SM). The details of the CB model and its prediction of quark masses can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, http://xxx.lanl.gov/.

  10. Determination of quantitative retention-activity relationships between pharmacokinetic parameters and biological effectiveness fingerprints of Salvia miltiorrhiza constituents using biopartitioning and microemulsion high-performance liquid chromatography.

    PubMed

    Gao, Haoshi; Huang, Hongzhang; Zheng, Aini; Yu, Nuojun; Li, Ning

    2017-11-01

    In this study, we analyzed danshen (Salvia miltiorrhiza) constituents using biopartitioning and microemulsion high-performance liquid chromatography (MELC). The quantitative retention-activity relationships (QRARs) of the constituents were established to model their pharmacokinetic (PK) parameters and chromatographic retention data, and to generate their biological effectiveness fingerprints. A high-performance liquid chromatography (HPLC) method was established to determine the abundance of the extracted danshen constituents, such as sodium danshensu, rosmarinic acid, salvianolic acid B, protocatechuic aldehyde, cryptotanshinone, and tanshinone IIA. Another HPLC protocol was established to determine the abundance of those constituents in rat plasma samples. An experimental model was built in Sprague Dawley (SD) rats, and the corresponding PK parameters were calculated with the 3P97 software package. Thirty-five model drugs were selected to test the PK parameter prediction capacities of the various MELC systems and to optimize the chromatographic protocols. QRARs were established and PK fingerprints were generated. The test included water- and oil-soluble danshen constituents, and the prediction capacity of the regression model was validated. The results showed that the model had good predictability. Copyright © 2017. Published by Elsevier B.V.

  11. Mechanical model of orthopaedic drilling for augmented-haptics-based training.

    PubMed

    Pourkand, Ashkan; Zamani, Naghmeh; Grow, David

    2017-10-01

    In this study, augmented-haptic feedback is used to combine a physical object with virtual elements in order to simulate anatomic variability in bone. This requires generating levels of force/torque consistent with clinical bone drilling, which exceed the capabilities of commercially available haptic devices. Accurate total force generation is facilitated by a predictive model of axial force during simulated orthopaedic drilling. This model is informed by kinematic data collected while drilling into synthetic bone samples using an instrumented linkage attached to the orthopaedic drill. Axial force is measured using a force sensor incorporated into the bone fixture. A nonlinear function, relating force to axial position and velocity, was used to fit the data. The normalized root-mean-square error (RMSE) of forces predicted by the model compared to those measured experimentally was 0.11 N across various bones with significant differences in geometry and density. This suggests that a predictive model can be used to capture relevant variations in the thickness and hardness of cortical and cancellous bone. The practical performance of this approach is measured using the Phantom Premium haptic device, with some required customizations. Copyright © 2017 Elsevier Ltd. All rights reserved.
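    The abstract does not give the nonlinear force function, so the sketch below assumes a plausible form F(x, v) purely for illustration and fits it with scipy's curve_fit; one common normalization of the RMSE (by range) is shown.

    ```python
    # Fit an assumed nonlinear axial-force model F(x, v) to synthetic drilling data.
    import numpy as np
    from scipy.optimize import curve_fit

    def force_model(X, c0, c1, c2):
        x, v = X                               # axial position (mm), feed velocity (mm/s)
        return c0 * np.exp(-c1 * x) * v + c2   # assumed depth-dependent cutting term

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 20, 200)
    v = rng.uniform(0.5, 3.0, 200)
    F = force_model((x, v), 8.0, 0.1, 2.0) + rng.normal(0, 0.3, 200)  # fake measurements

    params, _ = curve_fit(force_model, (x, v), F, p0=[1.0, 0.05, 1.0])
    pred = force_model((x, v), *params)
    nrmse = np.sqrt(np.mean((pred - F) ** 2)) / (F.max() - F.min())
    print(params, nrmse)
    ```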

  12. Ensuring long-term utility of the AOP framework and knowledge for multiple stakeholders

    EPA Science Inventory

    1.Introduction There is a need to increase the development and implementation of predictive approaches to support chemical safety assessment. These predictive approaches feature generation of data from tools such as computational models, pathway-based in vitro assays, and short-t...

  13. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  14. SO(10) supersymmetric grand unified theories

    NASA Astrophysics Data System (ADS)

    Dermisek, Radovan

    The origin of the fermion mass hierarchy is one of the most challenging problems in elementary particle physics. In the standard model fermion masses and mixing angles are free parameters. Supersymmetric grand unified theories provide a beautiful framework for physics beyond the standard model. In addition to gauge coupling unification these theories provide relations between quark and lepton masses within families, and with additional family symmetry the hierarchy between families can be generated. We present a predictive SO(10) supersymmetric grand unified model with D_3 x U(1) family symmetry. The hierarchy in fermion masses is generated by the family symmetry breaking D_3 x U(1) → Z_N → nothing. This model fits the low energy data in the charged fermion sector quite well. We discuss the prediction of this model for the proton lifetime in light of recent SuperKamiokande results and present a clear picture of the allowed spectra of supersymmetric particles. Finally, a detailed discussion of the Yukawa coupling unification of the third generation particles is provided. We find a narrow region is consistent with t, b, tau Yukawa unification for mu > 0 (suggested by b → s gamma and the anomalous magnetic moment of the muon) with A_0 ~ -1.9 m_16, m_10 ~ 1.4 m_16, m_16 ≳ 1200 GeV and mu, M_1/2 ~ 100-500 GeV. Demanding Yukawa unification thus makes definite predictions for Higgs and sparticle masses.

  15. CP Asymmetries in B0 Decays Beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Dib, Claudio O.; London, David; Nir, Yosef

    Of the many ingredients of the Standard Model that are relevant to the analysis of CP asymmetries in B0 decays, some are likely to hold even beyond the Standard Model while others are sensitive to new physics. Consequently, certain predictions are maintained while others may show dramatic deviations from the Standard Model. Many classes of models may show clear signatures when the asymmetries are measured: four quark generations, Z-mediated flavor-changing neutral currents, supersymmetry and “real superweak” models. On the other hand, models of left-right symmetry and multi-Higgs sectors with natural flavor conservation are unlikely to modify the Standard Model predictions.

  16. Analysis of plasmas generated by fission fragments. [nuclear pumped lasers and helium plasma

    NASA Technical Reports Server (NTRS)

    Deese, J. E.; Hassan, H. A.

    1977-01-01

    A kinetic model is developed for a plasma generated by fission fragments and the results are employed to study helium plasma generated in a tube coated with fissionable material. Because both the heavy particles and electrons play important roles in creating the plasma, their effects are considered simultaneously. The calculations are carried out for a range of neutron fluxes and pressures. In general, the predictions of the theory are in good agreement with available intensity measurements. Moreover, the theory predicts the experimentally measured inversions. However, the calculated gain coefficients are such that lasing is not expected to take place in a helium plasma generated by fission fragments. The effects of an externally applied electric field are also considered.

  17. Integrating urban recharge uncertainty into standard groundwater modeling practice: A case study on water main break predictions for the Barton Springs segment of the Edwards Aquifer, Austin, Texas

    NASA Astrophysics Data System (ADS)

    Sinner, K.; Teasley, R. L.

    2016-12-01

    Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates desired outputs incorporating future uncertainty rather than relying on deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work eliciting problem framings to identify stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.

  18. Study on cavitation effect of mechanical seals with laser-textured porous surface

    NASA Astrophysics Data System (ADS)

    Liu, T.; Chen, H. l.; Liu, Y. H.; Wang, Q.; Liu, Z. B.; Hou, D. H.

    2012-11-01

    Understanding the mechanisms underlying the generation of the hydrodynamic pressure effect associated with a laser-textured porous surface on a mechanical seal is key to its sealing and lubrication properties. A theoretical model of mechanical seals with a laser-textured porous surface (LST-MS), based on a cavitation model, was established. The LST-MS was calculated and analyzed using the Fluent software with both a full cavitation model and a non-cavitation model, and the film thickness was predicted with the dynamic mesh technique. The results indicate that the hydrodynamic pressure and cavitation effects are important sources of the liquid-film opening force on the LST-MS, and that the cavitation effect can enhance the hydrodynamic pressure effect of the LST-MS. The liquid-film thickness is well predicted by the dynamic mesh technique in Fluent and increases with increasing shaft speed and decreasing pressure.

  19. Bulalo field, Philippines: Reservoir modeling for prediction of limits to sustainable generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strobel, Calvin J.

    1993-01-28

    The Bulalo geothermal field, located in Laguna province, Philippines, supplies 12% of the electricity on the island of Luzon. The first 110 MWe power plant was on line in May 1979; the current 330 MWe (gross) installed capacity was reached in 1984. Since then, the field has operated at an average plant factor of 76%. The National Power Corporation plans to add 40 MWe base load and 40 MWe standby in 1995. A numerical simulation model of the Bulalo field has been created that matches historic pressure changes, enthalpy and steam flash trends, and cumulative steam production. Gravity modeling provided independent verification of mass balances and the time rate of change of liquid desaturation in the rock matrix. Gravity modeling, in conjunction with reservoir simulation, provides a means of predicting matrix dry-out and the time to limiting conditions for sustainable levelized steam deliverability and power generation.

  20. The kinetics of thermal generation of flavour.

    PubMed

    Parker, Jane K

    2013-01-01

    Control and optimisation of flavour is the ultimate challenge for the food and flavour industry. The major route to flavour formation during thermal processing is the Maillard reaction, which is a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modelling in order to understand the control points of the reaction and to manipulate the flavour profile. Studies of the kinetics of flavour formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavour and looks at the challenges involved in predicting flavour. Copyright © 2012 Society of Chemical Industry.

  1. Development of model for prediction of Leachate Pollution Index (LPI) in absence of leachate parameters.

    PubMed

    Lothe, Anjali G; Sinha, Alok

    2017-05-01

    Leachate pollution index (LPI) is an environmental index which quantifies the pollution potential of leachate generated at a landfill site. Calculation of the LPI is based on the concentrations of 18 parameters present in the leachate. However, when not all 18 parameters are available, evaluation of the actual LPI value becomes difficult. In this study, a model has been developed to predict the actual value of the LPI when only some of the parameters are available. The model comprises eleven equations that determine upper and lower limits for the LPI. The geometric mean of these two limits gives the LPI value. Application of this model to three landfill sites yields LPI values with an error of ±20% for Σ_i w_i ≥ 0.6. Copyright © 2016 Elsevier Ltd. All rights reserved.
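    The final combination step described above reduces to a one-line computation; the two bound values below are hypothetical placeholders.

    ```python
    # Merge the model's upper and lower LPI limits via their geometric mean.
    from math import sqrt

    def lpi_from_bounds(lpi_lower, lpi_upper):
        return sqrt(lpi_lower * lpi_upper)

    print(lpi_from_bounds(18.4, 27.9))  # hypothetical bounds -> single LPI value
    ```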

  2. Developing stochastic model of thrust and flight dynamics for small UAVs

    NASA Astrophysics Data System (ADS)

    Tjhai, Chandra

    This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed which relates the thrust generated by a small propeller-driven electric motor to throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors arising from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients developed from the wind tunnel experiments is discussed at the end of the thesis.

  3. A comparison of life prediction methodologies for titanium matrix composites subjected to thermomechanical fatigue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calcaterra, J.R.; Johnson, W.S.; Neu, R.W.

    1997-12-31

    Several methodologies have been developed to predict the lives of titanium matrix composites (TMCs) subjected to thermomechanical fatigue (TMF). This paper reviews and compares five life prediction models developed at NASA-LaRC and Wright Laboratories. Three of the models are based on a single parameter, the fiber stress in the load-carrying, or 0°, direction. The two other models, both developed at Wright Labs, are multi-parameter models. These can account for long-term damage, which is beyond the scope of the single-parameter models, but this benefit is offset by the additional complexity of the methodologies. Each of the methodologies was used to model data generated at NASA-LeRC, Wright Labs, and Georgia Tech for the SCS-6/Timetal 21-S material system. VISCOPLY, a micromechanical stress analysis code, was used to determine the constituent stress state for each test and was used for each model to maintain consistency. The predictive capabilities of the models are compared, and the ability of each model to accurately predict the responses of tests dominated by differing damage mechanisms is addressed.

  4. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations with performance (with 8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657

  5. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations with performance (with 8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications.

  6. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    PubMed

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonic-induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models which help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W(0)/V ≥ 25 kW m(-3). This model successfully describes hydrodynamic fields (streaming) generated by low-frequency-high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  7. Using Neural Networks to Generate Inferential Roles for Natural Language

    PubMed Central

    Blouw, Peter; Eliasmith, Chris

    2018-01-01

    Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition. PMID:29387031

  8. Generation of 3-D hydrostratigraphic zones from dense airborne electromagnetic data to assess groundwater model prediction error

    USGS Publications Warehouse

    Christensen, Nikolaj K; Minsley, Burke J.; Christensen, Steen

    2017-01-01

    We present a new methodology to combine spatially dense high-resolution airborne electromagnetic (AEM) data and sparse borehole information to construct multiple plausible geological structures using a stochastic approach. The method developed allows for quantification of the performance of groundwater models built from different geological realizations of structure. Multiple structural realizations are generated using geostatistical Monte Carlo simulations that treat sparse borehole lithological observations as hard data and dense geophysically derived structural probabilities as soft data. Each structural model is used to define 3-D hydrostratigraphical zones of a groundwater model, and the hydraulic parameter values of the zones are estimated by using nonlinear regression to fit hydrological data (hydraulic head and river discharge measurements). Use of the methodology is demonstrated for a synthetic domain having structures of categorical deposits consisting of sand, silt, or clay. It is shown that using dense AEM data with the methodology can significantly improve the estimated accuracy of the sediment distribution as compared to when borehole data are used alone. It is also shown that this use of AEM data can improve the predictive capability of a calibrated groundwater model that uses the geological structures as zones. However, such structural models will always contain errors because even with dense AEM data it is not possible to perfectly resolve the structures of a groundwater system. It is shown that when using such erroneous structures in a groundwater model, they can lead to biased parameter estimates and biased model predictions, therefore impairing the model's predictive capability.

  9. Generation of 3-D hydrostratigraphic zones from dense airborne electromagnetic data to assess groundwater model prediction error

    NASA Astrophysics Data System (ADS)

    Christensen, N. K.; Minsley, B. J.; Christensen, S.

    2017-02-01

    We present a new methodology to combine spatially dense high-resolution airborne electromagnetic (AEM) data and sparse borehole information to construct multiple plausible geological structures using a stochastic approach. The method developed allows for quantification of the performance of groundwater models built from different geological realizations of structure. Multiple structural realizations are generated using geostatistical Monte Carlo simulations that treat sparse borehole lithological observations as hard data and dense geophysically derived structural probabilities as soft data. Each structural model is used to define 3-D hydrostratigraphical zones of a groundwater model, and the hydraulic parameter values of the zones are estimated by using nonlinear regression to fit hydrological data (hydraulic head and river discharge measurements). Use of the methodology is demonstrated for a synthetic domain having structures of categorical deposits consisting of sand, silt, or clay. It is shown that using dense AEM data with the methodology can significantly improve the estimated accuracy of the sediment distribution as compared to when borehole data are used alone. It is also shown that this use of AEM data can improve the predictive capability of a calibrated groundwater model that uses the geological structures as zones. However, such structural models will always contain errors because even with dense AEM data it is not possible to perfectly resolve the structures of a groundwater system. It is shown that when using such erroneous structures in a groundwater model, they can lead to biased parameter estimates and biased model predictions, therefore impairing the model's predictive capability.

  10. Prediction of total organic carbon content in shale reservoir based on a new integrated hybrid neural network and conventional well logging curves

    NASA Astrophysics Data System (ADS)

    Zhu, Linqi; Zhang, Chong; Zhang, Chaomo; Wei, Yang; Zhou, Xueqing; Cheng, Yuan; Huang, Yuyang; Zhang, Le

    2018-06-01

    There is increasing interest in shale gas reservoirs due to their abundant reserves. As a key evaluation criterion, the total organic carbon content (TOC) of a reservoir reflects its hydrocarbon generation potential. Existing TOC calculation models are not very accurate, and there is still room for improvement. In this paper, an integrated hybrid neural network (IHNN) model is proposed for predicting the TOC. This is based on the fact that TOC information for low-TOC reservoirs, where the TOC is easy to evaluate, comes from a prediction problem, which is the inherent problem of the existing algorithms. A comparison of the prediction models established on 132 rock samples from the shale gas reservoir in the Jiaoshiba area shows that the accuracy of the proposed IHNN model is much higher than that of the other prediction models. The mean square error on samples withheld from model building was reduced from 0.586 to 0.442. The results show that TOC prediction becomes easier once the logging-based prediction has been improved. Furthermore, this paper puts forward the next research direction for the prediction model. The IHNN algorithm can help evaluate the TOC of a shale gas reservoir.

  11. DFT and 3D-QSAR Studies of Anti-Cancer Agents m-(4-Morpholinoquinazolin-2-yl) Benzamide Derivatives for Novel Compounds Design

    NASA Astrophysics Data System (ADS)

    Zhao, Siqi; Zhang, Guanglong; Xia, Shuwei; Yu, Liangmin

    2018-06-01

    As a group of diversified frameworks, quinazolin derivatives display a broad range of biological functions, especially as anticancer agents. To investigate the quantitative structure-activity relationship, 3D-QSAR models were generated with 24 quinazolin scaffold molecules. The experimental and predicted pIC50 values for both training and test set compounds showed good correlation, which proved the robustness and reliability of the generated QSAR models. The most effective CoMFA and CoMSIA models were obtained with non-cross-validated correlation coefficients (r²ncv) of 1.00 (both) and leave-one-out coefficients (q²) of 0.61 and 0.59, respectively. The predictive abilities of CoMFA and CoMSIA were quite good, with predictive correlation coefficients (r²pred) of 0.97 and 0.91. In addition, the statistical results of CoMFA and CoMSIA were used to design new quinazolin molecules.
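
    The leave-one-out q² statistic quoted above is defined as q² = 1 - PRESS/SS, where PRESS sums the squared leave-one-out prediction errors and SS sums the squared deviations from the mean activity. A minimal sketch, with a PLS regression standing in for the CoMFA/CoMSIA field analysis and simulated descriptors:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(2)
    X = rng.normal(size=(24, 50))                 # 24 molecules, field columns
    y = X[:, :3] @ np.array([1.0, .5, -.7]) + rng.normal(0, .5, 24)  # "pIC50"

    press = 0.0
    for tr, te in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=3).fit(X[tr], y[tr])
        press += float((y[te] - pls.predict(X[te]).ravel()) ** 2)
    q2 = 1 - press / ((y - y.mean()) ** 2).sum()
    print(f"q² = {q2:.2f}")
    ```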

  12. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the standard RFC model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The standard RFC model's performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
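
    A hedged sketch of the validation loop described above: random forest classification under repeated cross-validation, scored by AUC and by the calibration slope (here computed, as is common, as the coefficient from regressing outcomes on the logit of the predicted probabilities). The data are simulated stand-ins for the dose metrics.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedShuffleSplit
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 6))                      # dose-volume metrics
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

    aucs, slopes = [], []
    for tr, te in StratifiedShuffleSplit(n_splits=100, test_size=0.3,
                                         random_state=0).split(X, y):
        rfc = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
        p = rfc.predict_proba(X[te])[:, 1]
        p = np.clip(p, 1e-3, 1 - 1e-3)                 # keep logits finite
        logit = np.log(p / (1 - p)).reshape(-1, 1)
        slopes.append(LogisticRegression().fit(logit, y[te]).coef_[0, 0])
        aucs.append(roc_auc_score(y[te], p))
    print(f"AUC {np.mean(aucs):.2f} (sd {np.std(aucs):.2f}), "
          f"calibration slope {np.mean(slopes):.2f}")
    ```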

  13. Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diakov, Victor; Cole, Wesley; Sullivan, Patrick

    2015-11-01

    Capacity expansion models (CEM) provide a high-level, long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on short-term (daily, hourly and sub-hourly) scales. Production cost models (PCM) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data, minimizing production costs while following reliability requirements. When based on CEM 'predictions' about generating unit retirements and buildup, PCM provide more detailed simulation of short-term system operation and, consequently, may confirm the validity of capacity expansion predictions. Further, production cost model simulations of a system based on a capacity expansion model solution are 'evolutionarily' sound: the generator mix is the result of a logical sequence of unit retirements and buildup resulting from policy and incentives. The above has motivated us to bridge CEM with PCM by building a capacity-expansion-to-production-cost-model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and the production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.

  14. Seasonal drought ensemble predictions based on multiple climate models in the upper Han River Basin, China

    NASA Astrophysics Data System (ADS)

    Ma, Feng; Ye, Aizhong; Duan, Qingyun

    2017-03-01

    An experimental seasonal drought forecasting system is developed based on 29-year (1982-2010) seasonal meteorological hindcasts generated by the climate models from the North American Multi-Model Ensemble (NMME) project. This system made use of a bias correction and spatial downscaling method, and a distributed time-variant gain model (DTVGM) hydrologic model. DTVGM was calibrated using observed daily hydrological data and its streamflow simulations achieved Nash-Sutcliffe efficiency values of 0.727 and 0.724 during calibration (1978-1995) and validation (1996-2005) periods, respectively, at the Danjiangkou reservoir station. The experimental seasonal drought forecasting system (known as NMME-DTVGM) is used to generate seasonal drought forecasts. The forecasts were evaluated against the reference forecasts (i.e., persistence forecast and climatological forecast). The NMME-DTVGM drought forecasts have higher detectability and accuracy and lower false alarm rate than the reference forecasts at different lead times (from 1 to 4 months) during the cold-dry season. No apparent advantage is shown in drought predictions during spring and summer seasons because of a long memory of the initial conditions in spring and a lower predictive skill for precipitation in summer. Overall, the NMME-based seasonal drought forecasting system has meaningful skill in predicting drought several months in advance, which can provide critical information for drought preparedness and response planning as well as the sustainable practice of water resource conservation over the basin.
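
    The Nash-Sutcliffe efficiency used above to judge the DTVGM streamflow simulations is NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², with 1 indicating a perfect fit; a minimal implementation:

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency of simulated vs. observed streamflow."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    print(nse([3.0, 4.5, 6.1, 5.0], [2.8, 4.9, 5.7, 5.2]))  # ~0.92: a good fit
    ```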

  15. Predicting the potential distribution of the amphibian pathogen Batrachochytrium dendrobatidis in East and Southeast Asia.

    PubMed

    Moriguchi, Sachiko; Tominaga, Atsushi; Irwin, Kelly J; Freake, Michael J; Suzuki, Kazutaka; Goka, Koichi

    2015-04-08

    Batrachochytrium dendrobatidis (Bd) is the pathogen responsible for chytridiomycosis, a disease that is associated with a worldwide amphibian population decline. In this study, we predicted the potential distribution of Bd in East and Southeast Asia based on limited occurrence data. Our goal was to design an effective survey area where efforts to detect the pathogen can be focused. We generated ecological niche models using the maximum-entropy approach, with alleviation of multicollinearity and spatial autocorrelation. We applied eigenvector-based spatial filters as independent variables, in addition to environmental variables, to resolve spatial autocorrelation, and compared the model's accuracy and the degree of spatial autocorrelation with those of a model estimated using only environmental variables. We were able to identify areas of high suitability for Bd with accuracy. Among the environmental variables, factors related to temperature and precipitation were more effective in predicting the potential distribution of Bd than factors related to land use and cover type. Our study successfully predicted the potential distribution of Bd in East and Southeast Asia. This information should now be used to prioritize survey areas and generate a surveillance program to detect the pathogen.

  16. Evaluating alternative gait strategies using evolutionary robotics.

    PubMed

    Sellers, William I; Dennis, Louise A; Wang, W-J; Crompton, Robin H

    2004-05-01

    Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customized to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated and this information used to evaluate possible locomotor strategies in early hominids.

  17. Indirect field technology for detecting areas object of illegal spills harmful to human health: application of drones, photogrammetry and hydrological models.

    PubMed

    Capolupo, Alessandra; Pindozzi, Stefania; Okello, Collins; Boccia, Lorenzo

    2014-12-01

    The accumulation of heavy metals in agricultural soils is a serious environmental problem. The Campania region in southern Italy has higher levels of cancer risk, presumably due to the accumulation of geogenic and anthropogenic soil pollutants, some of which have been incorporated into organic matter. The aim of this study was to introduce and test an innovative, field-applicable methodology to detect heavy metal accumulation using drone-based photogrammetry and micro-rill network modelling, specifically to generate wetland prediction indices normally applied at large catchment scales, such as a large geographic basin. The processing of aerial photos taken using a hexacopter equipped with fifth-generation photogrammetry software allowed the generation of a digital elevation model (DEM) with a resolution as high as 30 mm. Not only did this provide a high potential for the study of micro-rill processes, but it was also useful for testing and comparing the capability of the topographic index (TI) and the clima-topographic index (CTI) to predict heavy metal sedimentation points at scales from 0.1 to 10 ha. Our results indicate that the TI and CTI indices can be used to predict points of heavy metal accumulation for small field catchments.
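
    The topographic index referenced above is commonly computed as TI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope taken from the DEM. A minimal sketch, assuming flow accumulation has already been computed by a separate routing routine (e.g., D8):

    ```python
    import numpy as np

    def topographic_index(dem, upslope_area, cell_size):
        """dem: 2-D elevation array [m]; upslope_area: contributing area [m^2]
        per cell (from a flow-routing step not shown); cell_size in metres."""
        dzdy, dzdx = np.gradient(dem, cell_size)
        slope = np.hypot(dzdx, dzdy)                 # tan(beta)
        slope = np.maximum(slope, 1e-6)              # avoid division by zero
        a = upslope_area / cell_size                 # specific catchment area
        return np.log(a / slope)
    ```

    High-TI cells mark flat, convergent locations where water, and sediment-bound metals, would tend to accumulate.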

  18. Evaluating alternative gait strategies using evolutionary robotics

    PubMed Central

    Sellers, William I; Dennis, Louise A; Wang, W-J; Crompton, Robin H

    2004-01-01

    Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customized to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated and this information used to evaluate possible locomotor strategies in early hominids. PMID:15198699

  19. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  20. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes

    PubMed Central

    Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-01-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes that are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models using system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step-change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, allowing the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647–1661, 2017 PMID:28786215
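
    A hedged illustration of the system identification step in the spirit of the abstract: fit a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] by least squares to serialized step-change data (u standing in for galactose feed, y for %galactosylation). The process parameters are invented for the sketch; the paper's actual model structure may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, a_true, b_true = 200, 0.9, 0.05
    u = np.repeat([0.0, 1.0, 0.5, 1.5], n // 4)       # serialized step changes
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = a_true * y[k-1] + b_true * u[k-1] + rng.normal(0, 0.005)

    # Least-squares ARX fit: regress y[k] on (y[k-1], u[k-1]).
    Phi = np.column_stack([y[:-1], u[:-1]])
    a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")  # ~0.9 and ~0.05
    ```

    The identified (a, b) pair is exactly what a model predictive controller needs to forecast the CQA response to candidate feed moves.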

  1. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
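
    A schematic of the EPPES idea as described: each ensemble member runs with parameters drawn from a proposal distribution, and the proposal is updated from the likelihood-weighted sample. The "model" and likelihood below are toy stand-ins, not an NWP system.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    theta_true = np.array([1.5, -0.5])
    mu, cov = np.zeros(2), np.eye(2)          # initial proposal distribution

    def forecast_error(theta):                # toy surrogate for verification
        return np.sum((theta - theta_true) ** 2)

    for cycle in range(50):                   # successive assimilation windows
        ens = rng.multivariate_normal(mu, cov, size=30)   # one draw per member
        w = np.exp(-0.5 * np.array([forecast_error(t) for t in ens]))
        w /= w.sum()                          # likelihood weights vs. obs
        mu = w @ ens                          # importance-weighted mean
        diff = ens - mu
        cov = (w[:, None] * diff).T @ diff + 1e-3 * np.eye(2)
    print("estimated parameters:", mu)        # drifts toward theta_true
    ```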

  2. Study of cavitating inducer instabilities

    NASA Technical Reports Server (NTRS)

    Young, W. E.; Murphy, R.; Reddecliff, J. M.

    1972-01-01

    An analytic and experimental investigation into the causes and mechanisms of cavitating inducer instabilities was conducted. Hydrofoil cascade tests were performed, during which cavity sizes were measured. The measured data were used, along with inducer data and potential flow predictions, to refine an analysis for the prediction of inducer blade suction surface cavitation cavity volume. Cavity volume predictions were incorporated into a linearized system model, and instability predictions for an inducer water test loop were generated. Inducer tests were conducted and instability predictions correlated favorably with measured instability data.

  3. Development of a Low-Reynolds Number, Nonlinear kappa-epsilon Model for the Reduced Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Boger, David A.; Govindan, T. R.; McDonald, Henry

    1997-01-01

    Previous work at NASA LeRC has shown that flow distortions in aircraft engine inlet ducts can be significantly reduced by mounting vortex generators, or small wing sections, on the inside surface of the engine inlet. The placement of the vortex generators is an important factor in obtaining the optimal effect over a wide operating envelope. In this regard, the only alternative to a long and expensive test program which would search out this optimal configuration is a good prediction procedure which could narrow the field of search. Such a procedure has been developed in collaboration with NASA LeRC, and results obtained by NASA personnel indicate that it shows considerable promise for predicting the viscous turbulent flow in engine inlet ducts in the presence of vortex generators. The prediction tool is a computer code which numerically solves the reduced Navier-Stokes equations and so is commonly referred to as RNS3D. Obvious deficiencies in RNS3D have been addressed in previous work. Primarily, it is known that the predictions of the mean velocity field of a turbulent boundary layer flow approaching separation are not in good agreement with data. It was suggested that the use of an algebraic mixing-length turbulence model in RNS3D is at least partly to blame for this. Additionally, the current turbulence model includes an assumption of isotropy which will ultimately fail to capture turbulence-driven secondary flow known to exist in noncircular ducts.

  4. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability caused in the system.
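
    A minimal look-ahead dispatch in the spirit of the MPC described above: over a short horizon, choose diesel output and storage discharge to meet net load (load minus wind) at minimum fuel cost, subject to generator and state-of-charge limits. In closed-loop MPC only the first step would be applied before re-solving with updated forecasts. All numbers are made up; the paper's formulation also weighs equipment life, which is omitted here.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T = 6
    net = np.array([2.0, 3.5, 1.0, -0.5, 2.5, 4.0])   # MW, load minus wind
    cost, gmax, smax = 50.0, 5.0, 2.0                 # $/MWh, MW limits
    cap, soc0 = 4.0, 2.0                              # MWh storage, initial SOC

    # Variables x = [g_1..g_T, s_1..s_T]: diesel output and storage discharge.
    c = np.concatenate([np.full(T, cost), np.zeros(T)])   # fuel cost on g only
    A_eq = np.hstack([np.eye(T), np.eye(T)])              # g_t + s_t = net_t
    L = np.tril(np.ones((T, T)))                          # cumulative discharge
    A_ub = np.vstack([np.hstack([np.zeros((T, T)), L]),   # SOC stays >= 0
                      np.hstack([np.zeros((T, T)), -L])]) # SOC stays <= cap
    b_ub = np.concatenate([np.full(T, soc0), np.full(T, cap - soc0)])
    bounds = [(0, gmax)] * T + [(-smax, smax)] * T        # s < 0 means charging

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=net, bounds=bounds)
    g, s = res.x[:T], res.x[T:]
    print("diesel:", g.round(2), "\nstorage:", s.round(2))
    ```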

  5. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability caused in the system.

  6. Rapid prediction of chemical metabolism by human UDP-glucuronosyltransferase isoforms using quantum chemical descriptors derived with the electronegativity equalization method.

    PubMed

    Sorich, Michael J; McKinnon, Ross A; Miners, John O; Winkler, David A; Smith, Paul A

    2004-10-07

    This study aimed to evaluate in silico models based on quantum chemical (QC) descriptors derived using the electronegativity equalization method (EEM) and to assess the use of QC properties to predict chemical metabolism by human UDP-glucuronosyltransferase (UGT) isoforms. Various EEM-derived QC molecular descriptors were calculated for known UGT substrates and nonsubstrates. Classification models were developed using support vector machine and partial least squares discriminant analysis. In general, the most predictive models were generated with the support vector machine. Combining QC and 2D descriptors (from previous work) using a consensus approach resulted in a statistically significant improvement in predictivity (to 84%) over both the QC and 2D models and the other methods of combining the descriptors. EEM-derived QC descriptors were shown to be both highly predictive and computationally efficient. It is likely that EEM-derived QC properties will be generally useful for predicting ADMET and physicochemical properties during drug discovery.

  7. Validation of neoclassical bootstrap current models in the edge of an H-mode plasma.

    PubMed

    Wade, M R; Murakami, M; Politzer, P A

    2004-06-11

    Analysis of the parallel electric field E(parallel) evolution following an L-H transition in the DIII-D tokamak indicates the generation of a large negative pulse near the edge which propagates inward, indicative of the generation of a noninductive edge current. Modeling indicates that the observed E(parallel) evolution is consistent with a narrow current density peak generated in the plasma edge. Very good quantitative agreement is found between the measured E(parallel) evolution and that expected from neoclassical theory predictions of the bootstrap current.

  8. Spatiotemporal Bayesian networks for malaria prediction.

    PubMed

    Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap

    2018-01-01

    Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques have been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1-week prediction and 1.7 cases for 6-week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of the prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model the spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand, so we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence. We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
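
    The null-space projection at the heart of NSMC can be sketched as follows: differences between random parameter fields and the calibrated field are projected onto the null space of the observation Jacobian, so each perturbed field leaves the linearized fit to observations unchanged. The Jacobian and fields below are toy stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_obs, n_par = 8, 20
    J = rng.normal(size=(n_obs, n_par))        # Jacobian d(obs)/d(params)
    p_cal = rng.normal(size=n_par)             # calibrated parameter field

    # Right singular vectors beyond the solution space span the null space.
    # Here the solution-space dimensionality is taken as the full rank of J;
    # NSMC may deliberately truncate it lower.
    _, _, Vt = np.linalg.svd(J)
    V_null = Vt[n_obs:].T                      # shape (n_par, n_par - n_obs)

    def nsmc_field():
        dp = rng.normal(size=n_par)            # random field minus calibrated
        return p_cal + V_null @ (V_null.T @ dp)

    fields = [nsmc_field() for _ in range(1000)]
    # Sanity check: simulated observations are unchanged to first order.
    print(np.abs(J @ (fields[0] - p_cal)).max())   # ~1e-15
    ```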

  10. Development and validation of a subject-specific finite element model of the functional spinal unit to predict vertebral strength.

    PubMed

    Lee, Chu-Hee; Landham, Priyan R; Eastell, Richard; Adams, Michael A; Dolan, Patricia; Yang, Lang

    2017-09-01

    Finite element models of an isolated vertebral body cannot accurately predict compressive strength of the spinal column because, in life, compressive load is variably distributed across the vertebral body and neural arch. The purpose of this study was to develop and validate a patient-specific finite element model of a functional spinal unit, and then use the model to predict vertebral strength from medical images. A total of 16 cadaveric functional spinal units were scanned and then tested mechanically in bending and compression to generate a vertebral wedge fracture. Before testing, an image processing and finite element analysis framework (SpineVox-Pro), developed previously in MATLAB using ANSYS APDL, was used to generate a subject-specific finite element model with eight-node hexahedral elements. Transversely isotropic linear-elastic material properties were assigned to vertebrae, and simple homogeneous linear-elastic properties were assigned to the intervertebral disc. Forward bending loading conditions were applied to simulate manual handling. Results showed that vertebral strengths measured by experiment were positively correlated with strengths predicted by the functional spinal unit finite element model with von Mises or Drucker-Prager failure criteria (R² = 0.80-0.87), with areal bone mineral density measured by dual-energy X-ray absorptiometry (R² = 0.54) and with volumetric bone mineral density from quantitative computed tomography (R² = 0.79). Large-displacement non-linear analyses on all specimens did not improve predictions. We conclude that subject-specific finite element models of a functional spinal unit have potential to estimate vertebral strength better than bone mineral density alone.

  11. Fermion masses and mixings and dark matter constraints in a model with radiative seesaw mechanism

    NASA Astrophysics Data System (ADS)

    Bernal, Nicolás; Cárcamo Hernández, A. E.; de Medeiros Varzielas, Ivo; Kovalenko, Sergey

    2018-05-01

    We formulate a predictive model of fermion masses and mixings based on a Δ(27) family symmetry. In the quark sector the model leads to the viable mixing inspired texture where the Cabibbo angle comes from the down quark sector and the other angles come from both up and down quark sectors. In the lepton sector the model generates a predictive structure for charged leptons and, after radiative seesaw, an effective neutrino mass matrix with only one real and one complex parameter. We carry out a detailed analysis of the predictions in the lepton sector, where the model is only viable for inverted neutrino mass hierarchy, predicting a strict correlation between θ23 and θ13. We show a benchmark point that leads to the best-fit values of θ12 and θ13, predicting a specific sin²θ23 ≃ 0.51 (within the 3σ range), a leptonic CP-violating Dirac phase δ ≃ 281.6°, and for neutrinoless double-beta decay mee ≃ 41.3 meV. We then turn to an analysis of the dark matter candidates in the model, which are stabilized by an unbroken ℤ2 symmetry. We discuss the possibility of scalar dark matter, which can generate the observed abundance through the Higgs portal by the standard WIMP mechanism. An interesting possibility arises if the lightest heavy Majorana neutrino is the lightest ℤ2-odd particle. The model can produce a viable fermionic dark matter candidate, but only as a feebly interacting massive particle (FIMP), with the smallness of the coupling to the visible sector protected by a symmetry and directly related to the smallness of the light neutrino masses.

  12. QSAR models for prediction of chromatographic behavior of homologous Fab variants.

    PubMed

    Robinson, Julie R; Karkov, Hanne S; Woo, James A; Krogh, Berit O; Cramer, Steven M

    2017-06-01

    While quantitative structure activity relationship (QSAR) models have been employed successfully for the prediction of small model protein chromatographic behavior, there have been few reports to date on the use of this methodology for larger, more complex proteins. Recently our group generated focused libraries of antibody Fab fragment variants with different combinations of surface hydrophobicities and electrostatic potentials, and demonstrated that the unique selectivities of multimodal resins can be exploited to separate these Fab variants. In this work, results from linear salt gradient experiments with these Fabs were employed to develop QSAR models for six chromatographic systems, including multimodal (Capto MMC, Nuvia cPrime, and two novel ligand prototypes), hydrophobic interaction chromatography (HIC; Capto Phenyl), and cation exchange (CEX; CM Sepharose FF) resins. The models utilized newly developed "local descriptors" to quantify changes around point mutations in the Fab libraries as well as novel cluster descriptors recently introduced by our group. Subsequent rounds of feature selection and linearized machine learning algorithms were used to generate robust, well-validated models with high training set correlations (R² > 0.70) that were well suited for predicting elution salt concentrations in the various systems. The developed models then were used to predict the retention of a deamidated Fab and isotype variants, with varying success. The results represent the first successful utilization of QSAR for the prediction of chromatographic behavior of complex proteins such as Fab fragments in multimodal chromatographic systems. The framework presented here can be employed to facilitate process development for the purification of biological products from product-related impurities by in silico screening of resin alternatives. Biotechnol. Bioeng. 2017;114: 1231-1240. © 2016 Wiley Periodicals, Inc.

  13. Computation of Sound Generated by Flow Over a Circular Cylinder: An Acoustic Analogy Approach

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Cox, Jared S.; Rumsey, Christopher L.; Younis, Bassam A.

    1997-01-01

    The sound generated by viscous flow past a circular cylinder is predicted via the Lighthill acoustic analogy approach. The two-dimensional flow field is predicted using two unsteady Reynolds-averaged Navier-Stokes solvers. Flow field computations are made for laminar flow at three Reynolds numbers (Re = 1000, Re = 10,000, and Re = 90,000) and two different turbulence models at Re = 90,000. The unsteady surface pressures are utilized by an acoustics code that implements Farassat's formulation 1A to predict the acoustic field. The acoustic code is a 3-D code; 2-D results are found by using a long cylinder length. The 2-D predictions overpredict the acoustic amplitude; however, if correlation lengths in the range of 3 to 10 cylinder diameters are used, the predicted acoustic amplitude agrees well with experiment.

  14. Fracture prediction using modified Mohr-Coulomb theory for non-linear strain paths using AA3104-H19

    NASA Astrophysics Data System (ADS)

    Dick, Robert; Yoon, Jeong Whan

    2016-08-01

    Experimental results from uniaxial tensile tests, bi-axial bulge tests, and disk compression tests for a beverage can AA3104-H19 material are presented. The results from the experimental tests are used to determine material coefficients for both Yld2000 and Yld2004 models. Finite element simulations are developed to study the influence of the material model on the predicted earing profile. It is shown that only the Yld2004 model is capable of accurately predicting the earing profile, as the Yld2000 model predicts only 4 ears. Excellent agreement with the experimental data for earing is achieved using the AA3104-H19 material data and the Yld2004 constitutive model. Mechanical tests are also conducted on the AA3104-H19 to generate fracture data under different stress triaxiality conditions. Tensile tests are performed on specimens with a central hole and on notched specimens. Torsion of a double bridge specimen is conducted to generate points near pure shear conditions. The Nakajima test is utilized to produce points in bi-axial tension. The data from the experiments are used to develop the fracture locus in the principal strain space. Mapping from principal strain space to stress triaxiality space, principal stress space, and polar effective plastic strain space is accomplished using a generalized mapping technique. Finite element modeling is used to validate the Modified Mohr-Coulomb (MMC) fracture model in the polar space. Models of a hole expansion during cup drawing and of a cup draw/reverse redraw/expand forming sequence demonstrate the robustness of the modified PEPS fracture theory under nonlinear forming paths and accurately predict the onset of failure. The proposed methods can be widely used for predicting failure in applications that undergo nonlinear strain paths, including rigid packaging and automotive forming.

  15. Temperature-dependent phenology of Plutella xylostella (Lepidoptera: Plutellidae): Simulation and visualization of current and future distributions along the Eastern Afromontane.

    PubMed

    Ngowi, Benignus V; Tonnang, Henri E Z; Mwangi, Evans M; Johansson, Tino; Ambale, Janet; Ndegwa, Paul N; Subramanian, Sevgan

    2017-01-01

    There is a scarcity of laboratory and field-based results showing the movement of the diamondback moth (DBM) Plutella xylostella (L.) across a spatial scale. We studied the population growth of DBM under six constant temperatures to understand and predict population changes along altitudinal gradients and under climate change scenarios. Non-linear functions were fitted to continuously model DBM development, mortality, longevity and oviposition. We compiled the best-fitted functions for each life stage to yield a phenology model, which we stochastically simulated to estimate the life table parameters. Three temperature-dependent indices (establishment, generation and activity) were derived from a logistic population growth model and then coupled to collected current (2013) and downscaled temperature data from AFRICLIM (2055) for geospatial mapping. To measure and predict the impacts of temperature change on the pest's biology, we mapped the indices along the altitudinal gradients of Mt. Kilimanjaro (Tanzania) and Taita Hills (Kenya) and assessed the differences between the 2013 and 2055 climate scenarios. The optimal temperatures for development of DBM were 32.5, 33.5 and 33°C for eggs, larvae and pupae, respectively. Mortality rates increased due to extreme temperatures to 53.3, 70.0 and 52.4% for eggs, larvae and pupae, respectively. The net reproduction rate reached a peak of 87.4 female offspring/female/generation at 20°C. Spatial simulations indicated that survival and establishment of DBM increased with a decrease in temperature, from low to high altitude. However, we observed a higher number of DBM generations at low altitude. The model predicted DBM population growth reduction in the low and medium altitudes by 2055. At higher altitude, it predicted an increase in the level of suitability for establishment with a decrease in the number of generations per year. If climate change occurs as per the selected scenario, DBM infestation may be reduced in the selected region. The study highlights the need to validate these predictions with other interacting factors such as cropping practices, host plants and natural enemies.
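
    As a hedged illustration of the phenology-model building block described above, a nonlinear temperature-dependent development-rate function can be fitted to constant-temperature rearing data; the Briere-1 form used below is one common choice, and the paper's exact functions and data may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def briere1(T, a, T0, TL):
        """Briere-1 development rate (1/day); zero outside (T0, TL)."""
        r = a * T * (T - T0) * np.sqrt(np.clip(TL - T, 0, None))
        return np.where((T > T0) & (T < TL), r, 0.0)

    T = np.array([15., 18., 21., 24., 27., 30., 33.])       # constant temps, C
    rate = np.array([.04, .07, .10, .13, .15, .16, .10])    # e.g. 1/(stage days)
    popt, _ = curve_fit(briere1, T, rate, p0=[1e-4, 10., 36.])
    print("a, T0, TL =", popt)   # the fitted curve's peak is the thermal optimum
    ```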

  16. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
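
    The consensus step can be sketched simply: rank the pool of models under each QA method, then average the ranks and select by mean rank. The scores below are placeholders for real QA outputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_models, n_qa = 150, 14
    scores = rng.random((n_qa, n_models))        # one score row per QA method

    # Rank models within each QA method (0 = best), then average across methods.
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    consensus = ranks.mean(axis=0)
    best = np.argsort(consensus)[:5]
    print("top-5 models by consensus rank:", best)
    ```

    Averaging over many imperfect rankings damps the failure modes of any single QA method, which is the robustness the abstract reports.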

  17. Thrust generation by a heaving flexible foil: Resonance, nonlinearities, and optimality

    NASA Astrophysics Data System (ADS)

    Paraz, Florine; Schouveiler, Lionel; Eloy, Christophe

    2016-01-01

    Flexibility of marine animal fins has been thought to enhance swimming performance. However, despite numerous experimental and numerical studies on flapping flexible foils, there is still no clear understanding of the effect of flexibility and flapping amplitude on thrust generation and swimming efficiency. Here, to address this question, we combine experiments on a model system and a weakly nonlinear analysis. Experiments consist in immersing a flexible rectangular plate in a uniform flow and forcing this plate into a heaving motion at its leading edge. A complementary theoretical model is developed assuming a two-dimensional inviscid problem. In this model, nonlinear effects are taken into account by considering a transverse resistive drag. Under these hypotheses, a modal decomposition of the system motion allows us to predict the plate response amplitude and the generated thrust, as a function of the forcing amplitude and frequency. We show that this model can correctly predict the experimental data on plate kinematic response and thrust generation, as well as other data found in the literature. We also discuss the question of efficiency in the context of bio-inspired propulsion. Using the proposed model, we show that the optimal propeller for a given thrust and a given swimming speed is achieved when the actuating frequency is tuned to a resonance of the system, and when the optimal forcing amplitude scales as the square root of the required thrust.

  18. Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.

    PubMed

    Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup

    2017-05-31

    Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
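
    A sketch of the modeling loop described above: repeated stratified train/test splits (80/40, as in the study), an L1-penalized (LASSO) logistic regression selecting from the clinical and metabolite features, and test-set sensitivity and specificity. The features are simulated stand-ins.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(8)
    X = rng.normal(size=(120, 40))                   # metabolites + clinical
    y = (X[:, 0] - X[:, 1] + rng.normal(0, 2, 120) > 0).astype(int)  # abstinence

    sens, spec = [], []
    for seed in range(5):                            # five independent splits
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=80, stratify=y, random_state=seed)
        clf = make_pipeline(StandardScaler(),
                            LogisticRegression(penalty="l1",
                                               solver="liblinear", C=0.5))
        clf.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        sens.append(tp / (tp + fn)); spec.append(tn / (tn + fp))
    print(f"mean sensitivity {np.mean(sens):.2f}, specificity {np.mean(spec):.2f}")
    ```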

  19. Predicting Pilot Performance in Off-Nominal Conditions: A Meta-Analysis and Model Validation

    NASA Technical Reports Server (NTRS)

    Wickens, C.D.; Hooey, B.L.; Gore, B.F.; Sebok, A.; Koenecke, C.; Salud, E.

    2009-01-01

    Pilot response to off-nominal (very rare) events represents a critical component to understanding the safety of next generation airspace technology and procedures. We describe a meta-analysis designed to integrate the existing data regarding pilot accuracy of detecting rare, unexpected events such as runway incursions in realistic flight simulations. Thirty-five studies were identified and pilot responses were categorized by expectancy, event location, and whether the pilot was flying with a highway-in-the-sky display. All three dichotomies produced large, significant effects on event miss rate. A model of human attention and noticing, N-SEEV, was then used to predict event noticing performance as a function of event salience and expectancy, and retinal eccentricity. Eccentricity is predicted from steady state scanning by the SEEV model of attention allocation. The model was used to predict miss rates for the expectancy, location and highway-in-the-sky (HITS) effects identified in the meta-analysis. The correlation between model-predicted results and data from the meta-analysis was 0.72.
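
    The SEEV component underlying N-SEEV allocates attention so that the probability of attending an area of interest (AOI) grows with its salience, expectancy and value and shrinks with the effort of reaching it. A minimal sketch with illustrative coefficients and AOI values, not the calibrated model:

    ```python
    import numpy as np

    def seev_weights(salience, effort, expectancy, value,
                     coef=(1.0, 1.0, 1.0, 1.0)):
        s, ef, ex, v = (np.asarray(x, float) for x in
                        (salience, effort, expectancy, value))
        score = coef[0]*s - coef[1]*ef + coef[2]*ex + coef[3]*v
        score = np.maximum(score, 0)
        return score / score.sum()          # steady-state dwell probabilities

    # Three AOIs: outside world, primary flight display, navigation display.
    p = seev_weights(salience=[.8, .5, .3], effort=[.2, .1, .4],
                     expectancy=[.3, .9, .5], value=[.9, .8, .4])
    print(p)   # high-dwell AOIs sit at low retinal eccentricity for events there
    ```

    From these dwell probabilities, N-SEEV derives the expected eccentricity of an off-nominal event and hence the chance it is missed.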

  20. EPA Project Updates: DSSTox and ToxCast Generating New Data and Data Linkages for Use in Predictive Modeling

    EPA Science Inventory

    EPAs National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than tr...

  1. Predicting Metabolic Cost of Running with and without Backpack Loads

    DTIC Science & Technology

    1987-01-01

    (Abstract excerpt garbled in the source record.) The recoverable fragments state that measured costs would be higher than those generated by the prediction model and more demanding for the cardiorespiratory system, that these differences could be accounted for, and cite work on the prediction of energy cost in women walking and running in shoes and boots (Ergonomics 29:439).

  2. Predicting in vivo effect levels for repeat-dose systemic toxicity using chemical, biological, kinetic and study covariates.

    PubMed

    Truong, Lisa; Ouedraogo, Gladys; Pham, LyLy; Clouzeau, Jacques; Loisel-Joubert, Sophie; Blanchet, Delphine; Noçairi, Hicham; Setzer, Woodrow; Judson, Richard; Grulke, Chris; Mansouri, Kamel; Martin, Matthew

    2018-02-01

    In an effort to address a major challenge in chemical safety assessment, the need for alternative approaches for characterizing systemic effect levels, a predictive model was developed. Systemic effect levels were curated from ToxRefDB, HESS-DB and COSMOS-DB from numerous study types totaling 4379 in vivo studies for 1247 chemicals. Observed systemic effects in mammalian models are a complex function of chemical dynamics, kinetics, and inter- and intra-individual variability. To address this complex problem, systemic effect levels were modeled at the study level by leveraging study covariates (e.g., study type, strain, administration route) in addition to multiple descriptor sets, including chemical (ToxPrint, PaDEL, and Physchem), biological (ToxCast), and kinetic descriptors. Using random forest modeling with cross-validation and external validation procedures, study-level covariates alone accounted for approximately 15% of the variance, reducing the root mean squared error (RMSE) from 0.96 log10 to 0.85 log10 mg/kg/day and providing a baseline performance metric (lower expectation of model performance). A consensus model developed using a combination of study-level covariates and chemical, biological, and kinetic descriptors explained a total of 43% of the variance with an RMSE of 0.69 log10 mg/kg/day. A benchmark model (upper expectation of model performance) was also developed, with an RMSE of 0.5 log10 mg/kg/day, by incorporating study-level covariates and the mean effect level per chemical. To achieve a representative chemical-level prediction, the minimum study-level predicted and observed effect levels per chemical were compared, reducing the RMSE from 1.0 to 0.73 log10 mg/kg/day, equivalent to 87% of predictions falling within an order of magnitude of the observed value. Although biological descriptors did not improve model performance, the final model was enriched for biological descriptors indicating xenobiotic metabolism gene expression, oxidative stress, and cytotoxicity, demonstrating the importance of accounting for kinetics and non-specific bioactivity in predicting systemic effect levels. Herein, we generated an externally predictive model of systemic effect levels for use as a safety assessment tool and have generated forward predictions for over 30,000 chemicals.
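
    A sketch of the study-level modeling described above: a random forest regression of log10 effect level on study covariates plus chemical descriptors, scored by cross-validated RMSE. The inputs are simulated; the real work used curated ToxRefDB/HESS-DB/COSMOS-DB studies and several descriptor sets.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    n = 500
    covariates = rng.integers(0, 4, size=(n, 3))      # study type, route, strain
    descriptors = rng.normal(size=(n, 20))            # chemical descriptors
    X = np.hstack([covariates, descriptors])
    y = 1.5 + 0.3 * covariates[:, 0] + descriptors[:, 0] + rng.normal(0, .7, n)

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    mse = -cross_val_score(rf, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"CV RMSE: {np.sqrt(mse).mean():.2f} log10 mg/kg/day")
    ```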

  3. Physics-based model for predicting the performance of a miniature wind turbine

    NASA Astrophysics Data System (ADS)

    Xu, F. J.; Hu, J. Z.; Qiu, Y. P.; Yuan, F. G.

    2011-04-01

    A comprehensive physics-based model for predicting the performance of a miniature wind turbine (MWT) for powering wireless sensor systems is proposed in this paper. An approximation of the power coefficient of the turbine rotor was made after the rotor's performance was measured. By incorporating this approximation into an equivalent circuit model proposed according to the operating principles of the MWT, the overall system performance of the MWT was predicted. To demonstrate the prediction, an MWT system comprising a 7.6 cm Thorgren plastic propeller as the turbine rotor and a DC motor as the generator was designed and its performance was tested experimentally. The predicted output voltage, power and system efficiency match well with the tested results, which implies that this study holds promise for estimating and optimizing the performance of the MWT.
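
    A back-of-the-envelope version of the physics chain described above: rotor power from P = 0.5 ρ A Cp v³ with an assumed power coefficient, feeding a simple DC-generator equivalent circuit (EMF source with internal resistance driving a load). All parameter values are illustrative, not the paper's measurements.

    ```python
    import numpy as np

    rho = 1.225                                # air density, kg/m^3
    D = 0.076                                  # 7.6 cm rotor diameter, m
    A = np.pi * (D / 2) ** 2                   # swept area, m^2
    Cp = 0.18                                  # assumed small-rotor power coefficient
    k_e, R_int, R_load = 2.0e-3, 30.0, 100.0   # EMF constant (V.s/rad), ohms

    def mwt_power(v_wind, tsr=2.5):
        """Return (rotor power, load power) in watts for a given wind speed."""
        p_rotor = 0.5 * rho * A * Cp * v_wind ** 3
        omega = tsr * v_wind / (D / 2)         # rotor speed from tip-speed ratio
        emf = k_e * omega                      # open-circuit generator voltage
        i = emf / (R_int + R_load)             # DC equivalent circuit current
        return p_rotor, i ** 2 * R_load

    for v in (3.0, 5.0, 8.0):
        pm, pe = mwt_power(v)
        print(f"v = {v} m/s: rotor {pm * 1e3:.1f} mW, load {pe * 1e3:.2f} mW")
    ```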

  4. Effect of aerodynamic detuning on supersonic rotor discrete frequency noise generation

    NASA Technical Reports Server (NTRS)

    Hoyniak, D.; Fleeter, Sanford

    1988-01-01

    A mathematical model was developed to predict the effect of alternate blade circumferential aerodynamic detuning on the discrete frequency noise generation of a supersonic rotor. Aerodynamic detuning was shown to have a small beneficial effect on the noise generation for reduced frequencies less than 3. For reduced frequencies greater than 3, however, the aerodynamic detuning either increased or decreased the noise generated, depending on the value of the reduced frequency.

  5. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. ModelPROT and ModelPHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH-prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted ModelPHOT mean dose minus the predicted ModelPROT mean dose (ΔPrediction) for combined OARs was ≥ 6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted ModelPROT/ModelPHOT mean dose R² was 0.95/0.98. Generally, achieved mean doses for ModelPHOT/ModelPROT KBPs were respectively lower/higher than predicted. Comparing ModelPROT/ModelPHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by less than 2 Gy on average. ΔPrediction ≥ 6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
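
    The selection rule illustrated in the abstract reduces to a simple threshold on the summed predicted mean-dose gain across OARs. A sketch, with the prediction outputs as placeholders for the RapidPlan DVH models and hypothetical OAR names:

    ```python
    def select_for_protons(oar_doses_photon, oar_doses_proton, threshold_gy=6.0):
        """Each argument: dict of OAR name -> predicted mean dose in Gy."""
        gain = sum(oar_doses_photon[o] - oar_doses_proton[o]
                   for o in oar_doses_photon)
        return gain >= threshold_gy

    photon = {"parotid_l": 26.0, "parotid_r": 24.0, "pcm_sup": 38.0}
    proton = {"parotid_l": 21.5, "parotid_r": 22.0, "pcm_sup": 33.5}
    print(select_for_protons(photon, proton))   # True: predicted gain is 11 Gy
    ```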

  6. Differential Impact of Serial Measurement of Nonplatelet Thromboxane Generation on Long-Term Outcome After Cardiac Surgery.

    PubMed

    Kakouros, Nikolaos; Gluckman, Tyler J; Conte, John V; Kickler, Thomas S; Laws, Katherine; Barton, Bruce A; Rade, Jeffrey J

    2017-11-02

    Systemic thromboxane generation, not suppressible by standard aspirin therapy and likely arising from nonplatelet sources, increases the risk of atherothrombosis and death in patients with cardiovascular disease. In the RIGOR (Reduction in Graft Occlusion Rates) study, greater nonplatelet thromboxane generation occurred early compared with late after coronary artery bypass graft surgery, although only the latter correlated with graft failure. We hypothesize that a similar differential association exists between nonplatelet thromboxane generation and long-term clinical outcome. Five-year outcome data were analyzed for 290 RIGOR subjects taking aspirin with suppressed platelet thromboxane generation. Multivariable modeling was performed to define the relative predictive value of the urine thromboxane metabolite 11-dehydrothromboxane B2 (11-dhTXB2), measured 3 days versus 6 months after surgery, on the composite end point of death, myocardial infarction, revascularization or stroke, and on death alone. 11-dhTXB2 measured 3 days after surgery did not independently predict outcome, whereas 11-dhTXB2 > 450 pg/mg creatinine measured 6 months after surgery predicted the composite end point (adjusted hazard ratio, 1.79; P = 0.02) and death (adjusted hazard ratio, 2.90; P = 0.01) at 5 years compared with lower values. Additional modeling revealed that 11-dhTXB2 measured early after surgery associated with several markers of inflammation, in contrast to 11-dhTXB2 measured 6 months later, which highly associated with oxidative stress. Long-term nonplatelet thromboxane generation after coronary artery bypass graft surgery is a novel risk factor for 5-year adverse outcome, including death. In contrast, nonplatelet thromboxane generation in the early postoperative period appears to be driven predominantly by inflammation and did not independently predict long-term clinical outcome. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  7. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualini, Donatella

    This manuscript briefly describes a statistical approach to generating synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling of hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies for extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.
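
    For readers unfamiliar with the purely stochastic category, the sketch below generates a toy synthetic track as an autoregressive random walk seeded from historical genesis points. It is a generic illustration of the approach, not the SynHurG algorithm, and every numeric parameter is an assumption.

    ```python
    # Generic stochastic track generator: sample a genesis point, then evolve
    # heading and translation speed as correlated random walks.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical historical genesis points (lon, lat) in the Atlantic basin.
    genesis = np.array([[-45.0, 12.0], [-52.0, 14.0], [-60.0, 11.0]])

    def synthetic_track(n_steps=40, dt_hours=6.0):
        lon, lat = genesis[rng.integers(len(genesis))]
        heading = np.deg2rad(160.0)  # CCW from east: roughly west-northwest motion
        speed = 5.0                  # translation speed, m/s
        track = [(lon, lat)]
        for _ in range(n_steps):
            # AR(1)-style perturbations keep motion correlated between steps.
            heading += rng.normal(0.0, 0.15)
            speed = max(1.0, speed + rng.normal(0.0, 0.5))
            step_km = speed * 3.6 * dt_hours
            lat += (step_km / 111.0) * np.sin(heading)
            lon += (step_km / (111.0 * np.cos(np.deg2rad(lat)))) * np.cos(heading)
            track.append((lon, lat))
        return np.array(track)

    print(synthetic_track()[:3])  # first few (lon, lat) points of one track
    ```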

  8. Refining Prediction in Treatment-Resistant Depression: Results of Machine Learning Analyses in the TRD III Sample.

    PubMed

    Kautzky, Alexander; Dold, Markus; Bartova, Lucie; Spies, Marie; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Fabbri, Chiara; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried

    The study objective was to generate a prediction model for treatment-resistant depression (TRD) using machine learning with a large set of 47 clinical and sociodemographic predictors of treatment outcome. In total, 552 patients diagnosed with major depressive disorder (MDD) according to DSM-IV criteria were enrolled between 2011 and 2016. TRD was defined as failure to reach response to antidepressant treatment (response being characterized by a Montgomery-Asberg Depression Rating Scale [MADRS] score below 22) after at least 2 antidepressant trials of adequate length and dosage. RandomForest (RF) was used for predicting treatment outcome phenotypes in a 10-fold cross-validation. The full model with 47 predictors yielded an accuracy of 75.0%. When the number of predictors was reduced to 15, accuracies between 67.6% and 71.0% were attained for different test sets. The most informative predictors of treatment outcome were baseline MADRS score for the current episode; impairment of family, social, and work life; the timespan between first and last depressive episode; severity; suicidal risk; age; body mass index; and the number of lifetime depressive episodes as well as lifetime duration of hospitalization. With the application of the machine learning algorithm RF, an efficient prediction model with an accuracy of 75.0% for forecasting treatment outcome could be generated, thus surpassing the predictive capabilities of clinical evaluation. We also supply a simplified algorithm of 15 easily collected clinical and sociodemographic predictors, obtainable within approximately 10 minutes, which reached an accuracy of 70.6%. We are therefore confident that our model will be validated in other samples, advancing an accurate prediction model fit for clinical use in TRD. © Copyright 2017 Physicians Postgraduate Press, Inc.
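
    The pipeline described, a random forest evaluated with 10-fold cross-validation and then reduced to the most informative predictors, maps directly onto standard tooling. A minimal scikit-learn sketch follows; the data file, column names, and hyperparameters are hypothetical, not those used by the authors.

    ```python
    # Minimal sketch of the modeling approach described above: a random forest
    # classifier evaluated with 10-fold cross-validation.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("trd_cohort.csv")          # hypothetical data file
    X = df.drop(columns=["trd_outcome"])        # clinical/sociodemographic predictors
    y = df["trd_outcome"]                       # 1 = treatment-resistant, 0 = responder

    clf = RandomForestClassifier(n_estimators=500, random_state=42)
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"10-fold CV accuracy: {scores.mean():.1%}")

    # Feature importances can guide the reduction to a 15-predictor model.
    clf.fit(X, y)
    top15 = pd.Series(clf.feature_importances_, index=X.columns).nlargest(15)
    print(top15)
    ```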

  9. Icing Analysis of a Swept NACA 0012 Wing Using LEWICE3D Version 3.48

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.

    2014-01-01

    Icing calculations were performed for a NACA 0012 swept wing tip using LEWICE3D Version 3.48 coupled with the ANSYS CFX flow solver. The calculated ice shapes were compared to experimental data generated in the NASA Glenn Icing Research Tunnel (IRT). The IRT tests were designed to test the performance of the LEWICE3D ice void density model, which was developed to improve the prediction of swept wing ice shapes. Icing tests were performed for a range of temperatures at two different droplet inertia parameters and two different sweep angles. The predicted mass agreed well with the experiment, with an average difference of 12%. The LEWICE3D ice void density model under-predicted void density by an average of 30% for the large inertia parameter cases and by 63% for the small inertia parameter cases. This under-prediction in void density resulted in an over-prediction of ice area by an average of 115%. The LEWICE3D ice void density model produced a larger average area difference from experiment than the standard LEWICE density model, which does not account for the voids in the swept wing ice shape (115% and 75%, respectively), but it produced ice shapes that were deemed more appropriate because they were conservative (larger than experiment). Major contributors to the overly conservative ice shape predictions were deficiencies in the leading edge heat transfer and the sensitivity of the void ice density model to the particle inertia parameter. The scallop features present on the ice shapes were thought to generate interstitial flow and horseshoe vortices that enhance the leading edge heat transfer. A set of changes to improve the leading edge heat transfer and the void density model was tested. The changes improved the ice shape predictions considerably. More work needs to be done to evaluate the performance of these modifications for a wider range of geometries and icing conditions.

  10. Parental SES, Communication and Children’s Vocabulary Development: A 3-Generation Test of the Family Investment Model

    PubMed Central

    Sohr-Preston, Sara L.; Scaramella, Laura V.; Martin, Monica J.; Neppl, Tricia K.; Ontai, Lenna; Conger, Rand

    2012-01-01

    This 3-generation, longitudinal study evaluated a family investment perspective on family socioeconomic status (SES), parental investments in children, and child development. The theoretical framework was tested for first generation parents (G1), their children (G2), and for the children of the second generation (G3). G1 SES was expected to predict clear and responsive parental communication. Parental investments were expected to predict educational attainment and parenting for G2 and vocabulary development for G3. For the 139 families in the study, data were collected when G2 were adolescents and early adults and their oldest biological child (G3) was 3–4 years of age. The results demonstrate the importance of SES and parental investments for the development of children and adolescents across multiple generations. PMID:23199236

  11. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    NASA Technical Reports Server (NTRS)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

    A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for a curved orthogrid panel typical of launch vehicle skin structures. Several test article configurations were produced by adding component equipment of differing weights to the flight-like vehicle panel. The test data were used to anchor computational predictions of a variety of spatially distributed responses, including acceleration, strain, and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite-element-based modal solutions and test-derived damping estimates. A diffuse acoustic field model was employed to describe the assumed correlation of phased input sound pressures across the energized panel. This application demonstrates the ability to quickly and accurately predict a variety of responses for acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component, and model form. Convergence metrics include spectral densities and cumulative root-mean-square (RMS) functions for acceleration, velocity, displacement, strain, and interface force. Minimum frequencies for response convergence were established, as were recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software modules developed for the RPTF method can be easily adapted for quick replacement of the diffuse acoustic field with other pressure field models, for example a turbulent boundary layer (TBL) model suitable for vehicle ascent. Wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this type of environment. Finally, component vibration environments for design were developed from the measured and predicted responses and compared with those derived from traditional techniques such as Barrett scaling methods for unloaded and component-loaded panels.
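
    The core of the RPTF approach, as described, is that a transfer function maps the input pressure PSD to a response PSD, from which cumulative RMS convergence curves follow. The sketch below illustrates that chain with a hypothetical single-mode frequency response function; none of the numbers come from the MSFC test data.

    ```python
    # Illustrative response/pressure transfer function (RPTF) chain:
    # response PSD = |H(f)|^2 * input pressure PSD, then cumulative RMS.
    import numpy as np

    f = np.linspace(20.0, 2000.0, 2000)            # Hz
    S_pp = np.full_like(f, 1e-2)                   # input pressure PSD, Pa^2/Hz

    # Hypothetical single-mode FRF (accel per unit pressure), 200 Hz, 2% damping.
    fn, zeta, gain = 200.0, 0.02, 50.0
    H = gain / (1 - (f / fn) ** 2 + 2j * zeta * (f / fn))

    S_yy = np.abs(H) ** 2 * S_pp                   # response PSD, (m/s^2)^2/Hz
    cum_rms = np.sqrt(np.cumsum(S_yy) * (f[1] - f[0]))
    print(f"Overall RMS response: {cum_rms[-1]:.2f} m/s^2")
    ```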

  12. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    NASA Technical Reports Server (NTRS)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

    A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for component-loaded curved orthogrid panels typical of launch vehicle skin structures. The test data were used to anchor computational predictions of a variety of spatially distributed responses, including acceleration, strain, and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite-element-based modal solutions and test-derived damping estimates. A diffuse acoustic field model was applied to correlate the measured input sound pressures across the energized panel. This application quantifies the ability to quickly and accurately predict a variety of responses for acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component, and model form. Convergence metrics include spectral densities and cumulative root-mean-square (RMS) functions for acceleration, velocity, displacement, strain, and interface force. Minimum frequencies for response convergence were established, as were recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software developed for the RPTF method allows easy replacement of the diffuse acoustic field with other pressure fields, such as a turbulent boundary layer (TBL) model suitable for vehicle ascent. Structural responses using a TBL model were demonstrated, and wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this environment. Finally, design load factors were developed from the measured and predicted responses and compared with those derived from traditional techniques such as historical Mass Acceleration Curves and Barrett scaling methods for acreage and component-loaded panels.

  13. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.

    PubMed

    Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

    Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield and sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops and is open and accessible to the entire plant biology community. The major challenges involved in both the development and deployment of a shared, multi-scale modeling platform, summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.

  14. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform

    PubMed Central

    Marshall-Colon, Amy; Long, Stephen P.; Allen, Douglas K.; Allen, Gabrielle; Beard, Daniel A.; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A. J.; Cox, Donna J.; Hart, John C.; Hirst, Peter M.; Kannan, Kavya; Katz, Daniel S.; Lynch, Jonathan P.; Millar, Andrew J.; Panneerselvam, Balaji; Price, Nathan D.; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G.; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J.; Voit, Eberhard O.; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

    Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield and sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops and is open and accessible to the entire plant biology community. The major challenges involved in both the development and deployment of a shared, multi-scale modeling platform, summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop. PMID:28555150

  15. Evaluation of runoff prediction from WEPP-based erosion models for harvested and burned forest watersheds

    Treesearch

    S. A. Covert; P. R. Robichaud; W. J. Elliot; T. E. Link

    2005-01-01

    This study evaluates runoff predictions generated by GeoWEPP (Geo-spatial interface to the Water Erosion Prediction Project) and a modified version of WEPP v98.4 for forest soils. Three small (2 to 9 ha) watersheds in the mountains of the interior Northwest were monitored for several years following timber harvest and prescribed fires. Observed climate variables,...

  16. Predicting the Spatial Distribution of Aspen Growth Potential in the Upper Great Lakes Region

    Treesearch

    Eric J. Gustafson; Sue M. Lietz; John L. Wright

    2003-01-01

    One way to increase aspen yields is to produce aspen on sites where aspen growth potential is highest. Aspen growth rates are typically predicted using site index, but this is impractical for landscape-level assessments. We tested the hypothesis that aspen growth can be predicted from site and climate variables and generated a model to map the spatial variability of...

  17. Prediction of energy balance and utilization for solar electric cars

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    Solar irradiation and ambient temperature vary by region, season, and time of day, which directly affects the performance of a solar-powered car system. In this paper, the solar electric car model used was based in Xi’an. First, meteorological data are modeled to simulate changes in solar irradiation and ambient temperature; the temperature change of the solar cell is then calculated from the thermal equilibrium relation. Combined with models of driving resistance and solar cell power generation, the system is simulated under the varying radiation conditions of a day. Daily power generation and the cruising range of the solar electric car can be predicted by calculating solar cell efficiency and power. This theoretical approach and the research results can be used for future solar electric car program design and optimization.
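
    A minimal sketch of the final prediction step, computing daily generation from an irradiance profile and a temperature-corrected cell efficiency, is shown below. The linear temperature-coefficient efficiency model is a common textbook form; all parameter values and the idealized irradiance profile are assumptions, not values from the paper.

    ```python
    # Sketch: daily solar-cell generation from irradiance and cell temperature,
    # using a reference efficiency with a linear temperature coefficient.
    import numpy as np

    AREA_M2 = 4.0          # panel area on the car roof (assumed)
    ETA_REF = 0.22         # module efficiency at 25 degC (assumed)
    BETA = 0.0045          # efficiency loss per degC above 25 degC (assumed)

    hours = np.arange(6, 19)                          # daylight hours
    G = 900.0 * np.sin(np.pi * (hours - 6) / 12.0)    # irradiance, W/m^2 (idealized)
    T_cell = 25.0 + 0.03 * G                          # crude cell-temperature model

    eta = ETA_REF * (1.0 - BETA * (T_cell - 25.0))    # temperature-corrected efficiency
    P = eta * G * AREA_M2                             # instantaneous power, W

    daily_wh = np.trapz(P, hours)                     # integrate over the day, Wh
    print(f"Predicted daily generation: {daily_wh:.0f} Wh")
    ```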

  18. DEM generation and tidal deformation detection for Sulzberger Ice Shelf, West Antarctica using SAR interferometry

    USGS Publications Warehouse

    Baek, S.; Kwoun, Oh-Ig; Bassler, M.; Lu, Z.; Shum, C.K.; Dietrich, R.

    2004-01-01

    In this study we generated a relative digital elevation model (DEM) over the Sulzberger Ice Shelf, West Antarctica, using ERS-1/2 synthetic aperture radar (SAR) interferometry data. Four repeat-pass differential interferograms were used to locate the grounding zone and to classify the study area. The interferometrically derived DEM was compared with a laser altimetry profile from ICESat. The standard deviation of the relative height difference is 5.12 m over the total length of the profile and 1.34 m at its center. The magnitude and direction of tidal changes estimated from the interferograms were compared with tidal differences predicted by four ocean tide models. The tidal deformation measured by InSAR is -16.7 cm, which agrees to within 3 cm with the model predictions.
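
    Comparisons like the one above rest on the standard InSAR conversion from differential phase to line-of-sight displacement, d = -λφ/(4π). The sketch below applies it with the ERS C-band wavelength; the phase value and tide-model numbers are hypothetical, and the sign convention varies between processors.

    ```python
    # Convert differential interferometric phase to line-of-sight displacement
    # and compare against (hypothetical) ocean tide model predictions.
    import numpy as np

    WAVELENGTH_M = 0.0566   # ERS C-band radar wavelength, ~5.66 cm

    def los_displacement_cm(phase_rad):
        return -WAVELENGTH_M * phase_rad / (4.0 * np.pi) * 100.0

    phi = 37.1                                     # hypothetical unwrapped phase, rad
    d_insar = los_displacement_cm(phi)             # ~ -16.7 cm
    tide_models_cm = [-15.2, -17.5, -14.9, -18.0]  # hypothetical model predictions

    for d_model in tide_models_cm:
        print(f"InSAR {d_insar:.1f} cm vs model {d_model:.1f} cm, "
              f"difference {abs(d_insar - d_model):.1f} cm")
    ```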

  19. The Generation of Harmonic Distortion and Distortion Products in a Computational Model of the Cochlea

    NASA Astrophysics Data System (ADS)

    Meaud, Julien; Li, Yizeng; Grosh, Karl

    2011-11-01

    It is generally agreed that the nonlinear response of the cochlea is due to the forward transduction of the outer hair cell (OHC) hair bundle (HB) and subsequent alteration of the active force applied to the cochlear structures, including the basilar membrane (BM). A mechanical-acoustical-electrical model of the cochlea with a three-dimensional fluid representation, and feedback from OHC somatic motility coupled to nonlinear HB mechanotransduction, is used to predict nonlinear distortion of the BM response to acoustic stimulus. An efficient alternating frequency-time (AFT) scheme is implemented to solve for the nonlinear stationary dynamics of the cochlea. The model is used to predict the location of maximum generation of nonlinear distortion during pure-tone and two-tone stimulation, as well as the propagation of the distortion components on the BM.
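
    The alternating frequency-time idea is simple to state: harmonic coefficients are transformed to the time domain, the nonlinearity is evaluated pointwise, and the result is transformed back. The sketch below demonstrates that step with a generic saturating nonlinearity and shows distortion products emerging at the expected combination frequencies; it is a schematic of the technique, not the authors' cochlear model.

    ```python
    # Generic alternating frequency-time (AFT) step for harmonic balance.
    import numpy as np

    def aft_nonlinear_harmonics(X, nonlinearity, n_time=256):
        """X: complex spectrum (rfft layout) of the response over one period."""
        x_t = np.fft.irfft(X, n=n_time)        # frequency -> time
        f_t = nonlinearity(x_t)                # evaluate nonlinearity pointwise
        return np.fft.rfft(f_t)                # time -> frequency

    saturating = lambda x: np.tanh(3.0 * x)    # stand-in saturating transduction

    # Two-tone input at harmonics 9 and 11 of the period: expect distortion
    # products at combination orders such as 2*9-11 = 7 and 2*11-9 = 13.
    X = np.zeros(129, dtype=complex)
    X[9], X[11] = 0.5 * 128, 0.5 * 128         # rfft scaling for n_time=256
    F = aft_nonlinear_harmonics(X, saturating)
    print(np.round(np.abs(F[[7, 9, 11, 13]]) / 128, 3))  # distortion at 7 and 13
    ```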
